
2013.07 - Current State Assessment v1 28 Published

  • Writer: Pavan Raja
  • Apr 8, 2025
  • 44 min read

Summary:

The document outlines a significant effort to improve the operational model of the client's SOC, utilizing HP's ArcSight technology. The proposed Target Operating Model (TOM) aims to transition from voice operations to data operations, with three implementation phases: Housekeeping, Governance Model Enhancement, and Solution Design.

### Recommendations Based on Assessment

1. **Enhanced Alignment Between the SOC and the CTSO Board:**

  • Establish clear communication channels so that the expectations of both parties are well understood and aligned, including regular meetings and clear documentation of use case approvals.

  • Define pre-approved operational exceptions and responses to imminent threats to improve the client's response capabilities during emergencies.

  • Empower the SOC as a trusted advisor to the CTSO board by providing monthly analytical threat profiles and business metrics.

### Current Operational Model Assessment

The current model is broken and not aligned with expectations for an organization of this size and nature, indicating inefficiencies in communication, decision-making, and resource allocation. The proposed TOM addresses these issues through a structured transformation process.

### Future State Model Implementation

1. **Housekeeping Phase:** Improves efficiency by enhancing visibility and communication with stakeholders, setting up tactical reporting, defining roles and responsibilities, developing SCITT, using proactive intelligence, refining RCA processes, and enhancing event feed monitoring.

2. **Governance Model Enhancement Phase:** Rebuilds the SOC from basic functions to align with the proposed TOM by establishing a more robust governance structure that includes all stakeholders, ensuring clear visibility and communication paths for seamless collaboration.

3. **Solution Design Phase:** Reviews and adjusts the overall architecture and capabilities of the system to meet future business dynamics, including creating a data model, defining use cases, developing an architecture, and aligning devices with those use cases.

### Data Modeling for the ArcSight SIEM Solution (Appendix A)

The appendix outlines the need for a phased approach to automating a data model for the ArcSight SIEM solution, given incomplete information in the CMDB and IPAM databases. Key objectives include using the customer's CMDB as the master database for asset management, integrating its IPAM for network management, and incorporating TCP/UDP service inventory and vulnerability data from McAfee solutions.

### Data Model Automation Process

  • **Creating an ArcSight asset database:** Consolidates data from multiple sources such as the CMDB and DNS, using the hostname as the key index value to relate IP address information. Data from IPAM is imported to add subnet and category information for ArcSight asset categories.

  • **Export and import:** Assets are exported in ArcSight Archive Bundle (ARB) format and imported into ArcSight using a grep/Perl script.

  • **Manual categorization:** Zones such as VPN DHCP ranges are updated manually in the IPAM database, and assets then inherit these categorizations.

  • **Vulnerability data import:** Vulnerability data is imported based on the asset's IP address and key index value.

### Recommendations for Improvement

1. **Implement a clear communication strategy:** Ensure a consistent flow of information between the SOC and the CTSO board, with clear documentation of use case approvals and decision-making processes.

2. **Enhance emergency response capabilities:** Define pre-approved operational exceptions and emergency response protocols to improve the client's ability to respond effectively during crises.

3. **Establish a trusted advisor role for the SOC:** Empower the SOC to serve as a trusted advisor, providing regular threat profiles and business metrics to support strategic decision-making within the CTSO board.

4. **Streamline operational processes:** Refine roles and responsibilities, improve visibility through tactical reporting, and implement tools like SCITT for better intelligence gathering and proactive use of information.

5. **Automate data model development:** Continue refining the data model automation process based on available information, and integrate it with external systems to enhance the ArcSight SIEM solution's capabilities.

### Conclusion

The proposed TOM is a comprehensive plan to transform the SOC into a more efficient, effective, and aligned operational unit capable of adapting to future business dynamics. The recommendations focus on enhancing alignment, improving response mechanisms, and leveraging technology-driven solutions for better management of security information and threat intelligence.
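
The consolidation step described in the automation process above can be sketched in a few lines. This is a minimal illustration only: the field names, sample records, and category values are hypothetical, not the customer's actual CMDB/IPAM schema.

```python
"""Sketch of the Appendix A consolidation step: join CMDB rows to DNS
records on hostname (the key index value), then attach IPAM subnet
categories by IP address."""
import ipaddress

# Illustrative sample data standing in for CMDB, DNS, and IPAM exports.
cmdb = [
    {"hostname": "web01", "owner": "ecommerce", "criticality": "High"},
    {"hostname": "db01", "owner": "finance", "criticality": "Very High"},
]
dns = {"web01": "10.1.2.10", "db01": "10.1.3.20"}            # hostname -> IP
ipam = {"10.1.2.0/24": "DMZ", "10.1.3.0/24": "Server VLAN"}  # subnet -> category

def build_assets(cmdb, dns, ipam):
    nets = {ipaddress.ip_network(s): cat for s, cat in ipam.items()}
    assets = []
    for row in cmdb:
        ip = dns.get(row["hostname"])   # join on the key index value
        if ip is None:
            continue                    # incomplete data: leave for manual review
        addr = ipaddress.ip_address(ip)
        category = next((cat for net, cat in nets.items() if addr in net),
                        "Uncategorized")
        assets.append({**row, "ip": ip, "category": category})
    return assets
```

A real implementation would then serialize this table into the ARB/CSV format expected by the import tooling; the join logic stays the same.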

Details:

The document titled "HP Security Services - Current State Assessment" provides a comprehensive analysis of the security operations within a client's organization, covering visibility and coverage, correlation rules, data modeling, engagement processes, tooling and purpose, service offerings, metrics reporting, roles and responsibilities, and inputs/outputs.

**Executive Summary:** The report opens with an overview of key findings and recommendations derived from the assessment, highlighting improvements needed across various security areas.

**Current State Assessment (Section 2):** Outlines significant issues such as lack of visibility into critical systems, outdated correlation rules, inadequate data modeling for threat detection, and gaps in service offerings.

**Use Cases Current State (Section 3):** Provides detailed analysis of visibility/coverage, correlation rules, and data modeling. Specific findings include poor coverage due to unmonitored assets and a lack of standardization in rule creation, which hampers effective correlation. Data modeling is found wanting, with outdated models that do not support modern threat detection needs.

**Engagement (Section 4):** The assessment follows an engagement lifecycle from initial review through to operations transition, highlighting key milestones such as the pilot phase and technical alignment. Recommendations suggest enhancing staff training programs and improving tool integration.

**Tooling & Purpose (Section 5):** Identifies a need to update current tool sets and enhance their capabilities to better align with evolving threats and operational needs.

**Services (Section 6):** Highlights an incomplete services catalog, which should be expanded based on the gaps identified during the visibility/coverage analysis.
**Metrics Reporting and KPIs (Section 7):** Weaknesses are found in both organizational and local market-specific KPI reporting mechanisms. Recommendations include enhancing metric collection tools and increasing report granularity to better track performance against objectives.

**Roles and Responsibilities (Section 8):** Clarifies roles within the SOC, identifying mismatches between expected duties and actual job descriptions. Suggestions include clearer delineation of roles to enhance efficiency and accountability.

**Inputs to the SOC (Section 9):** Describes the primary channels through which the SOC receives inputs, such as emails, phone calls, and external threat intelligence.

**Outputs from the SOC (Section 10):** Lists the alerts generated, incidents managed, and other outputs typically produced by a SOC.

Overall, the document serves as a roadmap for enhancing the client's security posture, with actionable insights into areas requiring immediate attention based on identified shortcomings in visibility, tool capabilities, and staff competencies.

The current state assessment of the client's Security Operations Center (SOC) was conducted by HP's security consulting practice in June and July 2013. Its main purpose was to document existing operations and provide recommendations for transforming the SOC into a mature and capable security operations center. Interviews with skilled, motivated team members highlighted positive aspects such as cohesive teams, cooperation with HP, a passion for quality, innovative thinking under constraints, adaptability to necessary change, and an efficient SIEM solution without performance issues. However, deficiencies were identified in processes, people, and technology that hindered the SOC's capabilities.
Recommendations focused on enhancing these areas through governance model improvements, better-defined processes, effective operational models, and strategic use of tools and technology.

The summary highlights systemic issues within the client SOC that hinder its ability to operate efficiently and effectively, particularly in incident detection and management. Key challenges include a lack of internal threat visibility due to limited data feeds from local markets' systems, poor alignment between management layers, inadequate strategic planning, and ineffective communication among the various interfaces. There are also operational issues: insufficient tools for centralized asset management and case handling, limited access to infrastructure information, and an outdated data model within the SIEM (Security Information and Event Management) tool.

To address these challenges, the recommendations aim to enhance synergy between management levels, establish a clear strategic roadmap, improve communication channels, and upgrade technological tools such as the SIEM. These steps are intended to increase visibility into internal threats, enable more effective resource allocation, reduce costs by focusing teams on specialized activities, and ultimately deliver significant improvements in the SOC's overall capabilities.

The study evaluates how to enhance the effectiveness of the organization's security operations capability by aligning people, processes, and technology with information security best practices. This aligns business objectives with risk and aims for high maturity in operational efficiency and adaptability.
The assessment process involved several key steps, including a Current State Assessment (CSA) to evaluate the external interactions within the organization and its local markets. During the discovery phase, detailed meetings were held with relevant internal team members, and provided documentation was reviewed to identify critical issues across the analyzed areas. The findings are summarized in Table 1, which highlights the most significant concerns by area.

The primary objective of reaching Level-3 maturity is an effective security operations group that operates efficiently and adapts swiftly to changes in its environment through repeatable processes and procedures. This targeted approach positions the organization to manage information security risk more effectively, and it aligns with the broader goals of cost savings and regulatory risk reduction depicted in Figure 1, which shows a correlation between increased capability and reduced cost and regulatory risk.

The ability to consolidate monitoring costs, and even to offer internal managed security services to other technology teams within the organization, is highlighted as an initial but significant achievement for the Phase-2 implementation, positioning the SOC to serve effectively as an MSSP (Managed Security Services Provider). This strategic shift not only aids cost containment but also enhances the SOC's overall capability by fostering collaboration and expertise sharing among internal departments.

In summary, the study aims to raise the security operations center's maturity level through a structured assessment of people, processes, and technology integration, targeting a high-performance operational state capable of handling dynamic environments effectively.
Measured against the Carnegie Mellon Software Engineering Institute's Capability Maturity Model Integration, the Global Security Operations Center has achieved a stable level of operational maturity in its internal processes. Despite this, there are significant gaps in its use cases due to technical limitations, data quality issues, or a lack of available data to support those use cases effectively. As a result, the SOC struggles to provide meaningful value to the markets, as it currently has limited content for its use cases.

The markets, and the organization in general, often lack a clear understanding of what SIEM (Security Information and Event Management) tools do or how important good content is to these systems. The SOC must work with the log feeds provided by the markets, which limits its ability to execute use cases effectively. Analysts are less effective and less timely due to inadequate situational awareness, as they are not always privy to crucial incident details held by market teams. During market onboarding there is little focus on risk assessment or known use cases, leading to the misconception that providing basic data feeds ensures security.

A review of the current use cases revealed gaps in several areas:

1. Internal threats
2. Business use cases
3. Application layer
4. Incident handling
5. Cross-device correlation
6. Zero-day threats

To address these gaps, the operations team is developing ongoing updates to the use cases, with input from Level 2 personnel who discuss incidents and reports with the markets monthly. New use cases are also created based on market feedback and requests. However, there is a persistent issue with data feed monitoring and consistency, as the operations team does not receive alerts when issues arise.
The document outlines potential issues within the client's SOC related to event feeds and use cases, particularly the reactive behavior of the AV (antivirus), Beaconing, and Scanning use cases. Key findings include:

1. **Lack of comprehensive monitoring:** If an event feed is lost, the SOC only monitors the super connector feed status, posing a significant risk of feed outages and loss of real-time correlation capability. Issues may go undetected unless noticed directly by the client.

2. **Use case development and enhancements:** The original use cases were developed during the system's implementation and have been enhanced over time. However, they are primarily technology-specific, focusing on AV, Beaconing, and Scanning, without the business-services input needed for a more holistic approach.

3. **Catalogue management:** A use case catalogue is maintained on a wiki, broken down into 12 categories. The document does not list the specific categories or details about the use cases beyond their reactive nature and technology-driven development.

4. **Data relevance and alignment:** Some feed integrations lack a clear purpose, leading to misalignment between feeds and existing use cases. A significant portion of the data is non-meaningful as a result.

5. **Insufficient business input:** Use cases have been developed without adequate input from management or business services, resulting in a primarily reactive approach that may not detect malicious behavior unless it precisely matches predefined criteria.

6. **Recommendations for improvement:** Future use case development should involve management and business services much more closely, to better align technology coverage with overall business risk. This would help avoid overly technical, device-specific solutions and foster a more proactive approach to security detection.

The document then outlines strategies for improving visibility and managing risk across the various markets and organizations, focusing on use cases and related risk management. Here is a summary of the key points:

1. **Use Case Development**:

  • Current issue: Use case outputs provide minimal value at higher organizational levels due to limited information (primarily rule fire statistics).

  • Solution: Develop a service catalog of associated risks and map required event sources after use cases are developed.

2. **Risk Management**:

  • For new countries joining the system, they should either supply appropriate logs or have their CISO sign off on accepting the risks, indicating alternative mitigation strategies.

3. **Lessons Learned**:

  • Incidents not visible to internal teams and reported by other departments can be used as valuable lessons for improving future use cases, feeds, and overall visibility.

4. **Baseline Alignment**:

  • Each market should align with a defined set of use cases linked to specific risks.

  • It is crucial that both markets and management are aware of the current visibility level (set of use cases) in each market.

5. **Standard Visibility Levels**:

  • Implement standard levels enforced across all markets, using different use case packs to achieve these levels.

  • Each level has associated requirements such as log feeds, logging levels, network models, and asset model information.

6. **Event Feed Monitoring**:

  • Monitoring super connectors as single feeds risks missing malicious activity when an individual feed dies; rules that alert on deviations in per-feed event rates can close this gap.

7. **Active Lists with TTLs**:

  • Use Active Lists to manage specific time-to-live periods for improved monitoring and response efficiency.
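
The event-rate deviation idea in the event feed monitoring item above can be sketched as a small baseline check. This is an illustration only; the window size and drop threshold are hypothetical tuning knobs, not values from the assessment.

```python
from collections import deque

class FeedRateMonitor:
    """Flag a feed whose event rate falls sharply below its recent
    baseline -- a proxy for a silently dead or degraded connector."""

    def __init__(self, window=12, drop_ratio=0.2):
        self.window = deque(maxlen=window)  # recent per-interval counts
        self.drop_ratio = drop_ratio        # alert below this fraction of baseline

    def observe(self, events_per_interval):
        """Record one interval's event count; return True on suspected outage."""
        alert = False
        if len(self.window) == self.window.maxlen:
            baseline = sum(self.window) / len(self.window)
            alert = baseline > 0 and events_per_interval < baseline * self.drop_ratio
        self.window.append(events_per_interval)
        return alert
```

Running one monitor per feed (rather than per super connector) is what surfaces the single-feed outages the text warns about.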

Overall, the document emphasizes improving visibility through use cases, linking those use cases to risk management, and ensuring comprehensive coverage across all markets while maintaining a standardized approach to event monitoring and incident handling.

It then proposes a framework for proactive SOC use cases that incorporate business and service inputs, starting with risk assessments. The SOC should not be solely technology-driven; it should also pursue proactive measures such as rule automation for repetitive threat-pattern detection. Use case outputs should be tailored to different organizational levels, giving management a high-level, holistic view of achievements and areas for improvement.

The document then details the current state of correlation rules in the SOC:

1. **Asset handling:** When an antivirus (AV) event occurs, the affected asset is added to a list that acts as a trigger for future events if the asset becomes infected again within three days. The event is then pushed to active channels for analyst review.

2. **Cross-device correlation:** Currently absent; real-time alerts lack context and certainty due to their simplistic nature.

3. **Rule types:** Some rules are configured as ignore rules (those with "i_" in the name), which manage aggregated events from main channels or form part of customized event processing. These rules do not use data model information because the data model is either poorly defined or does not accurately represent the client's complex multi-hosted environment.

These points highlight the need for a more comprehensive approach to security management, moving away from purely reactive measures toward proactive, informed decision-making based on detailed analysis and context.
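
The three-day AV re-infection list described above behaves like an active list whose entries carry a time-to-live. A minimal Python analogue (the injectable clock is just for testability; entry semantics are a sketch, not ArcSight's implementation):

```python
import time

class ActiveList:
    """Minimal active-list analogue: entries expire after ttl_seconds.
    hit() returns True when an asset is seen again inside its TTL window,
    mirroring the "re-infected within three days" trigger."""

    def __init__(self, ttl_seconds, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock
        self.entries = {}   # asset -> time last added

    def hit(self, asset):
        now = self.clock()
        # purge expired entries before checking membership
        self.entries = {a: t for a, t in self.entries.items()
                        if now - t < self.ttl}
        seen = asset in self.entries
        self.entries[asset] = now
        return seen
```

A rule engine would escalate when `hit()` returns True and merely watch-list the asset otherwise.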
The text discusses several ways to improve the effectiveness and efficiency of the SIEM within the client's SOC.

First, certain rules, such as the Beaconing rules fed by ArcOSI, generate a high number of false positives. Despite their low credibility, analysts must still investigate them because contextual information is insufficient. Active lists are used to raise the certainty of alerts, but they are mostly maintained manually.

Second, it recommends focusing on cross-device correlation to detect complex threats and reduce false positives by providing the necessary context. Inconsistent timestamp formats can be mitigated by using manager or receipt-time timestamps, with discrepancies in end-device timestamps corrected where possible.

The text also stresses the importance of well-defined rules that use the existing data model to enrich results and aid analysts in their daily tasks. It advises replacing unreliable information sources such as ArcOSI with RepSM where applicable, and it emphasizes automating active lists, since maintaining context manually in a dynamic environment is challenging and time-consuming. Synchronizing end devices against NTP servers across all markets and datacenters is likewise crucial to maintaining data accuracy and consistent rule results.

Lastly, while multiple options exist for defining a data model in ArcSight, the text underscores its importance to the overall performance of the SIEM solution and provides examples of how it can be configured effectively.
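
The timestamp advice above (fall back to manager/receipt time when a device clock is skewed) can be expressed as a one-function sketch; the five-minute tolerance is an illustrative assumption, not a value from the document.

```python
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)  # illustrative tolerance for device clock drift

def effective_time(device_time, receipt_time):
    """Prefer the end device's own timestamp, but fall back to the
    manager/receipt time when the device clock is clearly skewed."""
    if abs(device_time - receipt_time) > MAX_SKEW:
        return receipt_time
    return device_time
```

NTP-synchronized devices make the fallback rare, which is why the text pairs this advice with NTP rollout across markets and datacenters.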
This passage discusses the benefits of a well-planned data model for ArcSight SIEM: it adds context to overall SIEM correlation, the event schema, and reporting by improving understanding of protected assets, their business value, associated risks, and compliance context, and it enables more efficient analytics. The ArcSight ESM data model supports network modeling to identify the devices involved in monitored traffic, contributing to detailed event identification and improved correlation within ArcSight ESM. The model requires ongoing operational maintenance of its baseline configuration and is represented abstractly by a combination of network model and asset model diagrams.

The ArcSight network model, depicted in Figure 4, captures crucial information about network assets and their attributes to support informed decisions about events on a protected network: open ports, operating systems, known vulnerabilities, hosted applications, and the business operations those applications support. The network model includes several resources:

1. **Assets:** Individual nodes on the network, such as servers, routers, and laptops. They capture attributes like IP addresses and host information needed to understand the asset's role in the network.

2. **Asset ranges:** Groups of assets defined by a contiguous block of IP addresses. Ranges identify network segments where multiple nodes belong together, which is useful for risk assessment and management.

3. **Zones:** Portions of the network delineated by contiguous address blocks. Zones group assets that share similar characteristics or functions, allowing focused analysis and threat detection within specific segments.

4. **Networks:** Differentiate between private IP address spaces with overlapping ranges, ensuring accurate mapping of distinct network domains or sub-networks and helping isolate areas where security measures can be targeted.

5. **Customers:** Not assets themselves, but related entities representing internal or external cost centers or separate business units. This resource supports financial tracking and management of stakeholders who use the network in different ways.

Overall, the ArcSight network model is a comprehensive framework for mapping and analyzing the critical elements of a network environment, supporting better decision-making in response to events and potential threats.

Customer tagging is designed primarily for Managed Security Service Provider (MSSP) environments, but private organizations can also use it to distinguish cost centers, internal groups, or subdivisions, keeping event traffic from multiple cost centers or business units separately identifiable. A customer is best thought of as the "owner" of an event rather than its source or target.

Assets are any network endpoints with an IP address, MAC address, host name, or external ID significant enough to characterize for correlation and reporting. Assets can be created automatically from devices discovered by a vulnerability scanner or reported through ArcSight SmartConnectors. When discovering assets, the following considerations should be taken into account:

  • Every network-visible interface IP address is considered a separate asset unless it is part of a specified asset range.

  • A network-visible interface includes any device with an IP or MAC address such as firewalls, routers, web servers, etc. The focus is on modeling assets that are relevant to correlation and reporting rather than all the network interfaces on the network.

  • Multiple active network interfaces from one piece of hardware (e.g., a router or web server) can be modeled manually in the Assets tab as alternate interfaces if they have multiple IP addresses.

ArcSight ESM provides an asset aging function that tracks the last time an asset was scanned and diminishes its confidence in the priority formula over time, eventually reducing it to zero if the asset is not rescanned within a configurable period. This keeps the view of network assets up to date.

Asset ranges simplify tracking when there are many nodes to manage, such as server VLANs, DMZs, or DHCP-assigned desktop PCs and laptops. When a SmartConnector or ArcSight ESM processes an event, the log source endpoints are identified either individually or through their assigned asset range identifier in the event schema.

Zones represent functional parts of a network with contiguous IP addresses, such as DMZs, VPNs, wireless LANs, or DHCP networks. Each asset or address range is associated with a zone, and standard global IP ranges have pre-configured zones unless otherwise specified. Zones should be configured to represent both static and dynamic (DHCP) address allocation; even static assets can be manually assigned within dynamic zones. Zones play a crucial role when importing assets via batch processes or from vulnerability scanners: IP addresses that match a zone are assigned to it automatically. It is therefore recommended to have zones in place before importing assets, though manual assignment is possible if needed.

Asset groups are logical groupings of one or more asset resources, organized hierarchically; properties assigned to an asset group apply to all of its assets. Groupings are also used to control permissions (role-based access control) and to delegate responsibility for reporting on specific assets. The asset model resources describe the attributes of these assets and are integral to the overall network modeling process.
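
The "IP addresses that match a zone are assigned automatically" behavior reduces to a containment test against contiguous address blocks. A small sketch (zone names and CIDR ranges are hypothetical examples, not the client's real zone table):

```python
import ipaddress

# Illustrative zone table: each zone is a contiguous address block.
ZONES = {
    "DMZ": ipaddress.ip_network("203.0.113.0/24"),
    "Server VLAN": ipaddress.ip_network("10.1.3.0/24"),
    "VPN DHCP": ipaddress.ip_network("10.99.0.0/16"),
}

def zone_for(ip):
    """Return the zone whose block contains ip, or None. An asset that
    matches no zone would show up as undefined and need manual setup."""
    addr = ipaddress.ip_address(ip)
    for name, net in ZONES.items():
        if addr in net:
            return name
    return None
```

This is why the text recommends creating zones before importing assets: without the table, every imported address falls through to the `None` (undefined) case.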
The diagram in Figure 5 represents the asset model, which includes vulnerabilities, locations, and asset categories:

**Vulnerabilities:** Aspects of the network that can be discovered and updated by scanners. Changes to vulnerability resources are typically made manually by associating them with Knowledge Base articles. Vulnerabilities can be linked to assets through either the Vulnerabilities or Assets editors.

**Locations:** The system provides a location database mapping IP addresses to their owning bodies, potentially including finer-grained details such as physical locations for controlled networks. This information is encapsulated in the Location resource, where users can override the default mappings with network-specific data.

**Asset categories:** ArcSight resources that define properties of assets based on how they are used within the organization. Asset categories differentiate assets and add relevance and context by establishing classifications (e.g., security classification), identities, ownership, and criticality. The root of a category such as "Criticality" in the group /All Asset Categories/System Asset Categories/Criticality defines the property itself, while its members (e.g., Very High, High) define the possible values. Categories can be assigned to individual assets, asset ranges, and asset groups.

In summary, the diagram illustrates how an organization's network assets are modeled with a focus on categorization, vulnerability assessment, and precise location tracking, all critical to managing cybersecurity and operational resilience.

The text then discusses how assets are grouped, categorized, and assigned properties based on their relationships to each other:

1. **Asset groups:** Folders that contain one or more asset resources. They are hierarchical: any property assigned to an asset group applies to all the assets within it, including categories inherited from the group as well as individual asset categories.

2. **Asset ranges:** Specific ranges of assets that inherit the asset categories defined for their range. This is the most granular level at which asset categories can be applied.

3. **Zones:** Used to categorize networks where the assets may not be constant, such as wireless or VPN networks. Categories assigned to zones describe the network itself rather than the assets within it, for example whether a network is wireless, encrypted, or a VPN. These categories do not apply to the individual assets in the zone.

4. **Implementing a data model:** The document provides guidance on implementing a data model using the standard options supported by HP. The options should be used collectively to develop an effective process for bringing customer data from different sources into ArcSight without duplication.

5. **Manual option:** Manually creating zones, assets, and asset ranges within the console. This is particularly useful when starting with a new system, when requirements do not fit the standard templates, or when improving or correcting zones, assets, and asset ranges that are already in place.

**Pros:**

  • It provides an easy way to add, modify, or remove specific settings.

**Cons:**

  • This method does not scale to large installations, where the manual effort becomes time-consuming and error-prone.

**Option - Network Modeling Tool:** This option involves importing three CSV files into the ArcSight console using a network model wizard to set up the network and asset model. **Pros:**

  • It is a straightforward, controlled approach to defining a network model.

  • Offers a structured method for gathering necessary information for the network model.

  • Allows for incremental updates to zones, assets, or ranges if needed.

**Cons:**

  • Relies on manual use of a graphical user interface (GUI) wizard for importing the network model.

  • Requires data to be formatted into specific CSV templates.
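The CSV-based import described above can be sketched in code. Note that the column headings and zone rows below are illustrative assumptions only; the actual templates ship with the ArcSight console and use their own headings.

```python
import csv
import io

# Illustrative zone rows; real wizard templates define their own columns.
ZONES = [
    ("DataCentre 1", "10.1.0.0", "10.1.255.255"),
    ("DataCentre 2", "10.2.0.0", "10.2.255.255"),
    ("Head Office UK", "10.20.0.0", "10.20.255.255"),
]

def write_zone_csv(rows):
    """Serialise zone definitions into CSV text for an import wizard."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Name", "StartAddress", "EndAddress"])  # assumed headings
    writer.writerows(rows)
    return buf.getvalue()

print(write_zone_csv(ZONES))
```

Generating the file programmatically from an authoritative source (for example an IPAM export) keeps the template consistent across incremental updates to zones, assets, or ranges.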

**Option - Asset Import Connector:** HP ArcSight offers a SmartConnector that automatically imports assets using the same CSV format template as mentioned above. **Pros:**

  • Automates the asset import process, saving time and effort.

**Cons:**

  • Does not support importing network zones or asset ranges.

  • Still requires data to be formatted into a CSV format, although it provides a data mapping GUI for configuration.

**Option - Vulnerability Scanner – Network Scanner:** HP ArcSight allows vulnerability scanners (or network scanners) to populate assets using the results of their scan data. They provide SmartConnectors that enable manual or automatic import of scan data from various vendor software into ArcSight. **Pros:**

  • These scanners are common in enterprise networks and can serve as a rich source of asset information.

  • They also offer vulnerability details for better risk prioritization of assets.

Overall, these options aim to enhance the network model by providing different methods for importing and updating zones, assets, and ranges, with varying degrees of automation and complexity. This discussion revolves around managing assets in ArcSight for network security management: the primary objective is to provide context and assess the risk footprint of assets through the TCP or UDP services they use. However, vulnerability scanners and network scanners are primarily a source of asset information and are not suitable for defining zones or ranges without additional manual configuration.

**Pros:**

1. **Auto Asset Creation using SmartConnectors**: Assets can be created automatically when new IP addresses are detected as log sources, a supplementary method that catches assets missed during the initial import.

2. **Provides Context and Risk Footprint**: Identifying the services running on each asset improves correlation with other security data and enhances overall risk assessment.

**Cons:**

1. **Inadequate as the Sole Method of Asset Creation**: While useful, SmartConnectors are not a standalone solution for creating assets; they should be used as an additional tool rather than the primary method.

2. **Requires Manual Configuration for Undefined Assets**: Any imported asset that does not fall under a predefined zone or range will appear as undefined in the ArcSight console and needs manual setup to function effectively within the system.

**Data Model General Advice:**

1. **Minimum Requirement: Zone Creation**: Zones should be established as the fundamental dataset supporting the other data modeling options, such as direct imports from IPAM databases or automatic asset definition from vulnerability reports.

2. **Manual Network Creation Where Necessary**: Networks should be created manually when the organization has overlapping subnets, so that the data can still be used for event enrichment and correlation rules within ArcSight.

3. **Manageable Data Model**: The model should be structured to best serve security analytics and SIEM needs, providing context that supports both rule-based correlation and overall risk management capabilities.

4. **Interfacing Options**: Depending on operational requirements, the interfaces of a device might not need to be linked together; it is typical for a single device (such as a firewall or server) to be represented as multiple assets in the model.

This approach optimizes the use of data and improves the ability to correlate security events effectively within the ArcSight platform.

The text then outlines a strategy for defining data model requirements and implementing asset modeling for ArcSight, an enterprise solution used in cybersecurity management. It emphasizes that IP addresses are considered individual assets within ArcSight's framework, which is common among large enterprises using common information models (CIM) and CMDB systems. To optimize the return on investment of the ArcSight solution, HP Professional Services recommend developing an asset modeling strategy with the customer, ensuring that project requirements align closely with the recommended integration methods and validate best practice within the organization's network architecture. This approach is supported by the documented options and illustrated through the case studies in Appendix A, which provide practical examples of complex asset models tailored to specific customer needs. The section then transitions into a more technical discussion of defining data model requirements, using a scenario-based approach that establishes objectives before selecting the methods to fulfill them.
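The advice above recommends creating Networks manually wherever the organization has overlapping subnets. A minimal sketch, using hypothetical zone subnets, of how such overlaps can be detected with Python's standard `ipaddress` module:

```python
import ipaddress

# Hypothetical zone subnets for illustration; overlapping pairs indicate
# where separate ArcSight Networks are needed to keep addressing distinct.
zones = {
    "DataCentre 1": "10.1.0.0/16",
    "Head Office US desktops": "10.10.0.0/16",
    "Subsidiary (overlapping)": "10.10.10.0/24",
}

def overlapping_zone_pairs(zone_map):
    """Return sorted zone-name pairs whose subnets overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in zone_map.items()}
    names = sorted(nets)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

print(overlapping_zone_pairs(zones))
```

Running such a check against the full zone inventory before import would flag exactly the address spaces that cannot live in a single Network.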
The example provided is a network model of a corporate environment whose elements could all be modeled within ArcSight according to the stated requirements: wide area networks; datacenters with specific infrastructure such as DMZ, Client VPN, and business-to-business (B2B) VPN; satellite sales branches; head offices providing networking services including wireless and DHCP; and a management network. The objective is to create zones within ArcSight based on this network structure.

In short, the text is about setting up an asset model that lets ArcSight manage a complex corporate network efficiently, aligned with enterprise data management standards and tailored to organizational needs through strategic planning and implementation support from service providers such as HP.

The task involves representing the major network hubs, each with specific IP address ranges, as single entities: the subsidiary company, Head Office UK, Head Office US, Datacentres 1 and 2, the satellite sales branches WAN, and so on. Where IP addresses overlap between zones, corresponding Networks should be created in ArcSight for detailed monitoring. The purpose of this task is to provide the foundation information needed to define individual assets either manually or automatically, using methods such as SmartConnectors or vulnerability scan data. There is also a desire for automated updates of zone definitions within ArcSight, which can be achieved through non-standard options provided by HP professional services if needed.
The objective involves defining asset ranges for all subnets in the hubs above and categorizing them according to infrastructure type: DMZ with static IP addressing, Remote Access Client VPN with DHCP addressing, Remote Access VPN, specific customer VPNs, and NAT settings where applicable. It also includes categorizing the Head Office US corporate wireless network as a "corporate wireless DHCP IP" range.

A further objective is to categorize and organize assets across the organization's US locations, specifically printers, desktop DHCP networks, corporate wireless networks, and management VRF subnets. These categories are given specific names ("Printer network", "Desktop DHCP IP", "Corporate Wireless DHCP IP", and "Management Network") to allow better management and tracking of these assets. The Sales Satellite Branches and Head Office UK are excluded from this scope and do not need specific asset ranges defined. The categorization provides a framework of event schema information that can be used for enhanced analytics on source and destination addresses in network communications.

The project aims to automate updates of asset ranges into ArcSight using the documented methods provided by HP, or through consultancy engagements where non-standard options are available. As part of the implementation, all individual assets for finance applications and public-facing infrastructure should be included within six months of go-live, with valid vulnerability data and defined TCP/UDP services for those applications and infrastructure components.

The following Event of Interest use cases have been identified:

1. Network communications originating from the DMZ (a special security area) should not initiate connectivity into the Datacentres or further into the network.

2. Network communications originating from any public IP address that is not a B2B VPN (business-to-business virtual private network) should only access publicly facing infrastructure as defined by an asset range or individual asset.

The document also outlines several policy deviations regarding network communications and asset access:

1. **Network Communications from the Printer VLAN**: Initiating any network communication from the printer VLAN is a policy deviation; the datacenter networks (DataCentre 1 and 2) must not communicate with printers directly.

2. **Data Centre Network Communication Restrictions**: DataCentre 1 and 2 should only communicate within their respective zones:

  • Client VPN Infrastructure should only communicate with DataCentre 1 and 2 zones.

  • B2B VPN Infrastructure should only communicate with the DataCentre 1 and 2 DMZ.

  • Reports and dashboard statistics are required when such communication occurs to the Management network.

3. **Accessing Non-Desktop Devices via Default Usernames**: Use of a system default username to access devices such as printers, servers, firewalls, or routers from anywhere other than the management network is raised in priority as a policy deviation. This includes source IP addresses that are public or belong to the Client VPN or B2B VPN.

4. **Accessing Finance-Related Assets**: Only the specific desktop DHCP range (10.10.10.1 to 10.10.10.254) should be used to access assets categorized as finance, with the exception of management network access.

The provided example illustrates a network diagram of a point-of-sale web application accessible over the internet, which includes:

  • A router connecting to a firewall and providing routing between the internet and internal networks.

  • Public (internet facing DMZ) and private (internal facing DMZ green and blue) communication routes for POS Webservers and database server.

  • Management DMZ for management network access to all devices.

  • Three POS point of sales Webservers and a single database server as components.
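The policy deviations described earlier reduce to zone-based checks on each event's source and destination address. A minimal sketch follows; the zone boundaries are hypothetical (only the finance desktop range around 10.10.10.x comes from the text), and real deployments would express these as ArcSight correlation rules rather than standalone code.

```python
import ipaddress

# Hypothetical zone boundaries; real definitions come from the network model.
ZONES = {
    "DMZ": ipaddress.ip_network("192.0.2.0/24"),
    "DataCentre": ipaddress.ip_network("10.1.0.0/16"),
    "Desktop DHCP": ipaddress.ip_network("10.10.10.0/24"),
    "Finance": ipaddress.ip_network("10.50.0.0/16"),
}

def zone_of(ip):
    """Map an address to its first matching zone, else 'Undefined'."""
    addr = ipaddress.ip_address(ip)
    for name, net in ZONES.items():
        if addr in net:
            return name
    return "Undefined"

def is_policy_deviation(src_ip, dst_ip):
    """Flag two of the documented deviations: DMZ-initiated traffic into
    the datacentre, and finance access from outside the desktop range."""
    src, dst = zone_of(src_ip), zone_of(dst_ip)
    if src == "DMZ" and dst == "DataCentre":
        return True
    if dst == "Finance" and src != "Desktop DHCP":
        return True
    return False

print(is_policy_deviation("192.0.2.10", "10.1.5.5"))   # DMZ into datacentre
print(is_policy_deviation("10.10.10.42", "10.50.1.1")) # permitted finance access
```

The sketch simplifies the finance rule (the source text also permits management network access); the point is that the use cases become trivial lookups once zones and ranges are defined.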

The objective is to create a comprehensive inventory of all assets used in the point-of-sale (POS) application, including infrastructure dependencies, and to link them to the finance categorization using their respective IP addresses. This includes performing vulnerability scans across the LAN, the internet, the internal DMZ, the external DMZ, and the database internal DMZ; the scans are designed to mimic access from the different networks for comprehensive coverage.

The aim is to enhance analytics by identifying the risks associated with exposed TCP/UDP services, which improves understanding of attack sources and vulnerabilities, and to provide better threat landscape insight based on event origins. Accurate vulnerability assessments also allow incident priority to be raised or severity reduced: for example, when the IDS detects a hostile payload targeting the wrong service type (such as an exploit for Apache aimed at a Windows Internet Information Server), the incident can be de-prioritized. There is also a goal to automate asset updates into ArcSight, improving the efficiency of managing security across the POS application.

The objective further defines use cases for maintaining secure access within the POS system: no vulnerabilities should be visible from the internet; all user access to the POS database and web servers should be restricted to the management network; and only service accounts are allowed access from certain internal DMZs to other internal DMZs, ensuring a layered security approach.

Finally, the document notes that a workaround has been implemented using Active Lists and ERCRI rules for data modeling, focusing on critical assets and executives to elevate their importance. The markets have been asked to provide the IP addresses of critical assets, with a static list containing fewer than 400 entries at any time.
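The de-prioritization example above (an IDS alert for an Apache exploit against a server actually running Microsoft IIS) can be sketched as a lookup against scan-derived service data. The inventory, field names, and scoring here are hypothetical illustrations, not ArcSight's actual mechanism:

```python
# Hypothetical asset/service inventory as populated from vulnerability scans.
ASSET_SERVICES = {
    "10.1.5.20": {80: "Microsoft IIS"},
    "10.1.5.21": {80: "Apache httpd"},
}

def adjusted_severity(alert):
    """Lower severity when the targeted service does not match the
    product named in the IDS signature."""
    services = ASSET_SERVICES.get(alert["dst_ip"], {})
    running = services.get(alert["dst_port"], "")
    if alert["target_product"].lower() not in running.lower():
        return max(alert["severity"] - 2, 1)  # mismatch: de-prioritise
    return alert["severity"]

alert = {"dst_ip": "10.1.5.20", "dst_port": 80,
         "target_product": "Apache", "severity": 8}
print(adjusted_severity(alert))  # Apache exploit against IIS: lowered
```

This is why the text treats valid vulnerability data and defined TCP/UDP services as prerequisites: without them, every alert must be triaged at face value.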
During onboarding, only about 10-30% of asset information is obtained, leading to an incomplete and unreliable asset model. The resulting gaps in asset classification and categorization make it difficult for analysts to investigate incidents effectively, because context is missing. Similarly, the network model is not used because it is deemed incomplete or unreliable, and all changes and updates are performed manually. The level of detail varies across markets, and there is little contextual support during daily investigations. This situation stems from the inability to adequately model the complexity of the client's multi-hosted environment.

The document highlights further asset management and engagement issues within the client's security operations center (SOC): tracking the owner of a specific asset can take up to three weeks; the definition of a critical asset is unclear, and the accuracy of critical assets in ArcSight is questionable; no clear market change-management insight is provided to the client except within VF Group; and vulnerability data is collected but not integrated into detection rules, being used only as a reference after an event has occurred.

**Data Modeling Recommendations**: Automate data model creation and maintenance for better identification of network events in ArcSight ESM; the available automation methods include the network modeling tool, the asset import connector, vulnerability scanner feeds, and auto asset creation. A well-defined data model improves decisions about events of interest by giving analysts more detailed context.

**Quick Wins**: Transition from ArcSight OSI to RepSM for improved reputational data quality and fewer false positives, and develop a more effective communication plan within the markets that involves asset owners beyond the single point of contact (SPOC).
During the first year of its existence, the client SOC successfully onboarded 16 of 18 local markets as part of its engagement strategy. This rapid implementation of cybersecurity measures for the local markets was achieved through operational objectives built around firewalls, Symantec Endpoint Protection (SEP), and HIPS/IDS devices. Initially led by the client's Security Operations Center, the task has since been handed over to a third-party company, with project management by SSCH.

The onboarding process involves three internal gates: Gate 1 reviews scope, size, and design; Gate 2 starts the "Pilot Phase" (or "Engineering Readiness"); and Gate 3 closes the "Pilot Phase" and the project, requiring participation from the SOC and the local market, facilitated by SSCH.

The initial plan aimed for 70% standardization and 30% customization per local market; in practice it was 30% standardization and 70% customization. The process does not standardize devices or operational processes across local markets, so each market ends up with a customized project. The local markets were not informed about the security measures required for their critical systems before onboarding, nor was a risk assessment conducted beforehand to identify risks and threats. Perimeter defense use cases and feeds are heavily focused on the onboarded devices. There are also issues with resource availability and commitment from the local markets, who mistakenly believe that once onboarded they are protected in every respect.

The SSCH group drives the first two stages of onboarding for new local markets, using a pre-onboarding questionnaire to gather data about the environment. They review this data with the local market to ensure its accuracy before third-party SnT team members perform the onsite work and provide validation.
Challenges in gathering accurate information from the local market are encountered during this phase. Once the data is collected and validated, a design document is produced, along with spreadsheets containing network diagrams, host names, and other networking and system information. The first toll gate requires scoping and sizing details, identification of the key players (SPOC), and review by the internal SOC. This phase prepares the handover from SSCH to the local team, ensuring all required information is validated and systems are connected properly.

The next stage focuses on setting up and validating the security systems, particularly network configurations and data feeds for devices such as firewalls, IDS/IPS, HIDS, WAF, and DAM. It emphasizes reviewing policies, checking the stability of incoming feeds, and ensuring operational alignment before moving on to the engineering readiness and pilot phases. Key performance indicators (KPIs) for the engineering readiness milestone include network connection stability, correct ArcSight configurations, and error-free feed parsing. The process also involves developing the low-level design from high-level topology information, testing service access, and consolidating local market data during the pilot phase. Custom content is developed and tested throughout this phase, with ongoing adjustments based on new or existing requirements.

The transition to operations completes several tasks focused on technical alignment with client requirements and operational standards: tuning data feeds for asset information collection, and implementing ticketing system changes including template creation, certificate exchange, and sign-off of the service level agreement by local representatives. There is also a focus on incident management processes such as security incident handling and remediation.
The final review stage before transitioning to operations involves a comprehensive check of operational tasks, alignment of the incident management process, and production of weekly and monthly reports for assessment purposes.

The document outlines the challenges the SOC faces in managing risks in the local markets due to a lack of visibility and limited ability to react to incidents. To address these issues, a service impact review should be conducted to ensure that existing services can adapt to new market changes. The markets are responsible for providing accurate information, with onboarding requirements forming part of their objectives. The onboarding process must be mandate-driven so that local markets understand the process and collaborate effectively. The different boarding levels should be aligned with use case packs, requiring not only the required feeds but also testing against the target use cases. HP proposes implementing this approach through a code-of-connect catalogue, aiming to improve risk management through better visibility and response capability in the local markets.

Additionally, the SOC currently faces issues with communication tools such as email for interfacing with local markets:

  • Emails are time-consuming, manual, and difficult to track; they are resource-intensive, fail to close the loop on cases, introduce delays, and are insecure and unreliable.

The organization uses seven different instances of UCMDB, which store, control, and manage software and infrastructure components along with their relationships and dependencies; this information is crucial for adequate incident response. The absence of a single, unified configuration management database leads to inefficiencies in response management and case handling for the client's Security Operations Center (SOC).

The current internal tool, SCITT, is used for internal cases only and lacks a customer interface module. To address this, the SOC plans to integrate a wiki as an internal knowledge base and eventually incorporate it into SCITT. The SOC has also developed an SMS tool for high-severity case notifications, but faces issues with its accuracy and its integration with SCITT, and is currently unable to send SMS alerts because of these problems. The SOC also seeks access to local market administration tools such as NSM and Threat Reporter, though obtaining access can be a lengthy process when the tools are managed by the markets. Monitoring services such as HP OpenView are used to track service availability and integrity at the top level for the client's services.

Despite ongoing efforts, including internal developments (such as SCITT) and external monitoring, the need for a central UCMDB to improve case management efficiency remains unmet, posing potential risk to the organization during security incidents. The planned external customer interface for SCITT aims to eliminate the manual updating of case information, which currently causes inconsistencies and double work: updates are sent by email and then entered into SCITT by hand. The interface will also require access to the markets' end-device management tools as part of the onboarding requirements, improving the SOC's ability to investigate events and reducing reliance on the markets' full-time employees (FTEs).

Regarding current tool usage:

  • Some of the devices are integrated into Remedy for the Group.

  • eRunbook is managed by the Group but is incomplete and inconsistent; it is planned to be replaced with ADDM and managed by IT Services.

  • Imap provides information on IP allocation, subnets, and sometimes locations and owners; it is currently managed by VCNO (formerly NSU).

  • Threat Reporter gathers device information in some cases.

  • Spider covers laptops and desktops but does not cover the whole organization as not all local markets are part of the Global Desktop Domain.

For onboarding:

  • During the process, the SOC requests access to internal admin tools such as the Juniper defense center; when the tools are managed by the markets, access is typically granted months after onboarding.

  • Remedy is used for the Group; some markets do not have it and prefer using SecureEmail for communications. Templates exist for email communication with customers. The SOC avoids using market ticketing systems to reduce the burden of learning multiple systems.

The text discusses the various tools and technologies used by the client organization in its incident management processes and overall operations. The key points:

1. **Email Communication**: Email is the primary mode of communication for dealing with local market incidents at both first and second level. Analysts sometimes use Remedy alongside email to manage cases, and append notification comments in SCITT, a case management application developed internally in Perl using the Mojolicious web framework, which replaced ArcSight's case management on December 1, 2012.

2. **SCITT**: An internal tool managing more than 3000 cases from level 1 to level 2. It serves as a replacement for both the wiki and the traditional case management system, operates under a workflow similar to ArcSight's, and is intended to eventually be opened to customers.

3. **DHCP Logs**: The DHCP log share, accessible via the Office IT (OIT) portal, allows viewing of DHCP logs, but these are not yet available from all local markets.

4. **SecureCRT**: Provides secure remote access, file transfer, and data tunneling capabilities.

5. **Firefox Plugins**: Two plugins are used by analysts: Cyberghost VPN Anonymizer (for functional purchases) and Wiki Search.

6. **Password Safe**: A password management tool that stores critical system passwords securely.

7. **Domain Management Portal**: Allows searching for business-criticality information on VF domains across all local markets.

8. **eRunbook**: An asset information search tool built on CMDB and asset management data, populated primarily by the Group; not all markets feed data into eRunbook.
**Spider Database**: A search tool for client assets (VIC clients), specifically laptops equipped with VIC, giving access to detailed device information as long as the device is part of the VIC client setup.

The document also lists further tools and platforms used for network security management: HP OpenView (server monitoring), Symantec Threat Reporter (investigating HIDS events), IMAP (searching IP addresses for descriptions), NSM (network service monitor), NTO (vulnerability management), Group IPDB (similar functionality to IMAP), the Security Service Platform Terminal Server (SSP) and Defense Center (operated by SSCH, providing access to various security devices and information), and Imperva SecureSphere (the Web Application Firewall management console).

The document recommends improving the case management process by automating certain tasks to free up analysts' time, and suggests prioritizing improvements to the ArcSight data model to enhance output context and overall content quality; a good data model can be seen as a compensating control for managing complex systems effectively. Analysts must have access to the necessary administration tools at all times throughout their work, so it is crucial that these tools are made available and accessible during onboarding. Adherence to best practice in software development is likewise vital for maintaining acceptable availability and integrity of SCITT, the SOC's case management tool.

The document also addresses issues within the service delivery:

  • The lack of clarity about the service offering among other teams and management can lead to miscommunications and unclear responsibilities.

  • There is an issue with service availability due to dependencies on unique resources who might not be available consistently, impacting the quality of outputs.

  • Misalignment between different services within , particularly in communication with local markets, indicates a need for improvement in this area.

  • Some roles are burdened with multiple responsibilities, potentially leading to poor performance and reduced service quality. This is especially true where services are handled on an ad hoc basis, without formal risk assessments or remediation actions for identified threats, vulnerabilities, and intelligence reports.

To address these issues, a Service Catalog is in place but is being revised, along with documented Service Management agreements as part of the onboarding process with local markets. There is also a communication plan covering basic services, KPIs, and feedback mechanisms between SPOCs and the Level 2 analysts assigned to the respective markets for quality control. The focus includes clarifying the service offering, improving resource availability through better planning or alternative arrangements, and strengthening internal communications, especially with external interfaces such as the local markets. Addressing the over-burdening of roles and the lack of formal risk management should also be prioritized to improve overall performance and reduce risk within the organization.

The services offered by the SOC include event monitoring, incident management (both internal and external), forensic services, malware investigations, threat and vulnerability analysis, and security problem management. Ongoing projects include building a Security Problem Management Team to find better solutions for RDP support, acting as liaison to external parties, and conducting ad-hoc research on threats identified from outside sources. The SOC also handles remote device wiping for mobile devices, develops content related to security incidents, and supports the ArcSight infrastructure, among other things.

The internal team manages various market- and client-facing services, including custom reporting, reports on cases and devices, risk services, PCI compliance support, participation in NATO cyber defense exercises, conducting cyber exercises in India, intelligence sharing through a portal shared with other companies, and development of a finalized service catalog.
To improve efficiency and alignment, the team should finalize and agree an internal service catalog to clarify roles and responsibilities, revise operational structures to avoid single points of failure, align services with the end-to-end delivery strategy, and ensure global priorities are met.

Metrics, reports, and Key Performance Indicators (KPIs) are crucial for monitoring performance and driving change. They should be suitable for the different organizational layers and should inform decision-making rather than merely present data, given the lack of analytics in the existing reports. The existing KPIs in the client's incident response are limited to local market security incident KPIs plus some technical performance reports for internal use only. Reporting is primarily manual and platform-based, and most reports are technical in nature. The quality and coverage of the existing data are considered poor, which prevents meaningful reporting.

Recommendations include involving management in defining the KPIs to ensure alignment with expectations, and providing different metrics, reports, and KPIs tailored to the various organizational layers based on the needs of the data consumers. KPIs, metrics, and reporting should be used for continuous improvement, accurately reflecting both the successes and the pain points of the service so that they remain valuable to stakeholders. Negative aspects should be communicated to management to drive the necessary changes and mitigate issues. Metrics should be thoroughly reviewed for quality and granularity to avoid misleading or inappropriate representations, and all metrics, reports, and KPIs should be pre-socialized with their target audiences to prevent misinterpretation.
Additionally, metrics and KPIs should be developed for market performance and responsiveness, and for the time window from event receipt to case creation. A top-down approach should be adopted for developing reports, KPIs, and metrics, rather than the bottom-up method used previously; this strategic focus is intended to improve clarity and effectiveness in monitoring service performance and driving improvement.

The document then describes the role of the Level 1 Analyst within the Global Security Operations Center, which is responsible for providing security analyst expertise and analyzing threats using tools such as netflow traffic data, log files, consolidated event and alarm data, and firewall data. The analysts work closely with the Security Incident Manager to provide technical security expertise and deliver professional data analysis reports that inform corrective actions and measures to mitigate risk.

The role involves 24/7 shift work: each 8-hour shift is followed by a half-hour break and a 30-minute shift change, giving an average of 8.5 hours per shift. There are 12 Level 1 Analysts, one of whom is external; they cover shifts in pairs, with additional resources available for ad-hoc activities during the work week. Typical daily duties involve monitoring the ArcSight main channel and providing expert analysis to mitigate threats effectively. During a shift, Level 1 analysts monitor both the mailbox and the hotline for reported events. They make initial severity assessments subjectively during event analysis, learning to maintain accuracy and consistency in determining severity levels. Level 1 analysts also cover a broader sliding window of up to two weeks, as correlated events may not always be picked up within a 12-hour period. Threat Intelligence and Malware analysts are also considered Level 1 Analysts, providing specialist services for the SOC.
Level 2 analysts play a crucial role in the efficiency and effectiveness of the Global Security Operations Center (). They optimize the SIEM filters, rules, expressions, and other identification mechanisms used within the center. These senior intrusion analysts, capable of handling multiple security incidents simultaneously, refine SIEM rules and logic to make the team more efficient. Level 2 analysts provide real-time threat information and indicators, recommend risk mitigation strategies, work closely with the Engineering, Security Incident Management, and analyst teams, and ensure superior technical security expertise is delivered consistently. They are expected to communicate professionally and to continuously refine their skills in a fast-paced environment where accuracy and efficiency are paramount. The Level 2 analyst also acts as a mentor and teacher for Level 1 analysts, ensuring knowledge transfer among team members, and is responsible for improving content and use cases in close cooperation with the Level 1 analysts. Four Level 2 staff currently operate 24/7 alongside the Level 1 analysts, focusing on enhancing the service provided to clients. Level 2 analysts are considered senior Level 1 analysts and are assigned to specific markets. They participate in monthly meetings with their respective markets to review active cases and open issues, and report back to the team. If a case receives no response, they follow up by email; if necessary, the issue is escalated during the monthly market calls. The goal is to ensure timely responses on a best-effort basis, without predefined thresholds, with client satisfaction in mind.
The Incident Manager role is crucial for managing security incidents and ensuring a coordinated response across the parties involved in service restoration and problem management. Incident Managers handle S1 to S3 incidents, which they review through the GSIM mailbox; each incident carries specific Service Level Agreement (SLA) targets whose clock starts once the incident is declared.

The SIEM Specialist engineers and maintains the system environment at the Global Security Operations Centre (SOC), supporting . They maintain, administer, and optimize the security system environment, including the SIEM system. This includes integrating new data feeds and systems to expand the monitoring services, working closely with the Security Incident Manager and the analysts to adapt correlation rules, and providing reports on key performance indicators (KPIs).

The SOC Lead Engineer is in charge of maintaining all technologies within the SOC, improving them both operationally and architecturally, and supervising the development of new technologies and services for use in the SOC.

The SOC Operations Manager manages the analyst team, including overseeing the first- and second-level analysts, ensuring smooth analytical processes, conducting ArcSight case reviews, handling daily operations tasks, producing reports, and planning SOC resources.

The SOC Manager has a dual role covering both technical and non-technical responsibilities. They are responsible for their direct reports, which include the analysts and lead engineers, and have direct management responsibility for the 24/7 operations of the SOC.

The SOC Integration Manager enhances the SOC by continually improving the customer experience, coaching and developing staff, achieving quality and timeliness metrics, managing internal operations, analyzing staffing needs for coverage, and making necessary adjustments.
This involves maintaining and extending the SOC service scope through lifecycle management of current capabilities and integration of new requirements, as well as establishing frameworks to define and maintain a KPI-driven service organization.

The current operational model of the SOC is almost exclusively reactive, relying heavily on existing rules for event highlighting. As the SOC grows in complexity and integrates more services, this reactive approach will become less effective due to its inefficiencies. Threat Intelligence and Malware services are currently undefined and handled ad-hoc, with no documented processes or procedures for analysts to follow. All tools, reports, and data are stored locally on analysts' drives, contributing to a lack of focus and understaffing in these areas. The SOC Integration Manager needs to document the processes and procedures that analysts follow and establish a centralized store for all tools, reports, and data; this would improve efficiency and sharpen the focus of the resources allocated to Threat Intelligence and Malware handling. Clear roles and responsibilities should also be assigned and communicated so that they are understood across the team.

The faces challenges in clearly defining roles and responsibilities, particularly in areas such as onboarding and network use cases, leading to operational inefficiencies. To address these issues, the operational model should be revised to be more service-driven and proactive, taking into account the dynamic nature of the security landscape. This means thoroughly documenting all roles and responsibilities and ensuring clarity on who is accountable for which tasks, improving synergy within the team and maintaining consistency in operational practices.
For incident detection, the relies on four main sources: the ArcSight main channel, the mailbox, threat intelligence, and a phone hotline. The team manages an average of 8-10 cases per day, with case volumes fluctuating by time and date. The input process is currently overwhelmed by the volume of data from the various sources, necessitating manual review of and response to each event. There are no specific Service Level Agreements (SLAs) for the email and hotline services; however, most requests are handled within the daily shifts without backlog accumulating. These emails and calls often involve basic consulting or advice related to the local markets.

A control mechanism is in place to manage the influx of requests, ensuring that the case management processes are not overwhelmed when there is a high volume of input across the various channels. The primary channels are the ArcSight main channel and an externally facing SCITT interface for the local markets; both are continually tuned and filtered to keep the number of incidents manageable. The receives input primarily through its email mailbox, which handles around 50-100 emails per day, some of which are handled directly by Level 1 analysts in response to ad-hoc requests from local markets for incident information or problem resolution.
The majority of these requests are operational rather than security related. Secondary channels include a phone hotline that receives about 5-10 calls per week, which are managed through the same processes as the other input channels. Additionally, threat intelligence updates are provided by analysts to both the local markets and , potentially triggering case creation based on these insights. Events that are not handled or escalated into a formal case fall off the active channel after 12 hours and remain inactive for up to 2 weeks before being archived if still unresolved. Overall, this ensures that all inputs are managed despite the varying volume and nature of the requests received by .

The client organization faces challenges in managing the various input channels: categories are not granularly defined, resources are limited during major events, and data quality and categorization need improvement. Recommendations include increasing focus on high severity cases from the main channel, developing a mitigation process using lessons learned from the other channels, improving data quality to demonstrate the value of the platform, and implementing a 2-way ticketing system for better communication with the local markets.

The client's primary outputs consist of local market reports (security and policy violations), the CTSO Threat report and metrics, case management issues, and onboarding of market data. The monthly security report from SCITT provides insight into cases closed and opened over the past three months by type, while the customer case report with action items is discussed at SPOC level. The CTSO Threat report offers an overview of performance, threats, and open cases, but lacks the broader visibility needed by decision-makers within the organization and its local markets.
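The event aging described above, with events dropping off the active channel after 12 hours and being archived after 2 weeks, can be sketched as a simple state function. This is an illustrative model only, not the actual channel logic, and the state names are invented:

```python
from datetime import datetime, timedelta
from enum import Enum

class EventState(Enum):
    ACTIVE = "active"      # still visible in the active channel
    INACTIVE = "inactive"  # fell off the active channel, still correlatable
    ARCHIVED = "archived"  # no longer considered for case creation

# Windows described in the assessment: 12 hours on the active channel,
# then inactive for up to 2 weeks in total before archival.
ACTIVE_WINDOW = timedelta(hours=12)
ARCHIVE_WINDOW = timedelta(weeks=2)

def event_state(first_seen, now, escalated=False):
    """Return the lifecycle state of an event at time `now`."""
    if escalated:
        return EventState.ACTIVE  # events promoted into a case stay in play
    age = now - first_seen
    if age <= ACTIVE_WINDOW:
        return EventState.ACTIVE
    if age <= ARCHIVE_WINDOW:
        return EventState.INACTIVE
    return EventState.ARCHIVED

t0 = datetime(2013, 7, 1, 0, 0)
print(event_state(t0, t0 + timedelta(hours=6)))   # EventState.ACTIVE
print(event_state(t0, t0 + timedelta(days=3)))    # EventState.INACTIVE
print(event_state(t0, t0 + timedelta(weeks=3)))   # EventState.ARCHIVED
```

The 2-week sliding window that Level 1 analysts cover corresponds to the INACTIVE span here: correlated events in that window can still be picked up before archival.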
The CTSO provides monthly health check numbers for internal operational metrics and the number of events per market through its primary reports, which include:

1. Markets Security report - tracks cases opened and closed over the last three months, per market.
2. Follow-up security report - contains follow-ups to the security report, with actions taken.
3. Customer Policy Violations - weekly technical policy violations within the markets (P2P, key generators, etc.).
4. CTO Report - provides Key Performance Indicators specific to .
5. Internal reports - technical performance reports intended solely for internal use.

Key findings reveal that the CTSO reports present a misleading view of the security provided by . Metrics like "number of on-boarded markets" lack granularity, potentially giving management an inflated perception of coverage, and there is confusion about the content and format requirements set by management. The Threat Intelligence reports are sent to leadership and the local markets but are often overlooked or not acted upon in a timely manner. The reports fail to accurately reflect successful defense against threats, and do not surface the pain points, security weaknesses, and high severity cases that should be publicized to prompt management action. Most reports (95%) are developed internally with minimal input from management or the local markets, and the is uncertain whether the information being presented is useful for improvement, growth, or decision-making.

The ENT SOC supports ad-hoc reports tailored to specific local market needs but also aims to create universally usable reports. The client's SOC does not provide compliance reporting directly, but shares metrics and event data with the PCI team. Incident details are communicated monthly via email between the markets and the SOC, which uses Remedy for some markets to manage incidents.
However, standardization of Remedy across the client is challenging due to inconsistent implementations. Recommendations include implementing a 2-way ticketing system for better communication between the local markets and the , enhancing communications by soliciting threat intelligence needs from the local markets, and establishing a baseline metrics model covering SLA attainment reporting, case management activity reporting, analyst analytics, and key performance indicators.

The governance model is centered around the CTSO board, which drives the strategy, projects, SLAs, technologies, and future direction of the . The CTSO board manages service requests from the local markets, IT operations, and other groups, with the reviewing these requests for feasibility. Addressing a request from the client involves significant effort, cost, and resources. Once the necessary information is provided by the client, the CTSO (Customer Technology Solutions Organization) board reviews the request and either approves, denies, or modifies it; if approved, all subsequent requests follow the standard planning, budgeting, and resourcing cycle. The CTSO board plays a crucial role in reviewing and approving use cases that are strategic in nature; these use cases are evaluated for feasibility. The CTSO board meets monthly to discuss current market conditions, security threats, and performance metrics, and at these meetings the Manager presents potential use cases and their impact on the business. The governance structure includes several decision levels and roles:

  • ** Level**: This involves internal processes and procedures as well as adherence to Service Level Agreements (SLA) defined by the client. The feasibility of use cases, operational decisions, and tactical guidance are also managed at this level.

  • **CTSO Level**: Strategic decisions regarding use cases, demand management, and strategic direction are made here. Changes in market conditions, SLA modifications, and updates to threat profiles are handled at this level.

Findings from the assessment indicate a lack of clear alignment between the and the CTSO board, with undefined exception and emergency processes that limit the client's response capabilities, and confusion about management expectations beyond use cases. Recommendations include enhancing alignment between the and the CTSO board for use case approvals, defining pre-defined operational exceptions and responses to imminent threats, and establishing the as a trusted advisor to the CTSO board by providing monthly analytical threat profiles and business metrics. Updating the documented processes and procedures within the is also recommended to improve operational efficiency and effectiveness.

The current operational model for the at , utilizing HP's ArcSight technology, is found to be broken and not aligned with the expectations for a function of its size and nature. To address this, a proposed Target Operating Model (TOM) has been developed as part of a significant business change from Voice operations to Data operations; the TOM outlines how the 's should operate in its future state. The transformation runs in three phases: Housekeeping, Governance model enhancement, and Solution Design.

In the Housekeeping phase, housekeeping activities improve efficiency and lay a foundation for further remediation. These include improving visibility and communication with the CTSO community and Asset Owners, establishing tactical reporting, defining roles and responsibilities, developing SCITT (Security Content Automation Tool), making proactive use of intelligence, refining RCA processes, enhancing event feed monitoring, and tuning existing events. The Governance model enhancement phase focuses on rebuilding the from its basic functions to align with the proposed TOM.
This involves setting up a more robust governance structure that includes all stakeholders, ensuring clear visibility and communication paths for seamless collaboration. Finally, in the Solution Design phase, the overall architecture and capabilities of the system are reviewed and adjusted to meet the requirements of the new operating model, with the aim of creating an efficient and effective SOC capable of adapting to future business dynamics.

Both technical and service designs need to be developed based on the proposed future state model, which serves as the blueprint for the client's Security Operations Center (SOC). The design activities include creating a data model, defining use cases, developing an architecture, and aligning devices with those use cases, as well as setting up UCMDB tactical and long-term strategies for consolidation, designing the future state operating model, and integrating service desks. The technical design covers the architecture, device alignment to use cases, data model creation, and management of super connectors (FWCM) from the local markets. The service design covers the definition of the future state operating model, the service catalogue, the VIS Service, and metrics and reporting, including Key Performance Indicators (KPIs), Service Level Agreements (SLAs), and Operational Level Agreements (OLAs). Simplified versions may be created initially and refined over a 6-month period as the new service matures. Specific roles are also defined: SPOC for the local markets, IMS for incident management, NSU for network service unit notifications, SSCH for operational, security, and infrastructure changes, and EPDM (Device Manager).

Appendix A provides a data modeling case, not detailed in the main text, which outlines the basic requirements for automating the data model of an ArcSight SIEM solution.
The primary objective is to fully automate the data model, which necessitates a phased approach due to the incomplete information available in the CMDB and IPAM databases. Key objectives include utilizing the customer's CMDB as the master database for asset management, integrating their IPAM for network management, and incorporating TCP/UDP service inventory and vulnerability data from McAfee solutions. Services such as financial systems, payment card systems, and human resources systems, which are currently unclassified in the CMDB, also need to be classified according to confidentiality, integrity, and availability values. The integration will involve enriching CMDB data with IP address information to support the ArcSight SIEM solution.

The process involves creating a dedicated ArcSight Asset Database that consolidates data from multiple sources, including the CMDB and DNS. This database uses the hostname as the key index value to relate IP address information from DNS, and imports data from IPAM to add subnet and category information for the ArcSight asset categories. The transformed dataset is then exported in the ArcSight Archive bundle (ARB) format and imported into ArcSight using a Perl script. To maintain efficiency, the categorization of zones such as VPN DHCP ranges is manually updated in the IPAM database, and the assets then inherit these categorizations. Finally, vulnerability data is imported for each asset based on its IP address and key index value.
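A rough sketch of the consolidation step described above, under assumed record shapes: CMDB records keyed by hostname, DNS mapping hostname to IP, IPAM mapping subnet to zone category, and vulnerability data keyed by IP. The field names are illustrative, and the ARB export step is omitted:

```python
import ipaddress

def build_asset_db(cmdb, dns, ipam, vulns):
    """Consolidate asset data keyed by hostname, as per the appendix.

    `cmdb`, `dns`, `ipam`, and `vulns` use simplified, assumed shapes;
    the real sources are the customer's CMDB, DNS, IPAM, and McAfee data.
    """
    assets = {}
    for host, record in cmdb.items():             # CMDB is the master source
        ip = dns.get(host)                        # enrich with IP via hostname key
        asset = {"hostname": host, "ip": ip, **record}
        if ip:
            addr = ipaddress.ip_address(ip)
            for subnet, category in ipam.items():  # assets inherit IPAM zone category
                if addr in ipaddress.ip_network(subnet):
                    asset["category"] = category
                    break
            asset["vulnerabilities"] = vulns.get(ip, [])  # vuln data keyed by IP
        assets[host] = asset
    return assets

db = build_asset_db(
    cmdb={"pay-01": {"service": "payment card systems"}},
    dns={"pay-01": "10.1.2.3"},
    ipam={"10.1.2.0/24": "PCI zone"},
    vulns={"10.1.2.3": ["CVE-2013-0001"]},
)
print(db["pay-01"]["category"])  # PCI zone
```

Keying on hostname and enriching from DNS, as the appendix describes, means assets without a DNS entry simply carry no IP, category, or vulnerability data; that gap is what makes the phased approach necessary.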

Disclaimer:
The content in this post is for informational and educational purposes only. It may reference technologies, configurations, or products that are outdated or no longer supported. If you have any comments or feedback, kindly leave a message and it will be responded to.
