
Complete Enchilada Presentation

  • Writer: Pavan Raja
  • Apr 8, 2025
  • 144 min read

Summary:

The notices provided are for internal use at Hewlett-Packard (HP) and are part of a series related to security measures and software resources management. Here's an overview of each notice and what it entails:

1. **ArcSight Proof of Concept Boot Camp Training**: Designed for users who need to be proficient in using ArcSight products. Registration is handled via email or phone call with Philippe Jouvellier at the Global Channel Partner Management Office.

2. **Resources for Downloading Demo VMs and Learning Materials**: Users can access various resources, including:

  • **ArcSight Demo VMs**: Downloadable virtual machines containing ArcSight software; specific passwords are required to unzip them.

  • **HP ArcSight Self-Learning Resources**: Multiple resources such as partner portals, Software Support Online, and HP ESP University, with community features for learning and collaboration.

3. **Partner Portals**: Users can access several partner-specific portals:

  • **HP Partner Portal**: General portal providing information on becoming a partner, accessing training and certification, and sales administration resources.

  • **HP Software Partner Central**: Focuses specifically on enabling partners for HP Software products by offering sales and marketing collateral, enablement tools, pricing guides, and demo software.

  • **HP ESP Partner Central**: A portal tailored to HP Enterprise Security Products (ESP) partners, with community forums, training materials, and support.

  • **Protect724**: An online community portal for accessing product documents, webinars, video tutorials, and more related to HP security solutions.

  • **CloudShare Virtual Demo/PoC Environment**: A virtual environment for testing and demonstrating ArcSight capabilities.

These notices are part of a series numbered 1429 through 1448, each corresponding to another internal resource or tool provided by HP. The content in these notices is copyrighted by Hewlett-Packard Development Company, L.P., and it is important to follow the instructions carefully to ensure compliance with usage rights and restrictions.

Details:

This document outlines a training event called the "HP ArcSight Partners Enablement Proof of Concept Boot Camp." The primary objective is to enhance the technical skills of participants in using HP ArcSight Solution Products. The session covers various topics including new features and functionality, data normalization, correlation techniques, user interface design, the network model, content practices, rule creation, and system installation. The boot camp is designed for security professionals who are technically competent in computer networking, proficient in debugging applications, experienced with IT security solutions, and comfortable working within Linux/BSD environments. Attendees typically work on engineering solutions for customers and are experienced in demonstrations and proofs of concept. The session includes practical workshops led by an instructor with a background at various companies including Hewlett-Packard, Arche Group Siemens, Telindus, EADS (now Airbus Cassidian Cybersecurity), and currently at HP as part of the HP Software and HP Enterprise Security Products division.

The agenda consists of two parts, each covering different modules related to ArcSight Logger, CloudShare configuration, an ArcMC solution overview, a review of the individual components of ESM (Enterprise Security Manager), and hands-on lab sessions for configuring connectors on ESM in a CloudShare environment. This is a summary of the agenda for an HP ArcSight Proof of Concept Boot Camp held on June 30, 2015 in Johannesburg. The session covers various topics including an introduction to HP ArcSight, use of local and global variables in rules, Active Lists & Session Lists, Dashboards & Data Monitors, Workflow and Case Management, managing users and permissions, outlining an HP ArcSight Proof of Concept, use cases, sizing and architecture, and ArcSight Q&A. The session includes a lab during which attendees can apply the learned concepts to practical scenarios. The event is aimed at introducing participants to HP's ArcSight solution for threat monitoring and risk management across enterprises and SMBs.

As of June 6th, 2014, ArcMC software and appliance models replaced the existing Connector Appliance (ConApp) offering in all aspects, including features and functionality. HP's ArcSight Solution Products family includes products like ArcSight Management Center (ArcMC), Logger, Connectors, and ArcMC connector servers for centralized management, interactive discovery, threat response, manageability enhancements, and more. These solutions aim to improve forensic data analysis and reporting capabilities through compliance insight, identity view, reputation security monitoring, and threat detection across various form factors such as appliance, software, virtual machine, or cloud environments. The ArcSight Logger, Express, and ESM products are designed to meet business needs for real-time correlation, log management, user/flow/application monitoring, quick investigations, compliance reporting, and reputation analysis. They offer a range of licensed product options including pay-as-you-grow license models, static correlation, and multi-language support. Compliance Insight Pack, Identity View, Reputation Security Monitor, and Threat Detector are some of the specific packs included in the portfolio to address different security concerns and regulatory requirements. The text then introduces a management tool called ArcSight Management Center (ArcMC), which helps simplify managing multiple HP ArcSight products.
Some key features mentioned are automating change management, reducing the need for resources to handle security info and events, making large deployments easier to manage, optimizing bandwidth usage for log collection, supporting IT operational analytics, and unifying different parts of the HP ArcSight deployment. There's also a mention of a patch or update (Maintenance release) that supports managing Logger 6.0 and 6.0 P1, fixing some security vulnerabilities like Bash Code Injection Vulnerability and POODLE vulnerability, and updating timezone data for compliance with changes in Russia. Additionally, the text highlights differences between "ConApp" and "ArcMC," stating that ArcMC has more robust management capabilities. The text provides an overview of the capabilities and architecture of HP ArcSight Management Center (ArcMC), a system designed for managing and monitoring various components such as Loggers, SmartConnectors, and other connectors. Key features include: 1. **Bulk Management**: ArcMC supports bulk management of managed systems including software version upgrades on ArcSight Loggers. 2. **User Configuration Management**: It facilitates configuration management for ArcSight Loggers, ensuring that settings are consistent across multiple devices. 3. **Configuration Change Tracking**: The system tracks changes in configurations to maintain an audit trail and ensure compliance with change management policies. 4. **Health Monitoring Dashboard**: Provides a centralized dashboard for monitoring the health of all managed products and SmartConnectors. 5. **Remote Configuration**: Allows configuration settings for devices like BlueCoat and WUC SmartConnectors to be set remotely. 6. **Hardware and Software Form Factors**: The system can operate in both hardware (Form Factor) and software-only configurations, with the Management Server managing a variable number of connectors ranging from 4 to 1000 depending on its configuration. 7. **Centralized Management**: All components such as Loggers, SmartConnectors, Connector Servers, and more are managed through ArcMC's centralized interface, making it easier to oversee large-scale deployments. 8. **Monitoring Connectors**: Includes functionalities for managing and monitoring both ArcMC Connector Servers and individual connectors like Loggers and SmartConnectors. 9. **Scalability**: The system can scale up to manage a significant number of managed devices, suitable for enterprise environments where numerous systems need centralized management. 10. **Compliance and Reporting**: Provides the capability to track changes in configurations, monitor health statuses, and generate reports on compliance with IT standards and regulatory requirements. This document provides an overview of ArcSight Management Center (ArcMC), a system for managing and centralizing the administration of multiple ArcSight products such as connectors, connector servers, and loggers across various sites. Key definitions include hosts, nodes, golden configurations, subscribers, pushing configurations, compliance, and more. The main functionalities of ArcMC include user interface enhancements, expanded management capabilities for ConApp connectors, support for over 500 products, unified privileged user management, and efficient resource allocation by reducing operational expenses and support resources required for managing ArcSight products. 
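
Among the functionalities above, the health monitoring dashboard is easy to picture in code. Below is a minimal, illustrative Python sketch (not ArcMC's actual implementation) of rolling per-node health samples into the 1-minute, 5-minute, and 1-hour views described later in the document; the node names and metric fields are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean

# One health sample per managed node per minute; field names are illustrative,
# loosely following the metrics mentioned in the document (CPU usage, EPS In).
samples = [
    {"node": "logger-01", "minute": m, "cpu_pct": 40 + m % 5, "eps_in": 900 + 10 * m}
    for m in range(120)  # two hours of 1-minute samples
]

def rollup(samples, window_minutes):
    """Average each metric over fixed windows (e.g. 5-minute or 1-hour buckets)."""
    buckets = defaultdict(list)
    for s in samples:
        bucket = s["minute"] // window_minutes
        buckets[(s["node"], bucket)].append(s)
    return {
        key: {
            "cpu_pct": mean(s["cpu_pct"] for s in group),
            "eps_in": mean(s["eps_in"] for s in group),
        }
        for key, group in buckets.items()
    }

five_minute = rollup(samples, 5)   # 5-minute averages
hourly = rollup(samples, 60)       # 1-hour averages
print(hourly[("logger-01", 0)])
```
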
Additionally, the document compares ConApp with ArcMC in terms of functionality, including connector management, configuration bulk management, gold configuration compliance check, remote backup scheduling, user management, and monitoring capabilities. Lastly, it introduces new form factors for ArcMC: the ConApp migration to C640XM/C650XM series, and a new ArcMC appliance (C65XX series) specifically designed for sales. This document outlines technical requirements for software connectors, connector appliances, loggers, and the installation process of ArcSight Management Center (ArcMC) on managed hosts. The information is intended for use by HP and partner internal staff only. Key points include: 1. Software Connectors: They must run version 6.0.3 or later. Compatibility exists across Connector Appliance, Logger (L3XXX), or separate servers. 2. Connector Appliances: Must be of the Software type, version v6.4 P3, and compatible with models CX200, CX400, or CX500. Both software and hardware form factors require installation of ArcSight Management Center Agent 2.0. 3. Loggers: Support versions include v5.3 SP1, v5.5, v5.5P1, or V6.0 for both software and hardware types, corresponding to models LX200, LX400, or LX500. They too require the ArcSight Management Center Agent 2.0 installation. 4. Adding a Host: The process depends on whether it's a hardware or software form factor. For hardware, the ArcMC server automatically pushes the agent; for software, you must manually transfer and install it. Completion of the installation starts the Agent service automatically. 5. Verification: Check the status by navigating to Setup > System Admin > Process Status in the managed host’s GUI, or via command line execution: /current/arcsight//bin/ status (or similar for ArcMC). 6. Starting and Stopping the Agent: This can be done through the GUI or by running specific commands based on whether it's a hardware or software form factor. 7. Global Architecture: Shows how LOGGER, HOST-1, ArcMC, Web Client Server, HTTPs, and HW appliance are interconnected within the system architecture for managing hosts effectively. The document outlines the setup and management of an ArcMC (ArcSight Management Center) server. Key details include: 1. **Connector Server**: Host-2 uses HTTPS with a hardware appliance called ArcMC. It requires manual installation for specific software components like ConApp & Logger CWSAPI. Other connectors such as SmartConnector 1 on HOST-3 and others are mentioned without detailed information. 2. **ArcMC Management Center Processes**: Includes Apache HTTPd service, APS processes management, a PostgreSQL database, and web services including Logger Web service. 3. **Console Access**: The ArcMC Console can be accessed via URL using the hostname or IP address with a configured port (e.g., https://hostname:port). Initial login credentials are admin/password. 4. **Configuration & Management**: Outlines a 4-step process for managing configurations in ArcMC, which includes creating, importing, adding subscribers, and pushing configuration compliance checks. 5. **Configurations**: Lists various configurations available under ArcMC Configurations (e.g., Connector Appliance Configurations, Logger Configurations) with specific examples like BlueCoat, FIPS, Map File, Parser Override, Syslog Connector, WUC Internal/External Parameters, and Logger Configuration Backup. 
Overall, this document provides a structured guide for setting up and maintaining the ArcMC server, ensuring proper configuration and compliance management of various connectors and services. This document provides an overview of HP ArcSight products, focusing on their capabilities and configurations for data collection, event processing, and security management. Key points include: 1. **System Configuration**: Describes the local configuration settings such as authentication methods (local password), session management, DNS, NTP, network, SMTP, and SNMP settings. These are crucial for system health and ensuring proper connectivity and synchronization across the network. 2. **Health Data Monitoring**: HP ArcMC collects detailed health data from managed devices at 1-minute, 5-minute, and 1-hour intervals to support performance monitoring and alert generation. The data includes CPU usage, memory allocation, disk activity, network traffic, EPS In/Out, event statistics, thread counts, and hardware status including fans, voltages, power supplies, temperatures, and RAID configurations. 3. **Event Lifecycle**: Outlines a seven-phase process for handling events in the ArcSight system, from initial collection to final analysis and reporting. This structured approach ensures comprehensive monitoring and responsive management of security incidents. 4. **Connectors Overview**: Explains that connectors are software used to collect data from various devices (like network devices) and send normalized information to the ArcSight Event Management Component (ESM). HP ArcSight supports 300+ SmartConnectors, including both standard protocols and proprietary APIs for integration with different vendors' systems. 5. **Smart Connectors & Flex Connectors**: Discusses specific types of connectors available in the ArcSight ecosystem, such as SmartConnectors that are pre-built but configurable by users and FlexConnectors which can be customized based on requirements. 6. **Objectives**: Sets out the learning outcomes for participants after attending a workshop focused on HP ArcSight Connectors and their practical application, including understanding connector framework, installing and configuring connectors, using scripts, interpreting logs, and handling alerts. This document is intended for internal use within Hewlett-Packard (HP) and its partners, providing detailed technical information about the company's security solutions designed to manage data collection and event processing in enterprise environments efficiently. This document provides an overview of various components and functionalities within the HP ArcSight ecosystem, aimed at facilitating event collection, processing, and management. Key points include: 1. **FlexConnectors**: These provide a customizable framework for specific device event processing needs, leveraging similar infrastructure to SmartConnectors. They handle raw data conversion into normalized events, categorization, filtering, aggregation, and transmission to the ESM Manager. 2. **Connector Appliance**: Available in hardware and software versions, it features a web interface that functions similarly to FlexConnectors but operates independently for event processing. It supports various log sources, including push-pull protocols, SSL encryption, centralized management, and NIST 800-92 compliance. 3. 
**ArcMC (ArcSight Management Center)**: This is the central management console for all ArcSight products, supporting data collection from connectors, event processing, filtering, aggregation, enrichment, and distribution to ESM Manager. It includes a CEF normalization process that involves parsing/normalization, categorization, network modeling, mapping, and enrichment. 4. **Common Framework/Component Flow**: The document outlines the sequence of data processing within ArcSight connectors, from raw event collection through various stages including parsing, normalization, categorization, filtering, aggregation, time correction, name resolution, network modeling, and finally to destination transport. 5. **CEF Normalization**: This involves converting unstructured log data into a structured CEF format for better management and analysis across the ArcSight suite of products. This document is primarily intended for internal use by HP and its partners, emphasizing security and confidentiality with a focus on training purposes only. This document outlines the structured enrichment process for network security events using ArcSight Forwarding Connector, focusing on Cisco PIX devices. The goal is to categorize and normalize these events for better understanding and future forensic analysis. Key components include: 1. **Event Schema Groups**: Defined by timestamps, event identification, and classification, this includes root-level details like Event, Category, Threat, IP Endpoint, File State, Request, Workflow, and Custom groups. 2. **Categorization**: Utilizes 6 dimensions to add meaning to events, requiring specific fields (Device Event Class ID, Device Product, Device Vendor) to be populated for categorization. This process helps in profiling events in a vendor-neutral context and applies predefined Event Categories. 3. **Normalization and Categorization Explained**: The example provided shows how raw network logs from devices like Cisco PIX are transformed into structured data using the defined schema, enhancing clarity and future analysis capabilities. 4. **Sample CEF Fields**: This section is not detailed in the text but implies that specific fields (around 400) relevant to Common Event Format are used for normalization and categorization, providing a comprehensive view of events across devices. The overall benefit of this structured enrichment process is future-proofing and efficient forensic analysis, enabling better decision making and compliance with regulatory requirements. The summary highlights the categorization of events within ArcSight's Event Schema, which is used to evaluate and display the core meaning of an event. This process occurs after event parsing and involves setting six category fields for each device-specific event class ID. These categories include Object (the target object), Behavior (action/behavior associated with the event), Outcome (success, failure, or attempt), Technique (type of event related to a security domain), Device Group (dimension added for multi-purpose products), and Significance (to distinguish between normal and hostile events). The categorization happens on ArcSight connectors and involves using a mapping table (categorization file) where each device has its own unique Event Class ID. This method helps in identifying the type of event, which is crucial for monitoring network intrusion detection systems (IDS) effectively. 
The provided example shows how different types of events are categorized based on their nature and outcome, such as Network IDS events like "Host/Application/Service /Modify/Content" being categorized under "/IDS/Host/File Integrity", indicating a modification in content with the device group being File Integrity and the outcome being an attempt. This document outlines various aspects of the ArcSight connector system from Hewlett-Packard (HP). It covers a wide range of topics including authorization, code applications for managing unauthorized access, scanning protocols, and different types of connectors used for collecting data from various log sources. The connectors are categorized based on their functionalities such as File Connectors, Database Connectors, Scanner Connectors, REST API Connectors, SNMP Connectors, Microsoft Windows Event Log Connectors, Syslog Connectors, and FlexConnectors. Specific examples include the Syslog Daemon Connector which is widely deployed for forwarding data via UDP 514 to a connector's IP address, and the HP ArcSight Connector that handles varying throughput rates depending on the log source type. The document also provides information on the internal workings of the connector servers, memory usage, and processing capabilities, as well as EPS sizing rates specific to different types of sources. The document provides an overview of the HP ArcSight Connector, specifically focusing on its compatibility with various operating systems, container usage, supported communication modes, directory structure, installation process, and upgrade packages. Here is a summarized breakdown of the key points discussed: 1. **Operating System Compatibility**: The ArcSight Connector supports several versions of Windows (including 32-bit and 64-bit) and Linux distributions such as RHEL and SUSE, along with Solaris and AIX. It is certified to work on specified platforms and supported means that the platform has been tested by HP for minimum functionality, while certification indicates additional regression testing after an update. 2. **Containers in JAVA**: The document explains that when referring to a connector, it often implies a container, which is essentially a JVM instance or installation of Java. Multiple connectors can be installed within one JVM, with up to 32 connectors possible across several containers managed through ArcMC 2.0 Connector Server. 3. **Communication Modes**: The connector supports SSL (uncompressed), console, cert, FIPS 140.2, XML via SSL, Express/ESM, and SmartMessage. These modes enable secure communication between the connector and other systems. 4. **Directory Structure**: It describes the hierarchical structure of connectors within the ArcSight environment, with examples such as C6502 having 2 containers, C6504 having 4 containers, and C6508 supporting up to 32 connectors. 5. **Installation and Configuration**: The installation process involves downloading the appropriate binary for the OS, running it, choosing a connector type, providing necessary inputs, and installing all required binaries in one folder. Multiple connectors can be installed on the same OS in separate folders, with each install capable of hosting several connectors. 6. **ArcSight Upgrade Packages**: The document mentions that updates are available through software support online for download, which update content between the Manager and SmartConnectors via ArcSight Update Packs (AUPs). 
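
As a concrete illustration of the normalization and categorization described above, here is a small Python sketch that turns a raw, made-up Cisco PIX-style syslog line into CEF-style fields plus the six category dimensions. The CEF header layout (CEF:Version|Vendor|Product|Version|Event Class ID|Name|Severity|extensions) is the standard one; the specific field values and category assignments are illustrative assumptions, not taken from an actual ArcSight categorization file.

```python
# Illustrative only: a raw firewall-style syslog line and the kind of
# normalized, categorized record a connector produces.
raw = "%PIX-4-106023: Deny tcp src outside:198.51.100.7/4432 dst inside:10.0.0.5/443"

normalized = {
    "deviceVendor": "Cisco",
    "deviceProduct": "PIX",
    "deviceEventClassId": "106023",   # with vendor/product, the key used for categorization
    "name": "Deny inbound tcp connection",
    "sourceAddress": "198.51.100.7",
    "destinationAddress": "10.0.0.5",
    "destinationPort": 443,
}

# The six categorization dimensions, as they would be set from a mapping
# (categorization) file; the values below are assumptions for illustration.
categories = {
    "categoryObject": "/Host/Application/Service",
    "categoryBehavior": "/Access",
    "categoryOutcome": "/Failure",
    "categoryTechnique": None,
    "categoryDeviceGroup": "/Firewall",
    "categorySignificance": "/Informational/Warning",
}

def to_cef(e):
    """Render the normalized event as a CEF line (header + key=value extensions)."""
    header = "CEF:0|{deviceVendor}|{deviceProduct}|1.0|{deviceEventClassId}|{name}|4".format(**e)
    ext = "src={sourceAddress} dst={destinationAddress} dpt={destinationPort}".format(**e)
    return header + "|" + ext

print(to_cef(normalized))
```
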
The AUP files mentioned above can contain information relevant to both SmartConnector and ESM-related updates. In summary, this documentation provides a comprehensive guide on how to install, configure, and manage HP ArcSight Connectors across different operating systems, ensuring secure communication and efficient management of the connector infrastructure within an organization's IT environment.

The text then discusses updates delivered as AUP (ArcSight Update Pack) files used with HP's ArcSight products, specifically focusing on SmartConnectors. Key points include: 1. **AUP Capabilities**: AUPs provide updates for event categorizations, default zone mappings, and OS mappings. Content such as filters, rules, and dashboards, however, is not part of the AUP's scope. 2. **Upgrading SmartConnectors**: The text explains how to upgrade a SmartConnector using the latest AUP releases available for download from ESM (Enterprise Security Manager). Steps include copying the .aup file to the ARCSIGHT_HOME\updates\ directory on a running Manager, where any SmartConnector registered to this ESM automatically downloads and applies the update. 3. **ESM and Logger Integration**: SmartConnectors can send events to both ESM and Logger simultaneously, or through the AUP Master Destination feature if configured. This setup allows ESM to push AUP content to the SmartConnector used for its Logger destination. 4. **SmartConnector Upgrade Process**: Outlines the steps from initiating the upgrade command in the ESM Console, through the SmartConnector being upgraded, to checking the status of the upgrade in the ESM Console. It also explains how, after an upgrade, the Active Channel reflects the corresponding internal events. 5. **Directory Structure Updates**: After an upgrade, the directory structure is updated: the previous version of the SmartConnector is renamed to 'x.y.z', and the new upgraded version is placed in the 'Current' directory. 6. **Connector Appliance Limitation**: Connector Appliances do not support automatic deployment of AUPs; updates must be uploaded through their web-based user interface. 7. **Configuration Files Location**: SmartConnector configuration files are located within the /current/user/agent directory, with agent.properties containing global configurations and Agent.xml holding destination-specific settings.

The document then discusses SmartConnector configuration files, which are used to enhance or modify the functionality of connectors. These files include Parser Overrides, Map Files, and FlexConnectors. Parser Overrides allow users to override or fix the parsing logic for supported SmartConnectors by modifying a property file in the connector's file system. This can be useful when connectors do not parse data correctly, lack support for specific versions of devices, or fail to parse all necessary fields. To create a Parser Override, one must obtain the original parser and modify it using a property file. Map Files are used to enhance or augment events by mapping additional data, for example from source user names to device custom strings. This is done through a map.x.properties file containing entries such as event.sourceUserName and set.event.deviceCustomString1. Mapping is often not comprehensive, allowing for on-demand mapping via the console. FlexConnectors are connectors designed for non-supported sources and allow users to control parsing customization using FlexConnector kit functionality.
This feature enables tailoring of fields through parser-like expressions, providing flexibility in data handling. Map Files and Parser Overrides enhance event processing by adding detailed mappings or augmenting existing information from the source. These tools help bridge gaps in direct mapping between source and target schema, ensuring comprehensive data capture for analysis and reporting within the SIEM system. This document discusses various aspects of parsing and using FlexConnectors in HP ArcSight software, including parser overrides, map files, conditional mapping, and best practices for POC (Proof of Concept). It covers the importance of having a parser file name match its original parser, placing it under specific directories, and how to use FlexConnector kit functionality to tailor field parsing if needed. The document also mentions key features such as Syslog to Key=value conversion, multiline events, model import connectors for reference data, reputation data, identity information, business or transaction information, and merge capabilities. It provides guidance on troubleshooting with LogFu utility for log analysis and emphasizes the importance of reading the Flexconnector Guide, familiarizing oneself with the event schema, downloading the latest connector build, and understanding basic Regex for effective use of FlexConnectors. HP ArcSight FlexConnectors are versatile tools designed to collect data from various sources and formats. They can be categorized based on the type of file or database they process: 1. **Log-file**: Real-time fixed delimited text log format. 2. **Regexlog-file**: Real-time variable-format log processed by regular expressions. 3. **RegexFolderlog**: Non-real-time (batched), recursively reads variable-format log files in a folder. 4. **Multiple Folder Follower**: Non-real-time (batched), recursively reads multiple formats in different folders. 5. **Time-based Database**: ODBC/JDBC-compatible SQL database accessed via timestamp for rows retrieval. 6. **ID-based Database**: ODBC/JDBC-compatible SQL database accessed via unique ID for rows retrieval. 7. **Multi-Database**: Supports multiple databases (time-based and id-based). 8. **Syslog Regex add-on**: Adds regex capabilities to existing Syslog SmartConnectors. 9. **Snmp**: Supports SNMP trap versions 1, 2, and 3 for network management. 10. **XML Folder**: Recursively reads events from XML-based files in a folder. 11. **Scanner for Text, XML, Database**: Vulnerability scan extracts or inputs to ESM (Enterprise Security Manager) asset/network modeling. 12. **Rest API**: Uses REST API endpoints with JSON parser and OAuth2 authentication to collect security events from cloud vendors like Box, Salesforce, or Google Apps. Additionally, these FlexConnectors can be used for:

  • Executing commands from the ESM console or rules engine.

  • Reading from databases using SQL, either time-based or ID-based, stored in a DB view or table (a sketch of this collection pattern follows this list).

  • Accessing database sources via Squirrel SQL or similar tools.

  • Receiving SNMP traps, or collecting events over RESTful APIs with JSON output and OAuth2 authentication.
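
As referenced in the database item above, here is a minimal Python sketch of the incremental ID-based collection pattern (time-based collection works the same way using a timestamp column instead of an ID). The built-in sqlite3 module stands in for a real ODBC/JDBC source; the table name, columns, and checkpoint handling are illustrative assumptions.

```python
import sqlite3

# Stand-in for an ODBC/JDBC log source: a table with an increasing primary key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_log (id INTEGER PRIMARY KEY, ts TEXT, message TEXT)")
conn.executemany(
    "INSERT INTO audit_log (ts, message) VALUES (?, ?)",
    [("2015-06-30 10:00:00", "login ok"), ("2015-06-30 10:01:00", "login failed")],
)

def collect_since(last_id):
    """ID-based collection: fetch only rows with an id greater than the last one seen,
    which is how a connector avoids re-reading rows between polling cycles."""
    rows = conn.execute(
        "SELECT id, ts, message FROM audit_log WHERE id > ? ORDER BY id", (last_id,)
    ).fetchall()
    new_last_id = rows[-1][0] if rows else last_id
    return rows, new_last_id

rows, checkpoint = collect_since(0)           # first poll reads everything
print(rows)
rows, checkpoint = collect_since(checkpoint)  # next poll returns only new rows
```
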

To create these connectors, one can use various file types including CSV, XML, and databases like ODBC/JDBC compatible SQL databases. The documentation provided includes a demonstration of creating a CSV wizard for handling CSV files as an example. This document appears to be related to the configuration and usage of a software tool called "FlexConnector" within a larger system or documentation set likely pertaining to network management or security monitoring tools from Hewlett-Packard (HP). The content covers various aspects including installation, parsing mechanisms, properties file configurations, and troubleshooting tips. Here's a summarized breakdown: 1. **SmartConnector Installer**: Begin with the SmartConnector Installer for ArcSight FlexConnector. 2. **Regex Wizard**: Utilize the Regex Wizard to choose ArcSight FlexConnector Regex File as instructed. 3. **Logging Issues**: Notable log entries indicate issues such as SNMP login failures and decoding errors which are critical for understanding system health and connectivity problems. 4. **Parser Types**: Describes different types of parsers used in FlexConnectors including Delimited Log Parser, Regex Parser, Key Value parser, Database parser, SNMP parser, and XML parser. 5. **Choosing a FlexConnector**: Discusses factors to consider when selecting a FlexConnector based on characteristics like the number of files (single or multiple), file naming patterns, accessibility, and whether data is static or dynamic. 6. **FlexConnectors Development Wizard**: Provides guidance on starting the development wizard for creating custom FlexConnectors, including necessary locations and log samples required. 7. **FlexConnector Properties File**: Details how to configure properties in a properties file, including token declarations, sub-messages declaration, and mapping configurations such as ArcSight Agent Severity Mapping and Tokens mapping. This document is likely intended for internal use within HP or by partners who utilize HP's security and network management solutions, providing detailed guidance on configuring and troubleshooting the FlexConnector feature to parse and interpret log files from various sources effectively. This document provides an overview of various tools and configurations related to FlexConnectors, including their purpose, installation processes, and troubleshooting techniques. The primary focus is on the configuration file properties and syntax, tool descriptions such as Regex Buddy and SQuirreL SQL Client, and methods for troubleshooting issues within the system. The document provides a detailed guide on how to manually replay events using the ArcSight Logger, which is part of HP's ArcSight security solution. It outlines two methods for generating replay files: through the ArcSight Console and by creating CSV files for event replaying. Key steps include specifying the application type (ARCSIGHT_HOME\current\bin arcsightagents), ensuring the service is running, setting default event settings, sending events to the ESM Manager, and managing file content in the Replay Connector directory. The document also covers the main objectives of HP ArcSight Logger, including its architecture, graphical user interface, storage group, searches, and parameters, highlighting how it effectively tackles complex threats with integrated security features. 
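
The regex parser type and the token-to-field mappings described above can be pictured with a short Python sketch; the regular expression, token names, and field mappings below are illustrative assumptions rather than an actual FlexConnector properties file.

```python
import re

# A made-up variable-format log line of the kind a regex FlexConnector handles.
line = "2015-06-30 10:02:11 host=web01 user=alice action=login status=failure"

# One regex whose capture groups play the role of the declared tokens.
pattern = re.compile(
    r"(?P<ts>\S+ \S+) host=(?P<host>\S+) user=(?P<user>\S+) "
    r"action=(?P<action>\S+) status=(?P<status>\S+)"
)

# Token-to-event-field mappings, analogous to the mapping section of a
# FlexConnector properties file (field names loosely follow the event schema
# and are assumptions for illustration).
mappings = {
    "ts": "deviceReceiptTime",
    "host": "deviceHostName",
    "user": "sourceUserName",
    "action": "deviceAction",
    "status": "eventOutcome",
}

match = pattern.match(line)
if match:
    event = {mappings[token]: value for token, value in match.groupdict().items()}
    print(event)
else:
    print("line did not parse - a candidate for adjusting the regex or a parser override")
```
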
HP ArcSight Logger is a scalable log management solution designed for organizations looking to unify their logging across various data types and sources, providing efficient investigations with advanced real-time correlation capabilities. It supports multiple deployment options and can be used anywhere. The product offers several features including sensitive data activity monitoring, threat and fraud detection, transaction security, and more. HP ArcSight ESM and Express are part of the broader portfolio, focusing on sophisticated correlation for complex threats and mitigating modern threats to prevent breaches and loss. Integration with other HP products like HP Fortify, DVLabs, and BSM enhances its capabilities in application security, reputation data handling, and bi-directional integration. The solution supports a pay as you grow licensing model and offers a trial version, making it versatile for different business needs and stages of adoption. The HP ArcSight Logger is a network audit and security tool designed to manage and analyze large volumes of log data from various sources, including networks, systems, applications, and devices. It has evolved through several versions since its inception: 1. **Logger 1.x**: Developed as a low-cost solution to compete with more expensive log management tools like ArcSight ESM by providing high event rates at a lower cost. This version was not originally intended for large enterprises but proved successful in this role due to its affordability and competitive performance compared to higher-priced competitors. 2. **Logger 2.x**: Introduced reports, though these were slow due to the absence of indexing. 3. **Logger 3.x**: Added field-based indexing, significantly improving report generation speed and enabling faster search capabilities. 4. **Logger 4.x**: Integrated full-text indexing for even quicker data retrieval and analysis. 5. **Logger 5.x**: Extended functionality to include reporting on unstructured (unparsed/non-normalized) data through pipeline searches, charts, and dashboards. 6. **Logger 6.x**: Features enhancements such as super indexes, storage size optimization, map file improvements, and augmented peering capabilities, further enhancing its performance and scalability. The ArcSight Logger is available in multiple deployment options including a software version and hardware appliances (data center, SAN compatible, SMB or remote site). It supports various use cases including cyber security, compliance, IT operations, GRC, and log analytics. The appliance can collect logs from any source, consolidate them, correlate related data, collaborate across teams, and perform quick searches on years' worth of data for incident investigation. Key benefits include flexible storage options with up to 80 TB per local instance (with RAID enabled) or expandable to 1.6 PB via network configuration, high compression ratios enabling the archiving of IT data over extended periods, and efficient communication capabilities including HTTPS connections and built-in connectors for event forwarding. The Logger's ability to categorize events into groups based on devices helps in limiting the scope of search queries, thereby enhancing retrieval speed. This document discusses the architecture and functionalities of HP ArcSight Logger, a software used for event data management in security information and event management (SIEM) systems. Key aspects covered include storage rules, license enforcement, connector raw event statistics, and logger processes monitoring. 
**Storage Rules:** The system allows up to 40 storage rules and six storage groups with customizable retention periods and sizes. Each storage group has a predefined internal default events group. **License Enforcement:** License measurement is based on actual event size and total disk storage usage. Violations exceeding five in a 30-day window can lead to disabling search, reporting, and forwarding capabilities. For raw events, Logger uses the length of the event for daily rate monitoring. For CEF (Common Event Format) events, original event size is preserved through translation by an agent:050 event. **Connector Raw Event Statistics:** Logger relies on a specific ArcSight internal event named “Connector raw event statistics” which contains information about the total size of all incoming events from connectors since the last check. This event is generated by SmartConnectors and sent every five minutes to each configured destination. **Logger Architecture:** The document describes processes for monitoring Logger processes, including handling corner cases such as filtering, aggregation, and truncation in data transmission through multiple connectors. The document provides an overview of the HP ArcSight Logger, a security information and event management (SIEM) tool used in network architecture for threat detection and response. Key features include data storage and retention, proactive threat mitigation, long-term storage, and integration with other products like the Threat Response Manager and connectors such as the HP ArcSight Connector and Express. The HP ArcSight Logger can act as a funnel, forwarding selected events to an Enterprise Security Manager (ESM), utilizing both raw event data from network devices and CEF (Common Event Format) messages from ArcSight Connectors. It supports various deployment models including standalone, with ESM, in parallel with ESM, hierarchical deployments with multiple Loggers working together in a peer network for scalable log management. In summary, the HP ArcSight Logger is a versatile tool designed to collect, parse, normalize, categorize, enrich context, correlate events in real-time, and alert on threats across various network devices, integrating seamlessly with other security products and systems for an integrated threat detection and response architecture. This document provides detailed information about ArcSight Logger, a software tool developed by HP for managing and analyzing large volumes of enterprise logs. The summary below highlights key features and specifications: 1. **Hardware Requirements**: The system supports varying disk capacities including 160GB (L160GB), 40GB (L40GB), and 15GB (L15GB). It can handle data ingestion rates up to 180 GB/day, with indexing at 120 K eps. The online search speed ranges from 1-3 million events per second (eps) to 12-36 million eps, depending on the license size. Bloom filters scan speeds range from 500 million eps to 6 billion eps. 2. **Software Capabilities**: ArcSight Logger includes a user interface for local or external authentication and storage management features such as Storage Groups, Storage Policy, Devices, Device Groups, and up to 40 Storage Rules. It supports Google-like search interfaces with regulatory content pre-packaged for PCI, SOX, and ISO/NIST compliance. 3. **Performance**: The search functionality includes auto text typing, advanced search windows, and the ability to search through scanned events count. 
Search elapsed times are displayed alongside hit counts, offering a comprehensive view of search performance. 4. **User Experience**: The system offers a Live Event Viewer with simple controls allowing users to adjust the display buffer size between 20 to 5000 events (with a default setting of 1000) and reset the session after 15 minutes. 5. **Scalability and Durability**: With maximum online storage capacities ranging from 80TB to 960TB, ArcSight Logger can handle significant data volumes effectively over extended periods, as indicated by operational times up to 220 days or more at a sustained query rate of 5K eps. This document is intended for HP and partner internal use, providing detailed specifications on the hardware and software capabilities of the ArcSight Logger system, aimed at enhancing business intelligence through efficient log management and analysis. This document provides an overview of the capabilities and functionalities of a system used for conducting comprehensive searches across various types of data, particularly in scenarios involving internal espionage or other cyber threats. The system is capable of quickly gathering information about compromised systems, individuals involved, and the scope of exposure by searching both structured (e.g., login logs, file uploads) and unstructured data (emails, IM sessions). It can access a wide range of sources including applications accessed, system logins, websites visited, database edits, emails sent/received, CEF data, and raw syslog data from network devices. The search functionality is versatile, supporting keyword searches, field searches, regular expression-defined pattern searches (regex), and the use of pipeline operators to refine results. Boolean operators such as AND, OR, and NOT are used to connect various conditions in queries for more precise matching. Bloom filters may be employed based on query fields for performance optimization. Search results are displayed in a table with a self-ranging histogram, showing default entries per page. Users can expand or collapse raw event data as needed. The system supports indexing of both full-text keywords and specific fields to enhance search efficiency. Events that match the criteria specified (in terms of time range and constraints) appear in the Search Results table and are included in the event histogram. In addition to searching, this system offers powerful reporting capabilities, including extensive default content such as security, compliance, and operations reports, which can be scheduled or generated on demand for long-term analysis. Reports come in multiple formats (PDF, HTML, RTF, XLS, CSV, Email) allowing flexibility in distribution and presentation. Over 120 predefined report templates are available to meet various analytical needs. This document outlines the implementation process for HP ArcSight Logger, which is designed to search and analyze unstructured and structured data from various sources. The logger appliance requires a valid IP address for setup and has multiple network interface cards (NICs) that can be configured for different subnets. Access to the logger can be obtained through command line interface (CLI) or web browser, using default credentials. The initial setup involves setting up licenses, configuring storage volumes and groups, time zone settings, index fields for full-text indexing, and system locale. The process also covers configuration details such as setting up receivers, devices, device groups, storage rules, and forwarders if needed. 
It emphasizes the importance of understanding customer needs during the deployment to ensure effective use of the logger in various scenarios. The provided document provides an overview of HP ArcSight Logger, a software tool used for real-time and scheduled alerting from various sources such as e-mail, SNMP, Syslog, or directly to the HP ArcSight Logger. It includes features like regular expression queries for real-time alerts, database searches with up to 50 concurrent connections, and support for different development languages through a SOAP API for communication. Key points include: 1. The system is designed for internal use only within Hewlett Packard and its partners, with information subject to change without notice. 2. HP ArcSight Logger supports real-time alerts using regular expression queries and scheduled alerts based on specific event criteria, excluding internal events but including authorization/authentication delete operations. 3. Alerts can be notified via e-mail, SNMP, Syslog, or sent directly to the logger. 4. The system offers SOAP API for communication with various development languages like Java, Perl, Python, and Ruby. It includes three Web Services: LoginServices, SearchService, and ReportService. 5. New features in Logger 5.5 and 6.0 include scalability enhancements (up to 1000x faster than previous versions), enforcement of search capabilities without overpayment, simplified licensing for trials, and support for a wide range of OS and browser versions. The document also mentions an upcoming boot camp on HP ArcSight Proof of Concept focusing on the new features of HP ArcSight Logger V5.5 & V6.0, with objectives to understand these updates and new functionalities upon completion of the lab session. The article discusses updates in HP ArcSight Logger version 5.5, focusing on simplification in product structure and licensing changes. Key points include: 1. **Performance Enhancements**: Algorithmic changes are made for faster search performance, particularly useful for complex searches like "needle-in-the-haystack" scenarios. The software now supports updated operating systems (RHEL 6.5) and browsers (Firefox 24 ESR, IE 9 and 10), enhancing compatibility and usability. 2. **Security Updates**: The version includes fixes for defects found in previous versions to enhance security and reliability. 3. **Trial Logger Size Reduction**: A smaller trial version of the Logger is introduced to enable faster download, installation, and initial value realization. This reduction also applies to the number of SKUs available, simplifying product ordering from 115 to 26. 4. **New Licensing Model**: A new license and pricing model are implemented for the software logger SKU, which now comes with an entitlement to collect data at a rate up to 5GB/day without additional capacity being required. This simplifies licensing and usage by reducing complexity in storage requirements. 5. **Product Structure Simplification**: The appliance models (L3500, L7500s, etc.) have been simplified with fewer SKUs available, reflecting the streamlined approach to product management. Capacity can be added incrementally in 5GB/day increments up to a maximum of 500GB/day per instance. 6. **Scale and User Interface**: The Logger 6.0 version introduces features for scaling data storage and ingestion capabilities, supporting up to 20 instances with 8 TB per instance, suitable for handling large volumes of data up to 1.6 PB. 
A new simplified Web 2.0 user interface aids in easier deployment and management of extensive datasets. The product also includes static correlation capabilities for forensic investigations that are now more contextual and dynamic. Furthermore, a mobile app is introduced for continuous monitoring on the go. In summary, HP ArcSight Logger version 5.5 focuses on enhancing performance, simplifying licensing and purchasing processes, improving user interface and operational efficiency through technological advancements in data handling and correlation. The release of Logger 6.0 further emphasizes scalability, ease of use, and integration capabilities to manage large-scale digital investigations efficiently. This text appears to be a documentation or specification sheet for a product related to logging and data management, likely from Hewlett-Packard (HP). It provides detailed information about different SKUs (stock keeping units) of the Logger software, including its features, storage capacities, and performance enhancements. Here's a summarized breakdown of what is discussed: 1. **Logger Software Versions**: The document outlines two versions - Logger 5.5 and an unspecified future version labeled as Logger 6.0 (though no specific details are provided about this newer version). 2. **SKU Details**:

  • **Logger SKUs**: Various models like L3505, L7505S, L7505X, and L7505-SAN are listed with their respective capacities and data limits for online storage and physical space.

  • **Data Storage Limitations**: Physical space limitations range from 800 MB to 8 TB compressed. The daily data limit varies significantly among the models, reflecting different capabilities in handling log volumes.

3. **Enhanced User Experience for Trial Download**: A detailed user journey is described, starting with a search for a logging solution through organic search or other discovery efforts. Users are guided through registration and software download, culminating in an invitation to join the Protect724 community portal (Protect724.hp.com). The journey involves 8 steps designed to nurture the user experience, providing touch points throughout the trial period. 4. **Technical Enhancements**:

  • **Bloom Filters**: Introduced as a new data structure for rapidly determining membership in a set with high efficiency and low overhead. This helps in querying rare events more efficiently than traditional methods.

  • **CORR-engine Storage**: Utilizes this storage system to rule out ranges of log data on disk, improving performance by reducing the scope of search operations significantly.

  • **Search Enhancements**: Bloom filters are used not only for chunk creation and search but also for optimizing queries where results might be scarce, offering constant time results for zero-match queries and significantly speeding up searches for rare events.

5. **Benefits**: The implementation of bloom filters in Logger provides benefits such as immediate feedback with near-zero overhead for queries that return no matches, effectively handling "needle in haystack" scenarios where the sought information is scarce. 6. **Logger 6.0 Speculation**: Although not explicitly named or detailed, the document implies a progression from Logger 5.5 to an improved version (Logger 6.0) with enhanced features and capabilities as seen through its implementation of advanced data structures like bloom filters for faster querying and more efficient handling of large volumes of log data. This documentation is likely used internally by HP engineers and partners, detailing the specifications and advancements in their logging software solutions to manage and analyze log data efficiently. Logger 6.0 is a software update from Hewlett-Packard that introduces several new features and improvements aimed at enhancing scalability, accessibility, and usability. Some key highlights include: 1. **Scalability**: The system has been upgraded to allow for up to 19 instances of Logger (totaling 20), which equates to "8x more storage" as each instance can now handle up to 8TB, compared to the previous limit of 4.2 TB per logger or 5 loggers totaling 21 TB. 2. **Data Accessibility**: The system boasts a search capability that expands from handling 1.6 petabytes (PB) of data to potentially over 10 terabytes (TB), thanks to enhanced indexing and superindexing capabilities, which enable rapid searches even in large datasets. 3. **User Interface Enhancements**: A new Web 2.0 User Interface is introduced for simplified navigation, with improved aesthetics that are consistent across the platform's solutions. This UI redesign also includes more compact menus and additional screen real estate to accommodate more data visualizations and configurations. 4. **Mobile Connectivity**: Users can access their Logger data through a free mobile app (available for iOS devices) which allows them to view alerts, reports, and dashboards on the go. 5. **Security Features**: The system includes integrity checks via hash validation to ensure that stored data has not been altered, enhancing security and reliability. 6. **Reporting Enhancements**: Reporting functionality is now more accessible by allowing users to drag-and-drop report components without extensive training, thanks to a user interface upgrade. 7. **Technical Support**: Improved support for the latest operating systems (e.g., RHEL 6.5) and browsers (Firefox 30 ESR, IE 10, and IE 11) ensures compatibility and performance across diverse technological environments. 8. **Resource Optimization**: Indexing improvements reduce the need for routine maintenance tasks such as defragging, which is now required about four times less frequently than before. 9. **Hardware Support**: Logger 6.0 supports specific models of its hardware appliances (L3505, L7505S, L7505X, and L7505-SAN), detailing the maximum data processing rates and storage capacities for each model. 10. **Software Licensing**: The software update requires no additional licensing fees, with options to upgrade from previous versions (either directly from 5.5 to Logger 6.0 or via a patch set from 5.5 Patch 1). However, not all older appliance models are supported for the upgrade. Logger 6.0 also introduces simplified SKUs and pricing, maintaining the same price point as its predecessor while offering more features at no extra cost. 
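
The Bloom-filter behaviour described above (constant-time "definitely not here" answers for zero-match queries) can be illustrated with a small, generic Python sketch; this is the textbook data structure, not Logger's internal implementation, and the chunk terms are made up.

```python
import hashlib

class BloomFilter:
    """Tiny, generic Bloom filter: fast membership test with no false negatives,
    a small false-positive rate, and no way to retrieve the stored items."""

    def __init__(self, size_bits=8192, hashes=4):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# One filter per chunk of indexed events: if the filter says "no", the whole
# chunk can be skipped without touching disk - the "needle in a haystack" win.
chunk_terms = BloomFilter()
for term in ["10.0.0.5", "alice", "login", "failure"]:
    chunk_terms.add(term)

print(chunk_terms.might_contain("alice"))        # True
print(chunk_terms.might_contain("192.0.2.99"))   # almost certainly False -> skip chunk
```
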
There's an option to download Logger 6.0 without licensing for users who don't need peering or reporting capabilities but still wish to benefit from community support resources. Logger 6.0 introduces several enhancements and new features, including faster fieldset retrieval, six new system-level fieldsets, the ability to import and export fieldsets, two new dashboards (Intrusion and Configuration Events, Login and Connection Activity), and a LOOKUP feature that allows augmenting search results with external tables by integrating threat and intel data. The LOOKUP feature enables users to pull columns from CSV files for use in Logger searches, supporting static correlation and joins within searches. The number of rows x fields supported is up to 5 million, and the results can be charted, redirected, or exported. However, lookup tables must be manually updated and are currently limited to CSV format with field names defined in the first row. The feature has size constraints: filenames may only contain alphanumeric characters and underscores; maximum file size is 50 MB (compressed or uncompressed); maximum disk space for storing Lookup files is 1 GB; and a maximum of 5 million entries per table. The number of rows loaded for lookup depends on the number of columns in the Lookup file, with up to 10,000 rows allowed if there are 500 columns and up to 1,250,000 rows if there are only four columns. This document outlines the use of lookup files in HP products for internal users, specifically focusing on searches and data validation processes. The main functions include performing simple and advanced searches with look-up tables to correlate events based on specific fields. Users can also download external CSV files such as Tor Exit Nodes from specified websites. Additionally, there is a discussion about combining multiple lookup tables in a search, using custom fields, and the Data Validation (DV) feature for Logger data file integrity checking. This text discusses several aspects related to data validation processes, system upgrades, and new features introduced in Logger 6.0. Key points include: 1. **Data Validation**: The process cannot be canceled once started and can take a long time for large datasets. It is recommended to schedule the process during off-peak hours and limit the timeframe to focus on specific data of interest. 2. **Data Validation Results**: In Logger 6.0, if the system is upgraded from an earlier version, legacy data will have a status of "N/A" because no hash validation data was stored at creation time. Future upgrades will retain hash validation data for integrity checking. 3. **New Features in Logger 6.0**:

  • **Shortcuts and Searches**: New search functionalities include using the term "$filter" to bring up saved filters and accessing saved searches with the term "$ss".

  • **User Interface Improvements**: The user interface has a new look, improved performance, and additional features such as 10K EPS static correlation, digital widgets for better visualization, increased peer search speeds, and more intuitive dashboards.

  • **Technical Enhancements**: Support for the latest operating systems (RHEL 6.5), browsers (Firefox 30 ESR, IE 10 & 11), and enhancements in data handling capabilities like SSH without challenge/response, RESTful APIs, integrity checks via hash validation, and options to retrieve oldest or newest events first in searches.

  • **Mobile App**: The mobile app now allows connecting to Loggers and viewing new dashboards.

4. **Upgrade Paths**: Existing customers can access the upgrade from a SSO site for Logger 5.5/5.5 P1, supporting upgrades from Logger Lx500, Lx400 series, Lx200, Lx100, and Lx000 to Logger 6.0. 5. **System Requirements**: The system is capable of handling a significant amount of data ingested per day, reflecting its capability as a "Big Data" solution in the field of security information and event management (SIEM). This summary highlights key changes and features introduced with Logger 6.0, emphasizing improvements in user experience, performance, and technical capabilities. This document outlines several key points for transitioning from the Logger Appliance to Logger SW, highlighting customer benefits such as future-proof investment options, scalability, and keeping up with technological advancements. It also provides details on a free trial of Logger 6.0, which allows users to evaluate enterprise features before purchasing. Additionally, it introduces CloudShare, a self-service cloud platform enabling HP ESP partners for various uses like sales demos, virtual training, and testing environments within the HP ArcSight ecosystem. The document concludes with instructions on how to log in and use CloudShare for creating HP ArcSight environments. This document outlines steps for creating an HP ArcSight Logger environment using HP's cloud service, CloudShare. It begins by selecting HP 'My Projects', then choosing the appropriate environment template (HP ESP), and finally selecting a Proof of Concept (POC) lasting one day. The next step involves adding an HP ArcSight Logger POC from the drop-down list, clicking 'ADD' to add it, and then clicking 'Add'. After saving the project, the document details preparing the VM (Virtual Machine) environments for deployment, viewing detailed VM information, and starting the ArcSight Logger Online POC. The document explains that Cloud Folders are private FTP folders within CloudShare which can be mapped as local drives on each machine in the environment. They allow users to upload and download files between their laptops and the CloudShare environment, move files between machines or environments, and access uploaded files across all CloudShare machines. Instructions for using Cloud Folders include clicking "Show my Cloud Folders" on the RDP (Remote Desktop Protocol) desktop to map it as a local drive. Overall, this document provides a step-by-step guide to setting up an HP ArcSight Logger environment within CloudShare, including details about how to use Cloud Folders for file management within the virtual machines. This document outlines the steps for uploading and downloading files using Cloud Folders from HP, including external access through Logger Online Proof of Concept (PoC). It also covers how to reactivate suspended environments. Additionally, there's a section on preparing content for a Log Management Solution, focusing on customer requirements and considerations when evaluating loggers. The document is intended for internal use by HP and its partners and contains information that may be subject to change without notice. This document discusses the capabilities and functionalities of Logger, a tool used for searching and analyzing data from various sources. The content covers familiarization with search functionality, specifically focusing on free text and structured searches, as well as pipeline query functionality to visualize results. 
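
Before the lab content below, here is a minimal Python sketch of the kind of CSV-based lookup described for Logger 6.0 (field names in the first row, columns pulled in to augment search results); the file contents, field names, and join key are illustrative assumptions.

```python
import csv
import io

# A lookup table in the format described above: CSV with field names in row one.
# (Illustrative content - e.g. a downloaded list of known-bad IP addresses.)
lookup_csv = """ip,threat_category
198.51.100.7,tor_exit_node
203.0.113.9,malware_c2
"""

bad_ips = {row["ip"]: row for row in csv.DictReader(io.StringIO(lookup_csv))}

# Illustrative search results (already-parsed events with a sourceAddress field).
search_results = [
    {"sourceAddress": "10.0.0.5", "name": "login ok"},
    {"sourceAddress": "198.51.100.7", "name": "login failed"},
]

# The join: augment each matching result with columns pulled from the lookup table.
for event in search_results:
    hit = bad_ips.get(event["sourceAddress"])
    if hit:
        event["threat_category"] = hit["threat_category"]

print([e for e in search_results if "threat_category" in e])
```
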
Additionally, it introduces the use of look-up files containing malicious IP addresses and DNS domains, which are integrated into Logger's functionality. Key points include: 1. Understanding how to perform both free text and structured searches within Logger. 2. Utilizing field sets to customize search results by selecting specific fields based on requirements. 3. Employing pipeline queries for visualizing data in real time and within dashboards, as well as exporting the findings into reports. 4. Introduction to look-up files that can be used with the Logger 6.0 VM to analyze malicious IP addresses and DNS domains. 5. Guidance on how to build custom queries using a demo script for structured search and advanced search functionalities. 6. Demonstration of how to visualize data quickly through field summaries, drill down into specific fields, and utilize field sets for flexibility in displaying information tailored to user needs.

This text primarily focuses on the capabilities of a search tool or platform within an organization's data management system, likely related to logging or monitoring systems used by Hewlett-Packard (HP). The document outlines various functionalities and tips for improving search performance. Here’s a summary of key points from the provided content: 1. **Search Combinations with Boolean Operators**: Searches can be combined using boolean operators such as AND and OR. For example, "sourceUserName= Mike AND deviceVendor= Microsoft" is a valid search; the AND keyword itself is optional, because adjacent terms are implicitly combined with AND when it is omitted. A search can also be chained into further operations using the pipe operator (|). 2. **Search Capabilities**: Within this system:

  • Users can isolate specific values or sets of data from structured or unstructured sources.

  • Data can be aggregated and counted, either by combining different fields or simply counting occurrences.

  • Searches can generate graphs or charts that display trends over time or according to another specified field.
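
Taken together, these capabilities are driven by Logger's search syntax. The queries below are illustrative sketches only, not taken from the training material: the field names are standard CEF fields quoted elsewhere in this summary, and the exact operator syntax may vary between Logger versions.

```
sourceUserName=Mike AND deviceVendor=Microsoft
deviceVendor=Microsoft | chart count by destinationUserName
categoryOutcome="/Success" AND NOT (destinationUserName IS NULL) | chart count by destinationUserName
```

The first line is a plain structured search, the second pipes the matching events into a per-user aggregation suitable for charting, and the third combines a structured filter with the null-value exclusion shown later in the dashboard section.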

3. **Search Optimization Tips**:

  • If searches are slow, consider examining the search term, breaking it down into smaller parts, and checking event rates to see if they are reasonable given the data volume.

• If IP address filtering is needed, avoid overly complex regex patterns (such as expressing "deviceAddress not within specific ranges" in a single expression); no hard practical limits have been found, but such patterns are rarely necessary and tend to slow searches.

4. **Dashboard Integration**: The document ends with a note about building a dashboard using previous searches, implying that these functionalities can be visually represented for easier access and analysis by users. Overall, the text provides detailed guidance on how to effectively use search tools within an enterprise environment for data management and visualization purposes. The document outlines a guide for creating and modifying dashboards using HP software tools. Here’s a step-by-step breakdown of the process: 1. **Creating a Dashboard:**

  • Open the application and click on "Create Dashboard."

  • Select "Dashboard Panel Button" to add new panels.

  • Provide a "Panel Title" and select "New Saved Search."

  • Give your saved search a name, then create a new dashboard and provide a "Dashboard name."

  • Choose the "Panel type" (chart or table) and specify the "Chart type" among options like column, bar, pie, etc.

  • Set the chart limit and click "Save." A new panel is created on the dashboard.

2. **Displaying a Dashboard:**

  • Select "Dashboard" from the main menu and choose your dashboard name from the dropdown list to view it.

3. **Modifying a Dashboard:**

  • Click on "Tools" > "Change Layout" to reposition panels. Save changes after modifications.

  • If there are null data fields affecting visualizations, adjust them by clicking "Configuration" > "Settings" and selecting "Saved Search." Edit the query to exclude null values: `AND NOT (destinationUserName IS NULL)`. Load the modified search for updated results.

4. **Advanced Search UI:**

  • Use the "Analyze Tab" in advanced search settings to determine correct search syntax, as shown with examples like `AND NOT (DestinationUserName IS NULL)`.

This document is intended for internal use within HP and partner companies, emphasizing control and customization of data visualization tools through detailed steps. This document outlines the creation of a dashboard in ArcSight Logger using specific queries related to login activities, including successful logins by user name, failed logins by user name, and logins by device product. The goal is to populate a Login Activity Dashboard with four different panels, each displaying search results from predefined queries. Here’s how it breaks down: 1. **Create the First Search**: This involves creating a query under the category "Behavior" set to "/Authentication/Verify" followed by applying a pipeline operator that charts count based on the outcome of authentication. Save this search as a new dashboard. 2. **Second Search**: Here, filter the results further using "categoryOutcome = /Success". Apply another pipeline operator to chart counts by destination user name and sort them in descending order based on count. Save this query into the same dashboard. 3. **Third Search**: Create a search for failed logins by setting "Behavior" to "/Authentication/Verify" and filtering where categoryOutcome does not equal /Success/. Again, apply the pipeline operator and save. 4. **Fourth Search**: This one involves querying based on device product during login attempts, using the same behavior but adding a filter for successful outcomes or failures as needed. Save this search into the dashboard. 5. **Dashboard Population**: Add each of these four searches (each represented by a panel displaying chart results) to the newly created dashboard. 6. **Export Capability**: Ensure that after creating the searches, you verify if exporting the results is possible and follow the steps provided under "Simple Reports" for export settings and options. 7. **Understanding Existing Reports**: For custom reports or when required, consider using available out-of-the-box reports within ArcSight Logger instead of reinventing the wheel. Focus on creating only if it adds significant value and time allows. 8. **Reporting Capabilities**: Recognize that Logger Reporting works best with structured data and understand how to navigate through reports built upon report queries, which can include parameters for dynamic runtime values. 9. **Report Management**: Know where to manage all types of reports, including defining types, uploading packages, and setting up report parameters within the administration section of ArcSight Logger. This helps in effectively organizing and using reporting capabilities. This document outlines how to customize an existing report using a Logger tool. The process involves several steps including saving the original report under a new name, editing the display fields, providing filter criteria, grouping and sorting options, and modifying queries if necessary. Here's a summarized version of the instructions provided: 1. **Save the Original Report**: Choose an existing report (e.g., 'User Investigation') and save it under a different name to work on a modified version. 2. **Customize the Report**:

  • Open the saved report and click "Customize Report."

  • Select the "Display Fields" to edit, then provide "Filter Criteria," Grouping, and Sort Orders as needed.

  • Remember to save your changes frequently.

3. **Preview and Run the Report**: Preview the customized report before finalizing it by selecting "Preview" and then clicking "Run Now." Check that the settings meet requirements and confirm the report is working correctly. 4. **Modify Queries (if necessary)**: For more complex edits, open the query editor within the customization window:

  • Select "Data Source," then click on the existing SQL Query to edit it.

  • Remove or modify any unwanted fields such as selection criteria. Ensure that the modified SQL syntax is correct before saving.

5. **Finalizing**: After making necessary adjustments, preview and run the report again to ensure accuracy. The final report should deliver the required information without errors. 6. **Advanced Customization**: Reports can be customized further by altering formats (e.g., Chart/Table), adjusting queries with SQL Editor tools, adding prompts for user input, and enabling drill-down options for deeper analysis. 7. **Lab Exercise**: Follow a specific example to practice customizing the 'Event Details by HostName' report:

  • Save it under a new name, open it in edit mode, remove or edit the selection field related to hostname, test the modified report, and finalize changes.
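
Because the underlying report definition is a plain SQL query (as described above), the edit in this exercise amounts to removing the hostname criterion from the copied query. The following before/after sketch is purely hypothetical: the table and column names are invented for illustration and will differ from the actual 'Event Details by HostName' query.

```sql
-- Hypothetical "before": the copied report query filters on a hostname parameter
SELECT deviceVendor, deviceProduct, name, destinationHostName
FROM   events
WHERE  destinationHostName = ?;

-- Hypothetical "after": the hostname selection criterion is removed so the report covers all hosts
SELECT deviceVendor, deviceProduct, name, destinationHostName
FROM   events;
```

As the lab notes, always preview and run the modified report to confirm the edited SQL is still syntactically valid.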

This summary captures the main steps for effectively modifying existing reports using Logger tools, emphasizing the importance of saving backups, editing fields, and ensuring query syntax accuracy.

This text appears to be a sequence of copyright notices for various documents related to internal use within Hewlett-Packard (HP) and with its partners. Each notice starts with the same boilerplate statement indicating that the information is copyrighted by HP and may change without notice. The notices are specifically restricted for use by HP employees and partner organizations involved in development or support activities related to HP's products and services. The documents are part of a larger set focusing on user login activity, event monitoring (including alerts), configuring settings, and using lookup files for static correlation and for enriching event data with additional contextual information. The numbers at the end of each notice reference specific document identifiers or pages within internal documentation systems.

This document provides information about the features and capabilities of HP's ArcSight Management Center (ArcMC), a system designed to centrally manage various ArcSight products including connectors, Loggers, and software connectors. ArcMC offers a single panel view for all managed ArcSight products, facilitating easier management and monitoring through functionalities like managing Connector Appliances, Loggers, and other connected devices. Future updates plan to enhance ArcMC by incorporating 100% of the Connector Appliance (ConApp) functionality, further expanding its capability to manage and monitor events effectively. The document also outlines the hardware specifications of the Management Server, which can support up to 1000 managed connectors depending on the form factor (either hardware or software).

This document outlines the structure and functions of "ArcMC" (ArcSight Management Center), a management system developed by Hewlett-Packard for managing and monitoring various components such as ArcSight Loggers, ConApp connectors, and SmartConnectors. Key differences from previous versions like ConApp are highlighted, including enhanced capabilities to manage software upgrades on Loggers, handle user configuration changes, track configuration changes with an automated dashboard, and remotely configure BlueCoat and WUC SmartConnectors. The document also provides a glossary of terms related to ArcMC, such as "host," which refers to a computing device under management by the system.
The document also compares ConApp (a software solution) with ArcMC (both a software solution and an appliance), highlighting the differences in features such as connector management, configuration and bulk management, compliance checking, user management, and more between the two platforms. It announces new form factors for both ConApp and ArcMC: the C65XX series as a new appliance for ArcMC, and migration options from older ConApp models (C640XM and C650XM) to newer versions. Finally, it provides technical requirements such as version levels for connectors and Loggers, both needing to be at v6.0.3 or later for ArcSight Management Center Agent 2.0 installation, with the software Connector Appliance specifically requiring version v6.4 P3. The document concludes by detailing how to add a host into ArcMC management, emphasizing that the procedure depends on whether the host is a software or hardware form factor and that ArcSight Management Center Agent 2.0 must be installed and running.

This document discusses the ArcMC (ArcSight Management Center) architecture and its components, focusing on the Agent's installation and management across different form factors—hardware appliances, software ArcMC, and software connectors. Key points include: 1. **Form Factor Definitions**:

• **Hardware Form Factor**: Applies to hardware appliances. The agent installer is transferred to the appliance host, the appliance then pushes or installs the agent, and the agent service is started once installation completes.

  • **Software Form Factor**: Refers to software ArcMC and connectors requiring manual installation of the ArcSight Management Center Agent, which starts as a process on the managed host after installation.

2. **Agent Installation**:

  • For hardware appliances, download the agent installer from HP's site, transfer it to the appliance, install manually, and then start the service.

• For software ArcMC or connectors, download the specific agent installer (e.g., ArcSight-ArcMCAgent-2.0.0.1151.0.bin) and install it manually on the managed host using the command line instructions provided in the document (a hedged shell sketch follows this outline).

3. **Agent Management**:

• Verify the status of the Agent by checking its process or service status in the GUI or on the command line interface, and ensure that it is running.

  • To start, stop, or restart the Agent manually:

  • Via GUI (Setup > System Admin > Process Status) for hardware appliances and software ArcMC.

  • Command line instructions for stopping/starting the agent are provided in the document.

4. **Architecture Overview**:

• The global architecture includes a web client, a server, HTTPS connections, and various services including Apache httpd, a PostgreSQL database, and a process executor.

  • Specific processes include apache (httpd service), ArcMC aps (process management), Database (PostgreSQL), and Logger Web service.

5. **Documentation Conventions**:

  • The document uses specific URLs for the ArcSight Management Center interface and mentions configuration files located under /config/server.properties and logs at /logs/default/.
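
For the software form factor, the manual installation described in point 2 above comes down to running the downloaded installer on the managed host and then confirming that the agent process is up. A minimal sketch, assuming the installer file name quoted above and a generic interactive install (the exact prompts and any service wrapper vary by release):

```bash
# On the managed software ArcMC or software connector host:
chmod +x ArcSight-ArcMCAgent-2.0.0.1151.0.bin
./ArcSight-ArcMCAgent-2.0.0.1151.0.bin      # follow the installer prompts

# Afterwards, confirm that the agent process is running on the managed host
ps -ef | grep -i arcmcagent
```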

This summary highlights key aspects of how to install, configure, manage, and verify the status of the ArcMC Agent across different form factors as outlined in this document. This document outlines the process for installing ArcMC from a root user account, which requires specifying an existing non-root user account and the port through which users will connect to the UI. Upon first login, the default credentials are Username: admin and Password: password. Configuration management follows a four-step process: create or import a configuration, add subscribers, push the configuration to them, and check subscriber compliance. ArcMC supports various types of configurations including Connector Appliance Configurations (such as BlueCoat, FIPS, Map File, Parser Override, Syslog Connector, WUC Internal Parameters, WUC External Parameters), Logger Configurations (Logger Configuration Backup, Logger SmartMessage Receiver, Logger Transport Receiver), and System Admin Configurations (Authentication Local Password, Authentication Session, DNS, NTP, Network, SMTP, SNMP). ArcMC collects health data from managed products at 1-min, 5-min, and 1-hour intervals to support charting and alert generation. Data includes CPU, Memory, Disk, Network, EPS In/Out, Event and Queue Stats, Thread Count, and hardware metrics like Fan, Voltage, Power Supply, Temperature, and RAID status.

This document outlines the details of HP ArcSight's Enterprise Security Manager (ESM) version 6.8c, focusing on its components and features for internal use by Hewlett-Packard Development Company partners. The ESM provides real-time correlation capabilities with enterprise-class SIEM, universal log management, user/flow/app monitoring, quick investigations, behavioral detection, fraud detection, and compliance reporting to address business needs such as zero-day attacks and PCI DSS compliance. Key features of ESM 6.8c include: 1. High Availability (HA) and Active Channels in the Web UI. 2. Query speed improvements with Bloom Filters. 3. Support for RHEL and CentOS operating systems. 4. Enhanced capabilities from CFC connectors, correlation enhancements, larger storage capacity up to 12 TB, transition to Java 7, and upgrades from specific patch levels. The document also provides an overview of the ESM main components such as ArcSight Command Center, Console, and various user roles with associated content and package options like dashboards, cases, identity views, rules, reports, filters, and more. Additionally, it details the resources available in the system including actors, active lists, active channels, assets, cases, connectors, stages, customers, and search parameters. Finally, a table lists the default content provided with ArcSight, which includes various compliance standards such as FISMA v5.0, HIPAA v2.0, SOX v4.0, NERC CIP v1.0, PCI DSS v4.0, and others, along with their respective active counts based on the version 6.5c SP2 release.

This document outlines the structure and functionality of the ArcSight ESM event management system, including its components and processes for data collection, processing, and storage. The main elements discussed are: 1. **Event Schema**: A comprehensive schema designed to collect, normalize, enhance, and monitor event data through seven lifecycle phases. This includes data collection from various sources, initial population, network model lookup, correlation evaluation, monitoring, investigation, workflow handling, incident analysis, and storage/archiving.
The schema is indexed by fields such as name, message, and time, enabling efficient querying and processing. 2. **CORR-Engine**: This engine processes all queries related to resources or events using MySQL, which then stores the events in a columnar format within the EVENTS STORE (ESM) and RESOURCES STORE, utilizing InnoDB storage for updates and deletes. Alternatively, it supports PostgreSQL for database management. 3. **Storage Management**: Events are retained for an active period and archived daily according to a schedule that ensures older data is progressively archived further back in time. This includes archiving data from the previous day every 24 hours, with events being grouped together based on their Manager Receipt Time (MRT). Overall, this system provides a robust framework for handling and managing large volumes of event data within an enterprise environment, facilitating efficient querying, correlation, and analysis to support IT governance compliance, such as PCI DSS v3.01, Sarbanes-Oxley Act (SOX) v4.0, and the Payment Card Industry Data Security Standard (PCI DSS). The provided text is a summary of various aspects related to storage and archiving in HP ArcSight ESM (Enterprise Security Manager). Here's a breakdown of the key points mentioned in the text: 1. **Active Jobs and Archives**:

• Events remain in active storage for their retention period and then drop off, but the corresponding archive copies remain indefinitely.

  • When an event reaches the end of its retention period, it drops off from active memory to be archived.

2. **Event Management**:

  • Events receive a time stamp at Manager Receipt Time (MRT).

  • The CORR-Engine processes events within their retention period, applying filters and rules for correlation.

3. **ArcSight Console**:

  • There are multiple interfaces available for managing ArcSight products: Web UI (ArcSight Command Center), Java Client, and a new user interface that unifies management across all ArcSight products.

• Features of the ArcSight Command Center include a Logger-like GUI for ESM, allowing administration tasks to be performed much as one would on Logger.

4. **Java Console**:

  • The version of the Java console must match the version of the ESM server.

• It requires that the manager's host name and IP address be added to the client's hosts file for access (an illustrative entry is sketched after this list).

5. **User Interface Features**:

  • The new user interface simplifies management across all ArcSight products, providing a unified platform for operations.

  • Console menus include options like Inspect/Edit, Navigator, CCE (Common Condition Editor), and various views such as events details & resources editor, viewer, charts, reports, dashboards, and active channels.

  • Events are color-coded by priority level, from 0 to 10. A lightning symbol indicates a fired correlation rule.
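
As noted for the Java Console above, the manager's host name must resolve from the client machine; where DNS is not available, a hosts-file entry is the usual workaround. A hypothetical example only (the address and names below are placeholders; on Windows the file is C:\Windows\System32\drivers\etc\hosts):

```bash
# Make the ESM manager's name resolvable from the console machine (Linux shown; run as root)
echo "192.0.2.10  esm-manager.example.local  esm-manager" >> /etc/hosts
```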

This summary captures the main functions and features of HP ArcSight ESM related to storage, archiving, and management through various user interfaces.

The provided text appears to be technical documentation or a guide related to HP's ArcSight Enterprise Security Manager (ESM), which is used for event correlation, management, and analysis. Here’s a summary of the key points from the document: 1. **Documentation Notice**: All sections are copyrighted by Hewlett-Packard Development Company, L.P., and subject to change without notice. The information is restricted for internal use by HP and its partners. 2. **Event Sequence and Correlation**: Describes how to drill down from correlated alerts into the base events and the rules that triggered them. These can be found in the Inspect/Edit panel within the Integrated Workflow Case Editor. 3. **ESM Architecture Sizing**: When sizing an architecture for ESM, gather information including event throughput requirements (EPS/EPD), event type requirements, log retention requirements, high availability/failover requirements, and additional customer-specific requirements such as bandwidth, NAT, MSSP, regional/global, compliance, and use case considerations. 4. **ESM Manager Architecture**:

  • The system is Java-based with a server that communicates only with the ESM database.

• The event workflow involves the CORR-Engine, dashboards, the user interface (UI), reports, and web notifications, with events stored on a daily basis using indexed fields.

• Connectors are available in several forms (SmartConnectors, hardware appliances, on-board connectors, and FlexConnectors such as those for DVLabs), covering log sources across security devices, cloud, applications, network, physical infrastructure, servers, CCTV, databases, and identity systems.

5. **Protocols and Ports**: Lists various protocols and ports used by the ArcSight ESM system. 6. **ArcSight ESM Process Management**: Details a single script that manages all ArcSight services, controlling their dependencies and startup sequence. The command line application for checking service status includes commands like start, stop, and status for all services as well as for specific components like manager, web, and execprocsvc (a hedged shell sketch of this script appears after the storage-layout bullets below). This documentation provides detailed information on how the ArcSight ESM system is set up, managed, and used to handle various types of events and data inputs efficiently within an organization's IT infrastructure.

This document provides an overview of several key components within an HA (High Availability) architecture for the ESM (Enterprise Security Manager) system, covering both software configuration and hardware details specific to Red Hat Enterprise Linux (RHEL) or SUSE Linux environments. The information is intended for internal use by HP and partner organizations.

### 1. ESM Software Installation and Configuration:

  • **ESM Software**: Installed in the `/opt` partition, this directory contains all ArcSight software and data under a single umbrella path (`/opt/arcsight`). This partition is owned by the `arcsight` user and should be operated on using this account rather than through `root`.

  • **Event Storage and Archiving**: The event storage is located at `/opt/arcsight/logger/data/logger`, while archived events are stored in `/opt/arcsight/logger/data/archives`. Each day's logs have a dedicated directory for archiving.
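
The single services script mentioned in the process-management point above ships with this installation and controls the start-up order and dependencies of every ESM component. A hedged sketch of typical invocations; the script and component names follow ESM 6.x conventions and should be checked against the installed release:

```bash
# Status / stop / start of all ArcSight ESM services, honouring their dependencies
/etc/init.d/arcsight_services status all
/etc/init.d/arcsight_services stop all
/etc/init.d/arcsight_services start all

# Individual components can be targeted as well, e.g. the manager process
/etc/init.d/arcsight_services start manager
```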

### 2. File System Support:

  • **RHEL**: Supports XFS and EXT4 file systems.

  • **SUSE Linux**: Supports the EXT3 file system format.

### 3. Network Interface Configuration in HA Setup:

• The primary ESM node (host name `esm`) has `16.103.74.24` on eth0, used for intranet communication, and `192.168.145.24` on eth1; the secondary node (host name `esm1`) has `16.103.74.224` on eth0 and `192.168.145.224` on eth1. The eth1 addresses carry the dedicated node-to-node link used for disk mirroring.

• The cluster is reached through the shared cluster service IP (`16.103.74.23`), which follows the active node.

### 4. HA with iPDU (Intelligent Power Distribution Unit):

• **iPDU Functionality**: The iPDU integration allows HA to power off a machine if both nodes believe they are primary, ensuring clean failover between the two ESM instances. This is orchestrated by Pacemaker, which also manages the service IP and the communication interfaces.

  • **Supported Hardware**: Only HP iPDU products are supported in the HA architecture setup. The Pacemaker STONITH agent via iPDU can remotely power on/off machines, providing a mechanism independent of primary hardware or software issues.

### 5. STONITH (Shoot the Other Node in the Head):

  • **Purpose**: Enables failover when the primary system is incapacitated and unable to release resources due to communication problems or software issues.

  • **Implementation**: The ideal scenario involves a power control mechanism like iPDU for independence from the primary system's hardware/software state, with network isolation (I/O fencing) as an alternative in some clusters.

  • **Fallback Method**: Default SSH based reboot control is used if direct communication fails, though this method requires that SSH access and a possible server reboot be feasible.

This documentation summarizes key aspects of configuring High Availability for the ESM system, including software installations, network configurations, hardware interaction (like iPDU management), and failover mechanisms like STONITH to ensure business continuity and fault tolerance in complex IT environments. The document describes an architecture for an Enterprise Security Module (ESM) using High Availability (HA) clustering with a primary and secondary node setup. Key points include: 1. **Primary and Secondary Nodes**: ESM runs on the primary node, while the secondary node is in standby mode. Both nodes have two interfaces - eth0 for intranet connection and eth1 for disk mirroring. 2. **Data Replication**: Data from the primary node is replicated at the block level using DRBD (Distributed Replicated Block Device). If the primary node fails, the HA on the secondary detects this failure, takes over the IP cluster alias address, starts up ESM, and adds the service IP to eth0. 3. **Failover Process**: Upon detecting a failure of the primary node, the secondary node automatically assumes its role by detecting the primary's failure, starting the ESM application, and replicating data from the failed primary disk if available. This results in minimal interruption of services. 4. **Managing ESM**: The document outlines steps for managing ESM, including starting/stopping ArcSight services with commands like `./arcsight setupmanager`, installing a license key using `./arcsight deploylicense`, and configuring the ESM Manager. 5. **Backup Overview**: It covers configuration parameter backup (with certain essential configurations backed up in configs.tar.gz) and database dump/import for restoring data from backups. Overall, this architecture ensures high availability and reliability of the ESM system through failover mechanisms and detailed management procedures. The provided text is a documentation guide for using CloudShare, a self-service cloud-based platform designed for HP ESP partners to conduct sales demos, virtual training, and testing environments with HP ArcSight solutions. To access the platform, partners need credentials and specific access rights granted by HP. Inside CloudShare, there are virtual environments dedicated to various HP ArcSight solution products such as Connector Appliance, Logger, Express, ESM, as well as for HP Fortify server and HP TippingPoint vSMS. The guide outlines the process of creating an environment for conducting proof-of-concept (PoC) tests with HP ArcSight by selecting from pre-defined templates provided under "My Projects" in the platform's interface. This document outlines the process of setting up and using an HP ArcSight ESM 6.5c environment in a cloud-based setup, known as CloudShare. The steps include preparing virtual machines, launching the software, accessing the console, adjusting settings for optimal performance, and utilizing Cloud Folders for file management between local and cloud environments. 1. **Preparation**: Two virtual machines are set up: one running Windows 7 (for use with RDP) and another running RHEL server configured to run ESM. 2. **Launching ESM**: After setting up, access the ArcSight environment through the 'Environments' or 'My Environments' menu in the software interface. The main dashboard should display "Environment is ready/running" indicating successful setup. 3. 
**Cloud Base Environment Details**: This section provides a detailed overview of the components included within the CloudShare environment, such as the Windows 7 console for RDP access and RHEL server with ESM installed. Additional features include ArcSight Command Center, CORR-engine, Web Link Replay, Events Connector, and EMS 6.5c. 4. **Accessing CloudShare Environment**: To view or interact with the environment, navigate to 'Environment Status' where you can click on 'View' to access detailed settings and functionalities. 5. **Cloud Folders**: These are private FTP folders within CloudShare that function like local drives for file transfers between your laptop and CloudShare machines, as well as among different environments. Users can map these folders as local drives and upload or download files using an FTP server address accessed from the main Web page under the Cloud Folders menu option. This document provides a comprehensive guide on how to set up and manage HP ArcSight ESM 6.5c in a cloud environment, emphasizing ease of use and accessibility for both technical and non-technical users through detailed steps and visual cues. This document provides a step-by-step guide on how to set up and test the Replay Connector for an FTP server (Cloud Folders) in ESM 6.5c, along with instructions for logging into the ESM Console and enabling the admin account if necessary. The process involves testing Alert events, setting up event replay from a Boot Camp environment, and ensuring that the Demo/Test environment is ready for use. Key steps include: 1. **Testing the Replay Connector**: This includes opening the Demo Replay Connector Desktop Icon, selecting the Test Alert tab, choosing the number of "Test Alert Event" to send out, clicking SEND, and checking if the same event(s) appear in the Viewer Panel. 2. **Logging into ESM Console**: The default credentials are provided (Username: admin, Password: password). If necessary, enable the admin account using specific commands. 3. **Testing Alert Events**: After logging in, select "Demo Live" Active Channel under the admin's Active Channels to display Test Alert Event(s) and confirm that ESM 6.5c is up and running. 4. **Setting Up Event Replay for Boot Camp**: Return to the Desktop RDP, open the Replay Connector window, select the REPLAY tab, choose event sets (e.g., arcexpressdemo.events, demo.events, etc.), set Max Rate to 50 events/mn, and click CONTINUE to start sending events to ESM. Note that event sending does not loop by default. This document is intended for HP and Partner Internal Use and contains information subject to change without notice, as per the copyright notice provided. This document provides a guide for configuring and using CloudShare with ArcSight ESM (Extended Security Manager) environment. The lab will help users to run ESM environments, start the Replay Connector, select demo events, use FTP for data upload/download, extend CloudShare environments, and explain ArcSight Default Content using the Java Console. To begin, follow these steps: 1. Connect to CloudShare at https://use.cloudshare.com using your credentials (typically your email address as username and a chosen password). 2. Navigate to 'My Environments' and run the HP ArcSight ESM 6.5c environment by selecting it from the blueprint list. Click on 'View' then select ArcSight Windows 7 Console to access it. 3. 
Start the Replay connector: Go to the Replay Connector window, choose the REPLAY tab, and select appropriate event sets such as arcexpressdemo.events, demo.events, demoexpress-sp1.events, and osLogging.events. Set the max rate to 50 events per minute and click CONTINUE to begin sending events to ESM. 4. The lab also covers using FTP for data management within CloudShare environments, as well as extending these environments according to specific requirements. Additionally, it provides instructions on how to use the ArcSight Java Console to explain default content. 5. Finally, ensure that event sets are selected correctly and understand that they do not loop by default. This lab is designed to be completed after logging into CloudShare with your credentials and navigating through the provided steps for successful environment setup. This document outlines a training session for technical personnel on HP ArcSight Solution Products, providing an overview of the tools available and how to utilize them effectively. The content covers starting up the Console, logging in as 'admin', navigating through resources, inspecting base events, using smart/flex connectors for data normalization and enrichment, and understanding the architecture and features of ArcSight solutions. The session targets security professionals with a background in event management who are comfortable working within console environments. It provides skills to enable technical personnel to effectively demonstrate proof-of-concepts or use HP ArcSight Solution Products in their professional roles. This document outlines a training session for HP ArcSight Proof of Concept (PoC) Boot Camp, held from March 31 to April 2, 2015, in Vienna. The boot camp is designed to introduce participants to HP ArcSight Solution Products and includes hands-on labs focusing on various components such as Logger, CloudShare, ArcMC, ESM, Rules, Variables, Active Lists, Session Lists, Dashboard's, Data Monitors, Workflow and Case Management, among others. Participants are encouraged to share their names, company roles, experiences with the ArcSight product, and expectations for the training. This text provides a step-by-step guide on how to create, prepare, and run an HP ArcSight environment, including the specific steps for setting up and using the software as well as accessing its features. It covers aspects such as creating environments, adding details, preparing VMs, launching the ES (Event Server) Mgr 6.5c, adjusting settings, and using event replay connectors. The document provides a guide for testing the Replay Connector, an alert replay tool used in HP's Event Support Manager (ESM) 6.8c platform for Windows 7-based servers. Here's a summary of the key steps and information provided: 1. **Testing the Demo Replay Connector**:

  • Open the desktop icon named "Demo Replay Connector".

  • Select the "Test Alert" tab.

  • Choose the number of "Test Alert Event" to send out.

  • Click "SEND" to initiate event transmission.

  • Monitor the ESM Console and check if the events appear in the Viewer Panel. Ensure that both the command window and the dialog stay open to maintain connector operation.

2. **Logging into ESM Console**:

  • Use credentials: Username: admin, Password: password.

3. **Testing Alert Events**:

  • After logging in, navigate to "Active Channel resource" under the Navigator panel, then go to "Shared folder", expand "ArcNet Active Channels", and double-click "Demo Live". If test alerts are displayed, it confirms that ESM 6.8C is operational and the demo environment is ready.

4. **Using Replay Connector for Events**:

  • In the RDP session of the desktop, select the Replay Connector window.

  • Go to the "REPLAY" tab and choose event sets (e.g., arcexpressdemo.events, demo.events, demoexpress-sp1.events, osLogging.events).

  • Set the maximum rate to 50 events per minute and click "CONTINUE". Events will start flowing after a few seconds. Note that they do not loop by default.

5. **Installing Syslog SmartConnector**:

  • For local VM installations, use either SSH connection or console connection from your host machine (alternative #1).

  • Install the connector in a new folder to avoid overwriting existing configurations.

  • Choose "Syslog File" and point it to the connector's file path (/var/log/messages).

  • Use the Logger utility in the terminal window to type a message, which will be written to /var/log/messages and collected by the Syslog File Connector, sending data to the ESM manager on the same machine.
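
The test in the last step is simply a syslog write that the file connector then picks up. A minimal sketch, run on the ESM Linux VM:

```bash
# Write a test line to syslog; on a default RHEL setup it lands in /var/log/messages,
# where the Syslog File SmartConnector reads it and forwards it to the local ESM manager.
logger "ArcSight boot camp syslog test"
tail -n 3 /var/log/messages
```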

This guide provides detailed steps for setting up and testing the Replay Connector as well as installing a Syslog SmartConnector in an HP environment using Linux. This document outlines the process of installing and configuring a Syslog Daemon SmartConnector on both Windows and Linux systems to forward events to an ArcSight Console for monitoring. Here's a summarized version of the key steps: 1. **Windows VM-1 (VM-2)**: Install the Syslog File SmartConnector as a standalone application, starting it manually from the command line in the connector bin directory. 2. **Linux VM-2 (ESM 6.8c)**: Modify the Rsyslog configuration to include an entry that points to the IP address of the Windows VM's SmartConnector on port 514. This is done by opening an SSH session with Putty from a Windows Virtual Desktop, editing the /etc/rsyslog.conf file to include the connector's IP and port, then restarting the Rsyslog service. 3. **Syslog Daemon SmartConnector** forwards events to the ESM manager (running on the same Linux VM) which is part of the ArcSight Console. The modified messages are then displayed in the Active Channel within the console for monitoring purposes. Key points include:

  • Connector installation and setup instructions for both Windows and Linux environments.

  • Modifications required in Rsyslog configuration files to enable communication between the Syslog Daemon SmartConnector and the ESM manager on the ArcSight Console.

• Because the connector is installed as a standalone application, it must be kept running continuously, either by leaving its command window open or by installing it as a service where the environment allows.
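
The Rsyslog change described above reduces to a one-line forwarding rule pointing at the Windows VM's Syslog Daemon SmartConnector, followed by a service restart. A hedged sketch (the address is a placeholder; a single `@` denotes UDP forwarding):

```bash
# Append a forwarding rule to /etc/rsyslog.conf and restart the daemon (RHEL 6 style)
echo '*.* @192.0.2.20:514' >> /etc/rsyslog.conf    # replace 192.0.2.20 with the connector VM's IP
service rsyslog restart
```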

This is a training document about HP ArcSight, a security information and event management (SIEM) tool. It explains the concept of a "Network Model" which helps in understanding where events are happening within a network. The model collects data on IP addresses, open ports, operating systems, known vulnerabilities, and applications running on devices in the network. This data is used to prioritize events based on the Threat Level Formula (TLF), which considers factors such as agent severity, mapping of reporting from security devices, and dynamic index of threats. The goal is to assess how important each event is for business decisions. This document discusses the importance of a network model in cybersecurity management using ArcSight ESM (Enterprise Security Manager). Key components include assets, zones, networks, customers, and locations, which are enriched through the ArcSight ESM to provide event priority. The network model helps identify vulnerabilities, compromised applications, and open ports by mapping out the physical characteristics and logical categories of assets within a network. This information is crucial for assessing asset criticality and determining log source connector relevance in a business environment. The document outlines how ArcSight ESM enriches events with additional data to improve event priority ranking, which is displayed visually using a geographical location map. The system automatically scans for vulnerabilities and assigns assets based on IP address or range, categorizing them according to their criticality levels and other defined characteristics. This prioritization allows easy identification of high-priority issues that require immediate attention, as indicated by color-coded numerical labels from 0 (very low) to 10 (very high). In summary, the network model is a fundamental tool in cybersecurity management for efficiently monitoring and responding to potential threats. It provides a structured approach to managing assets across networks, identifying vulnerabilities, and prioritizing events based on criticality levels, all of which are essential for maintaining overall security posture in an organization. The importance of asset modeling lies in its ability to provide detailed insights into an organization's assets, which are crucial for effective risk management and security operations. This is particularly relevant when aiming to enhance the precision of threat assessments (Threat Level Formula or TLF) and ensure that all assets within the network can be accurately identified, categorized, and monitored based on their relevance and criticality. The Threat Level Formula (TLF) evaluates the priority of events affecting an asset by considering several factors including agent Severity, Model Confidence, Relevance, and Asset Criticality. This formula helps to determine how important each event is within the network context, which is essential for prioritizing responses and investments in security measures. However, there are limitations to asset resolution, such as the inability to re-resolve assets that have already been processed due to changes in the asset model, which can lead to inefficiencies in managing references to assets on endpoints. Additionally, populating a network model involves several methods including console-based and smart connector-based approaches, each serving different purposes from individual configuration to automated scanning of vulnerability reports. 
Overall, effective asset modeling is critical for ensuring that security measures are both appropriate and efficient across an organization's infrastructure, enabling better protection against potential threats and vulnerabilities. This text outlines several methods for populating a network model within HP's ArcSight software. The main methods discussed are SmartConnector-based methods and an ArcSight-assisted method using an archive file from an existing configuration database. The SmartConnector-based methods involve the use of CSV files or data generated by vulnerability scanner reports to import asset, location, and asset category information into the network model. This method automatically updates new assets and modifies existing ones in the model without creating asset ranges, assuming that zones, networks, customers, and locations are already established. The ArcSight-assisted method involves exporting a network model from an existing configuration database, translating its format to match the schema of the ArcSight resource archive, and importing it into the Manager as such. Additionally, there's information about training sessions for HP ArcSight Proof of Concept Boot Camp focusing on Active Channels, Filters, Field Sets, and Query Viewers used in monitoring and investigations within network security systems. These tools help display real-time event information patterns, track problems, transform complex data into graphics, extract events from multiple systems, and provide high-level summaries for investigation purposes. This passage provides a detailed overview of active channels within the ArcSight ESM/Express system, which are designed for monitoring and investigating event data effectively. The lifecycle is divided into seven phases, each with specific objectives aimed at enhancing the management and analysis process of large volumes of collected information from various devices or sensors. **Active Channels Main Goal:** The primary function of active channels in ArcSight ESM/Express is to allow users to view event data for monitoring and detailed analysis purposes. They serve as a flexible means to display resources, aiding in the identification of events that are relevant to specific investigations. It's important to note that these channels are not intended for results presentation but rather for direct involvement in the investigation process. **Creating an Active Channel:** The creation of active channels involves navigating through the Navigator panel and selecting "Active Channels." This opens a window where you can define parameters based on your specific requirements, such as event types or rules testing with delayed events. Once defined, these channels provide a dynamic view into real-time events stored in the database or access to historical data for rule testing. The passage also highlights that active channels are instrumental in "finding" events of interest by providing a platform where relevant information can be filtered and displayed according to customizable parameters such as date & time stamps, device agent manager, event receipt times, and more. This capability aids users in pinpointing specific incidents for further analysis and reporting. Overall, the passage provides clear instructions on how to set up and utilize active channels within ArcSight ESM/Express for efficient monitoring and investigation of IT security events, ensuring that critical information can be accessed promptly and effectively. 
This document outlines the process of creating an "Active Channel" within a management tool, specifically for use by HP employees and partners. Active Channels allow users to filter and display specific events or data based on user-defined criteria. The steps include creating a new channel named "Investigations," adding columns such as "Category Device Group," dynamically filtering events related to a selected device group (e.g., Firewall), and saving the channel for future reference. Throughout this process, filters are used to define what events or data will be displayed in the active channel. This document provides a detailed guide on creating filters for events within ArcSight using the ESM Manager and Connectors. Filters are used to match specific conditions with event details, allowing for greater consistency and reduced errors in content building. The process involves selecting CEF fields, logical operators, evaluated conditions, and saving the filter once the condition is met. Some tips for creating filters include ignoring case sensitivity when evaluating certain conditions (e.g., usernames) by unchecking a specific box in the Insp,ect/Edit Panel. Adding a NOT operator before a logical operator can be necessary to exclude unwanted parts of an event, such as excluding IP addresses that are part of a subnet. The document provides step-by-step instructions for creating filters specifically for Firewall events using the Navigator panel. By following these detailed steps and utilizing the provided tips, users can effectively create filters tailored to their specific needs within ArcSight. This guide provides detailed steps for creating filters using the "Category Device Group" field within a firewall event log, as well as instructions for attaching these filters to active channels. The process involves selecting the filter tab, adding conditions, defining logical operators and terms, applying the filter, and saving it in the admin's content or attaching it to an active channel. The guide starts by instructing users on how to create a new condition using "Category Device Group" as the field for filtering, with examples of setting up filters related to firewall events. It then explains the process of creating a new filter and adding conditions based on authentication challenges being verified versus failing outcomes. The steps are reiterated in different sections for clarity and completeness. Furthermore, the guide mentions that after creating a filter, it should be saved or attached to an active channel for continuous evaluation. This is particularly useful for monitoring specific events like user login failures. The final part of the document touches on field sets, which are collections of fields meant to be displayed in various views such as Active Channels, providing flexibility and organization across different tools and interfaces. This document provides a guide on how to manage "Field Sets" in a system. Field Sets allow users to select a subset of up to 400 possible fields from predefined or user-defined sets, which can be shared and modified. Two types of fields are available: sortable and unsortable. Users can add specific fields by selecting them from the Available Fields window, reordering them using arrows, and then setting the Field Set as current for display in a grid view. The document also explains how to create "Query Viewers" which are used to define and run SQL queries on various resources such as trends, assets, cases, connectors, and events. 
These Query Viewers generate high-level summaries, simple reports, and baselines. The objectives of this lab include understanding the purpose of Query Viewers, defining their advantages, identifying their elements, creating Base Queries and Query Viewers for detailed analysis. Query Viewers are tools within ArcSight that allow users to run SQL queries, analyze trends, and perform drill-down investigations on network activity. They provide high-level summaries and non-event based summaries for monitoring system health and revealing trends. These viewers can display results in tables or charts formats and work with trend tables rather than event tables for faster result processing. Query Viewers need base queries as their foundation, which are SQL queries defined within the ArcSight Console. Once created, these base queries serve as the basis for reports, trends, and query viewers. The primary attribute of any query viewer is its reliance on a core SQL query, which must be first created in the Reports resource Queries tab before being referenced in Query Viewers. The provided text outlines the process of creating and using query viewers in ArcSight for network security monitoring. Here's a summarized breakdown of the key points: 1. **Query Creation**: Users can create base queries by selecting data fields, specifying functions to run on them (like sum or average), and adding sort or group-by conditions such as source address, zone, or priority. The query is created using an Inspect/Edit panel within the ArcSight Console Navigator under Report resources. 2. **Query Components**: Inside a base query, various components are available:

  • **Fields Tab**: This tab allows defining the main options for querying data including selecting fields to summarize and applying filters for consistency and error reduction through re-use of content.

  • **Group By & Order By**: These features help in dividing results into separate buckets and sorting them according to specified criteria using functions like count, max, min, average, sum, and time frame grouping.

3. **Query Viewers**: Pre-built query viewers are available within the ArcSight system. Users can access these by opening the QUERY VIEWERS resource from the Navigator panel, right-clicking on a specific folder (like ArcSight Administration), and selecting "New Query Viewer". This action opens a new query viewer in the edit/inspect panel where attributes can be selected and configured. 4. **Benefits**: Query viewers provide high-level summaries like number of events per day or virus infections by host, aiding in efficient monitoring and analysis of network security incidents. 5. **Steps to Create a Query Viewer**:

  • Navigate to the QUERY VIEWERS resource using the Navigator panel.

• Right-click on the desired folder (e.g., the admin's Query Viewers folder).

  • Select "New Query Viewer" to initiate creation, which opens in the edit/inspect panel where attributes can be configured.

This summary captures the essential steps and functionalities for creating and utilizing query viewers within the ArcSight platform for effective network security monitoring and analysis. This text provides instructions for creating and using Query Viewer (QV) tools in the context of IT security monitoring. It outlines steps for naming, setting up base queries, selecting fields for sorting, and configuring display formats such as tables or pie charts. The document also includes specific use cases demonstrating how to set up QVs for monitoring failed logins and firewall blocked IP addresses, with instructions on data collection methods and field selections. Additionally, it covers the concept of baselines in QVs, which are used to compare query results against a predefined standard to identify significant variations or deviations. The document provides an explanation and guidance on how to use baselines for comparing data in a query viewer tool. Baselines are used to compare the count of events over time using specific key fields that should be unique. Data comparison is done between the baseline and the displayed results, with positive values indicating more events than in the baseline and negative values meaning fewer events. Key fields include event names or priorities, counts, and connector information's. Baselines are stored as file resources within the query viewer and can be viewed by expanding the Attachments folder in Navigator. They are applicable to table views only and require key fields for matching between baselines and displayed data. The document outlines several features of baseline comparisons such as retrieving snapshot data, comparing multiple baselines, viewing meta-information about the baseline, filtering on delta columns, adding or removing baselines, and editing meta-information like descriptions. To create a baseline: 1. Define a goal or set questions/goals for what you want to compare (e.g., event traffic at different times of day). 2. Choose key fields that are unique (like event names or priorities) which will be used for matching with the data results. 3. Run comparisons against the baseline and adjust as needed. 4. Store baselines as file resources within the query viewer, accessible through the Attachments folder in Navigator. 5. Use these baselines to understand trends over time and improve performance of queries by comparing current result data against predefined baselines. This document outlines how to add and use baselines in query viewers for monitoring specific types of data. Here's a summary of the steps involved: 1. **Identify the Query Viewer**: Choose or develop a query viewer that returns relevant data (e.g., "Top Event Counts by Hour of Day") which can be used as a baseline. 2. **Monitor and Establish Baseline**: Run the query viewer to capture typical results, which will serve as the baseline for comparison. This involves setting up snapshots or record-keeping based on observed data throughout the day. 3. **Add Baseline Data**: Use the Navigator panel in the system to open the QUERY VIEWERS resource. Select the desired query viewer and right-click to view the data as a table. Right-click any row in this table, choose "Investigate > Add as Baseline" to add it as a baseline for comparison. 4. **Compare Results with Baseline**: To compare displayed results with the baseline, right-click anywhere on the table and select "Investigate > Compare with: ". You can then specify columns to include in the comparison. 
The result of this comparison will show differences between current values and the baseline, which may be positive (more events), negative (fewer events), or null if no baseline value is available for that key. This method helps in identifying anomalies or changes in data behavior by comparing it against a known good state, providing insights into performance and usage trends over time.

This document outlines a lab procedure for creating a baseline in HP ArcSight ESM (Enterprise Security Manager) using Query Viewer (QV). The purpose is to analyze event priority data over the last 24 hours, broken down by vendor and product. Here's a step-by-step breakdown of what's needed: 1. **Data Selection**: Select events from the Replay Connector for three demo environments - Arcexpressdemo.events, demo.events, and demoexpress-SP1.events. Click "Continue" after selection. 2. **Navigate to the Query Viewer Resource**: Go to the Navigator and select the Query Viewer resource. 3. **Run the QV from a Specific Path**: Use the path "/Shared/ArcSight Administration/ESM/Event Analysis Overview/by Priority/Event Breakdown by Priority for Vendor and Product" to run the pre-built QV titled "Events Breakdown by Event Priority for Vendor and Product." 4. **Create a Baseline in the QV**: When the QV appears, right-click on the table and select 'Investigate > Add as baseline'. Name your baseline. 5. **Compare with the Baseline**: Right-click on the table again and choose "Compare with :" to see the added column representing the delta changes compared to the baseline. 6. **Refresh and Review the Delta**: Click refresh to view the updated data, which will reflect the differences from your newly created baseline. The document also provides an overview of seven phases in the event lifecycle:

  • Data Collection and Event Processing (Phase 1)

  • Network Model Lookup and Priority Evaluation (Phase 2)

  • Correlation Evaluation (Phase 3)

  • Monitoring and Investigation (Phase 4)

  • Workflow (Phase 5)

  • Incident Analysis and Reporting (Phase 6)

  • Storage and Archive (Phase 7)

Additionally, it discusses the tools used in correlation evaluation, such as filters, rules (standard, lightweight, and pre-persistence), data monitors, event-based and non-event-based correlation, and software add-ons like ArcSight Pattern Discovery and Interactive Discovery. The objectives of this lab are to:

  • Describe different rule types

  • Configure Rules using Conditions

This document is intended for HP and partner internal use only, with the copyright held by Hewlett-Packard Development Company, L.P., subject to change without notice, marked as HP Restricted. This text discusses the use of "rules" to detect specific events and situations, with actions taken as a response. Rules are defined by conditions that determine when they should trigger, thresholds for when those conditions are considered met, and the specific actions to be executed upon rule activation. The rule types range from Standard (with full features) to Lightweight (a simplified version with fewer features) and Pre-persistence (for basic analysis before events are persisted). These rules work with Active Lists & Session Lists and can interface with Case Management & Workflow systems. There are two main categories for these rules: Real-time and Scheduled, which operate differently based on the type of event or time interval they are set to monitor. The three rule types (Standard, Lightweight, and Pre-persistence) differ in feature inclusion, aggregation and action capabilities, and trigger conditions.

This document provides an overview of the different rule types in HP ArcSight for internal use, explaining their features and differences. The main categories discussed are Standard Rules, Lightweight Rules, and Pre-Persistence Rules. Here's a summary of each type: 1. **Standard Rules**: These are the most comprehensive and allow unrestricted correlation of multiple events based on specific conditions, which can be set to trigger various actions. They can aggregate on multiple event fields and do not have strict limitations on the data fields included in their creation. 2. **Lightweight Rules**: Designed for faster analysis with a simpler rule creation process, these rules include only a small set of features and are processed earlier in the event flow than standard rules. They can perform basic event analysis and allow setting various event conditions, but without extensive aggregation or joining of multiple events. 3. **Pre-Persistence Rules**: These rules run before events are persisted; they can only set specific fields on incoming base events, and those modified field values are then available to the real-time rules that run after persistence. Each rule type has its own characteristics and usage scenarios, with Standard Rules offering robust correlation capabilities and Lightweight Rules providing a simpler, faster analysis option without extensive data aggregation or joining features. Pre-Persistence Rules serve as a preliminary step in the event processing pipeline, enriching base events so that later actions can be based on the modified field values.

The summary also describes the rule processing flow (rendered in the source as "ngflow", apparently an extraction artifact of "processing flow"): real-time, rule-based event analysis that does not aggregate data fields but focuses on basic event analysis, de-duplicating events and enriching base events through correlation rules; the CORR engine then persists the base events and enhances them with correlated events. The software supports several types of correlation: 1. **Simple Correlation (a.k.a. Events Aggregation Rules)**: This involves single event types or categories, basic conditions for de-duplicating events, and accumulating events in memory from a single source to a single target, effectively flattening event bursts (a minimal sketch of this aggregation pattern follows the enhancement list below).
SmartConnectors in ArcSight also perform this type of correlation. 2. **Cross Device Rules (a.k.a. Joining Events)**: This involves correlating diverse events from different devices using common field values like IP addresses, ports, protocols, usernames, etc., with flexible Boolean logic for comparison. It's useful for matching complete end-to-end sessions across various devices and networks. 3. **Complex Scenarios Engineering (a.k.a. Chaining Rules)**: This involves using active lists to track logical sequences of events such as reconnaissance, attack formation, progression, and conclusion. These rules can involve long and short-term state machines, making them useful for identifying unauthorized system accesses and attacks. For enhanced efficiency and performance in rule evaluation:

  • **Rule/Data Monitor Enhancements**: This includes auto-reordering of conditions for better efficiency, giving roughly a 25% improvement in evaluation time.

  • **Pattern Discovery Enhancements**: These improvements focus on processing transactions faster, reducing complexity from O(n log n) to O(n) and supporting up to 15,000 events per second (EPS) with significant execution time reductions.

  • **Additional List Look-Up Functions**: This includes various functions for managing lists, such as getting the current time, distinct values, intersections, unions, and more.
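To make the "Simple Correlation" pattern described above concrete, here is a minimal, hypothetical Python sketch (not ArcSight code) of threshold-based aggregation: identical events are grouped on a set of key fields within a time window, and a single correlated event is emitted once the count reaches a threshold, flattening a burst of raw events into one alert. The field names and the three-failures-in-two-minutes figures mirror the later lab walkthrough but are assumptions for illustration.

```python
from collections import defaultdict

def aggregate(events, key_fields, threshold, window_seconds):
    """Group identical events by key_fields within a time window.

    Emits one correlated event per group whose count reaches the threshold,
    instead of one alert per raw event (i.e. it flattens event bursts).
    """
    buckets = defaultdict(list)            # key -> timestamps of recent occurrences
    correlated = []
    for event in sorted(events, key=lambda e: e["time"]):
        key = tuple(event[f] for f in key_fields)
        buckets[key].append(event["time"])
        # drop occurrences that fell out of the time window
        buckets[key] = bucket = [t for t in buckets[key] if event["time"] - t <= window_seconds]
        if len(bucket) >= threshold:
            correlated.append({"name": "Aggregated event", "key": key, "count": len(bucket)})
            buckets[key] = []               # reset after firing, like a rule trigger
    return correlated

# Example: three failed logins from the same source to the same account within 2 minutes.
events = [
    {"time": 0,  "src": "10.0.0.5", "dst": "10.0.0.9", "user": "root"},
    {"time": 30, "src": "10.0.0.5", "dst": "10.0.0.9", "user": "root"},
    {"time": 90, "src": "10.0.0.5", "dst": "10.0.0.9", "user": "root"},
]
print(aggregate(events, ["src", "dst", "user"], threshold=3, window_seconds=120))
```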

This system is designed to help detect threats and anomalies by correlating disparate events within a network or system, providing a comprehensive view of potential security breaches and unauthorized activities.

The article discusses internal use of ESM 6.8, focusing on correlation enhancements and automated condition optimization. It explains that to optimize the evaluation of event conditions, rules must be deployed in the Console's Real-time Rules and the property "rule.dm.optimize.evaluation=true" must be added to the server.properties file. This enables condition optimization by re-sequencing evaluations for better performance. Tracing is optional and writes information to server.log on how rules are optimized.

The article also outlines a three-step process for creating standard rules: defining conditions, aggregation, and actions. The Attributes tab typically contains the rule name and defines the rule type; the Conditions tab is where the filters for matching events are defined; the Aggregation tab is where event aggregation and thresholds are specified; and the Actions tab is where the actions to be taken after a rule fires are defined. Overall, this article provides guidance on how to optimize rules in ESM 6.8 through condition optimization and outlines the standard rule creation process with detailed steps for each part.

This document outlines various features and functionalities related to event aggregation and actions in a software tool, presumably used internally by Hewlett-Packard (HP) and its partners. The main points include: 1. **Filters**: These are designed for reusing content, maintaining best practices, ensuring greater consistency, reducing errors, and saving time. They can be applied in different places, such as InActiveList conditions on Active Lists, allowing chaining of rules. 2. **Assets**: The assets feature aims to increase the accuracy of rules by adhering to best practices. 3. **Vulnerabilities**: This section likely refers to the management or display of system vulnerabilities using the software tool. 4. **Active Lists**: Support InActiveList conditions and allow chaining of rules, meaning multiple events from different sources can be linked together based on specific criteria. 5. **Aggregation Tab**: Describes how aggregation mechanisms work within the software:

  • They use parameters such as the number of base event occurrences and time windows during which events must occur to define groups for aggregation.

  • Aggregation rules are applied using either identical or unique fields from selected Common Event Format (CEF) fields between events, depending on whether they should be grouped by similar values or different ones.

6. **Actions Sets**: These are triggered when a rule is activated and can perform various tasks such as setting event fields in correlated events display, sending associated events to HP OpenView, notifying specific user groups via the ArcSight platform, or executing predefined scripts based on the trigger conditions of the rules. The provided information outlines a method for detecting multiple login attempts (failures) on the same user account by using accumulated similar events with specific criteria. This process involves setting up rules within a system to aggregate certain fields like source IP, destination IP, and login username from authentication failure events over a set timeframe. The steps include: 1. Filtering for authentication failure events using ArcSight CEF categories. 2. Aggregating these events until a threshold of three occurrences is reached. 3. Setting up rules with specific conditions to trigger when the aggregated events meet the criteria. 4. Assigning meaningful names to the rules and managing them through actions sets, which allows for customization based on alerts or correlation events generated by the system. This method helps in detecting patterns of repeated login failures that could indicate potential security breaches or unauthorized attempts. This document outlines the process of creating a rule for event aggregation in a security system using CEF (Common Event Format) fields. The steps include selecting a time frame, defining aggregate fields, setting up actions, activating the rule, and ensuring it triggers successfully. It also emphasizes best practices such as using a "Proof of Concept" (PoC) folder for creating content rather than an admin's resource. 1. **Select Time Frame**: Set the time frame to 2 minutes. 2. **Define Aggregate Fields**:

  • Navigate to the Inspect/Edit panel.

  • In the "Aggregate only if these fields are identical" section, select "Add".

  • Add the CEF fields: AttackerAddress, TargetAddress, and TargetUserName.

3. **Set Actions**:

  • Select the "Actions" tab.

  • By default, the "On First Event" trigger is active; right-click it and select "Deactivate Trigger", then right-click "On First Threshold" and select "Activate Trigger".

4. **Add Action for On First Threshold**:

  • Right-click on "On First Threshold" and select "Add".

  • Choose "Set Event Field", then set the CEF field to display in a correlated event, e.g., Name with text "Repeated Login Failure on same user account".

5. **Test the Rule**: Use the test button to verify the rule's performance with persisted events. 6. **Activate and Deploy the Rule**: Drag the completed rule into the "Real-Time Rules" folder, choosing "Link" for best practice deployment. 7. **Best Practice**: Create a dedicated "Proof of Concept" (PoC) folder for storing content rather than using admin resources. 8. **Verify Success**: Ensure that the rule fires and sets the expected text in the correlated event when the threshold is crossed successfully. This document provides detailed instructions on how to configure, test, and deploy a correlation rule to effectively handle repeated login failures across multiple events. This document provides useful tips for creating rules in a system, focusing on how to handle conditions that require ignoring case sensitivity or adding a NOT operator before logical operators. It also explains the "Inline Filter" feature for easier event visualization within large datasets. Lastly, it gives an example of setting up a rule to detect root login failures as the target user account, with specific requirements for three conditions: authentication activity, failure status, and using the Root user account as the target username. This document discusses cross-device correlation using a tool from HP for detecting and analyzing events across different devices. The objectives are to understand advanced correlation rules and how they work in conjunction with other systems, as well as the Joint Operator feature that allows diverse events to be correlated based on common field values such as source IP, target IP, port, protocol, username, domain, location, zone, etc. The document provides several use case examples where cross-device correlation is applied: 1. Detecting external login onto an IP camera behind a firewall by correlating events from the firewall and the IP camera within a specific time window. 2. Detecting FTP passive sessions opened from any external IP address, requiring two distinct events - one from a firewall (allowing incoming passive FTP connections) and another from a NAS device (with FTP service enabled). The rule fires when two different events occur with matching conditions within a defined time frame. The document outlines the steps to create such rules, including specifying conditions for both devices involved in the correlation. It explains that joining multiple events requires common field values and flexible Boolean logic (AND, OR, NOT) for comparing event fields. The examples provided demonstrate how this feature can be used effectively within a network security framework. The document outlines the process for creating an advanced rule in HP ArcSight to detect communication attempts crossing a firewall and being blocked by an IDS (Intrusion Detection System). Here’s a summary of the steps involved: 1. **Rule Name**: Assign a name to the rule, such as "FW+IDS". 2. **Conditions**:

  • Two events are required: one from a Firewall and one from an IDS. The conditions include successful network access (Firewall) and communication request attempts (IDS).

  • Joined conditions should be set for fields like attacker address, target address, and transport protocol to ensure both events match these criteria.

3. **Aggregation**: Set a threshold with 1 match within 2 minutes using the aggregate tab. Specify that the aggregation should consider identical fields such as attacker address and target address. 4. **Actions**:

  • Upon triggering the rule, set event fields including the name of the rule (FW+IDS).

  • Send notifications internally to the CERT team via email or workflow messages.

  • Define a category for affected assets.

  • Add source IP addresses and usernames used in FTP sessions to active lists and session lists respectively.

5. **Results**: The rule will fire if it meets the specified conditions, with an aggregation threshold of 1 occurrence within 2 minutes. 6. **Use Case Details**: This rule is designed to detect unauthorized communication attempts that bypass a firewall but are subsequently blocked by an IDS. It involves joining events from a Firewall and an IDS based on specific criteria related to successful access, IP addresses, and transport protocols. 7. **Additional Notes**: The document includes instructions for setting up variables (not detailed in the summary) and emphasizes the importance of selecting appropriate event sources as well as configuring notifications correctly within the workflow system. This document outlines the concepts of global and local variables in a context related to cybersecurity tools used by Hewlett-Packard (HP). It covers various objectives, definitions, types, functions, benefits, examples of usage, and detailed instructions on managing global variables. The purpose is to provide users with a comprehensive guide to effectively utilize these variables within rules for enhanced performance and data management in security analysis processes. This document provides guidance on creating global variables (GVs) and local variables within various ArcSight resources, such as active channels, field sets, filters, rules, and data monitors. The process involves selecting the appropriate resource from the Navigator Panel, navigating to the "Fields & Global Variables" tab, and using the right-click menu to create a new global variable or manage existing ones. For creating global variables: 1. Navigate to the Field Sets Resource. 2. Select the "Fields & Global Variables" tab. 3. Right-click on 'admin's Fields' and choose "New Global Variable." 4. In the Attributes tab, provide a name for the GV. 5. Switch to the Parameters tab and select a category from the Function drop-down list, then choose a function within that category. For creating local variables: 1. Select Local Variables in the parent resource (e.g., active channels, field sets, etc.). 2. Click "Add" to create a new local variable. 3. Provide a name for the local variable and select the desired function, supplying any necessary arguments as prompted. 4. Click OK to confirm the creation. An example is provided where a global variable is created in an active channel to display network latency values by using the time stamp difference function. A field set with columns showing the time difference is also created. The result should be that the active channel displays the time difference in seconds between End Time and Manager Receipt Time, which helps in visualizing the data flow from devices through the ESM system. The document includes explanations of key terms such as "Start Time," "End Time," "Device Time," "Connector Time," and "Manager Time," which are essential for understanding the context of variable creation within ArcSight systems. This document outlines procedures for analyzing network latency using Hewlett-Packard's (HP) software tools. The steps include setting up field sets, creating filters to highlight significant latency events, and configuring rules based on local variables. 1. **Setting Up Field Sets:**

  • Navigate to the 'Field Sets' resource in the Navigator.

  • Right-click on "'s Field Sets", then select "New Field Set".

  • Assign a name to the field set under the 'Attributes' tab and add fields such as 'Manager Receipt Time' and 'Name'.

  • Select these fields from the 'Fields' tab.

2. **Displaying Network Latency:**

  • Open an Active Channel, go to Navigator > Field Sets, select your previously created field set, right-click, and choose "Set as ‘Current Field Set’". The active channel will now show latency in the 'Time difference' column.

3. **Highlighting Events with Significant Latency:**

  • Create a new filter by giving it a name under the 'Attributes' tab.

  • In the 'Filter' tab, add global variables and check your previously created global variable (GV). Define the condition (>= 2) to highlight significant latency events.

4. **Creating Rules with Local Variables:**

  • For detecting user login activity outside office hours (8pm to 8am), use a rule that evaluates the time of occurrence in conditions such as successful authentication. Set the aggregation threshold to match within 1 minute.

  • This involves creating a new rule, adding conditions for 'categoryBehavior' and 'categoryOutcome', and defining a local variable using the 'Timestamp' function (a minimal sketch of the latency and time-of-day checks follows these steps).
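The two variable-based checks walked through above are easy to express outside the Console as well. The following is a hypothetical Python illustration only; the field names, the 2-second latency threshold, and the 8-to-20 office-hours window mirror the walkthrough but are assumptions, not ArcSight APIs.

```python
from datetime import datetime

def latency_seconds(event):
    """Equivalent of the global variable: Manager Receipt Time minus End Time, in seconds."""
    return (event["manager_receipt_time"] - event["end_time"]).total_seconds()

def significant_latency(event, threshold=2):
    """Mirrors the filter condition: highlight events whose latency is >= 2 seconds."""
    return latency_seconds(event) >= threshold

def outside_office_hours(event, start_hour=8, end_hour=20):
    """Mirrors the local variable: GetHour(End Time) NOT between 8 and 20."""
    return not (start_hour <= event["end_time"].hour <= end_hour)

event = {
    "end_time": datetime(2015, 3, 31, 22, 15, 0),             # device finished at 22:15
    "manager_receipt_time": datetime(2015, 3, 31, 22, 15, 5), # manager received it 5 s later
}
print(latency_seconds(event), significant_latency(event), outside_office_hours(event))
# 5.0 True True
```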

This document provides detailed steps for setting up and analyzing network latency in HP software tools, including creating filters and rules to highlight significant events. To create a rule for detecting root user login in a specified time range using CloudShare, follow these steps: 1. **Open the Function List**: Scroll down and select "Timestamp." In the right panel, choose the "GetHour" function. Click OK. 2. **Map Event Field**: Select the "End Time" event field to get the hour from it. Click OK. This creates a local variable for time extraction. 3. **Define Office Hours Range**: Double-click on the newly created local variable (Time_of_Day). Define the office hours range by selecting "Between" and entering 8 and 20 (i.e., 08:00 to 20:00). To filter out office hours, check the NOT column and click Apply. 4. **Aggregation**: Go to the Aggregate tab and set # of matches to 1 and Time Frame to 2 minutes. Aggregate AttackerAddress, TargetAddress, and TargetUsername. 5. **Define Action**: In the Actions tab, add "Set Event Field" for the first event, naming it "Unauthorized Login After Office Hours." Click OK then Apply. 6. **Activate Rule**: Link the rule to the Real-Time Rules folder in your project or system; the rule becomes active once it is linked (or moved/copied) there. 7. **Verification**: Test the rule by clicking calculate, which should display the result and fire the event if the conditions are met. This process involves creating local variables for time extraction and defining specific conditions using these variables to detect unauthorized root user login within a specified time range during non-office hours.

The document outlines a training program named "HP ArcSight Proof of Concept Boot Camp" held in Wien/Vienna on March 31 - April 2, 2015. It is designed for HP ArcSight partners to enhance their skills related to HP ArcSight solutions. The agenda includes sessions covering various aspects such as ESM (Enterprise Security Manager) functionalities like Active Lists & Session Lists, Dashboards & Data Monitors, Workflow and Case Management, Managing users and permissions, and an overview of HP ArcSight Proof of Concept. The training focuses on teaching participants about Active Lists and Session Lists in detail. These lists serve as "memory tables" that can be dynamically updated by rules with data derived from events or other sources. They are useful for tracking suspicious activities like malicious IP addresses and compromised systems. Participants learn to create, populate, view entries, and chain rules using active lists. The training also covers the architecture of HP ArcSight solutions and provides a Q/A session to clarify doubts about the ArcSight technology.

This document provides a detailed guide on creating and managing "Active Lists" and "Session Lists" within Hewlett-Packard's (HP) ArcSight software for internal use. The content includes instructions for configuring these lists, populating them with data, and understanding their anatomy. **Creating Active Lists:** 1. Navigate to the "Active Lists" tab in the Navigator Panel. 2. Select the desired list settings, including name, optimize data using hashes, capacity (number of entries), time-to-live (TTL) for entries, and whether multiple instances are allowed. 3. Once configured, click APPLY. Note that once saved, parameters cannot be modified. 4. Populate the Active List by importing CSV files or manually adding entries. **Anatomy of Active Lists:**

  • NAME: Specifies the name of the Active List.

  • OPTIMIZE DATA: Uses hashes to reduce memory usage.

  • CAPACITY: Determines how many entries can be stored in the list.

  • TTL (Time To Live): Dictates how long an entry remains in the list.

  • ALLOW MULTI-MAPPING: Allows multiple instances of the same key pairing among the events or fields included in the Active List (a minimal sketch of these parameters follows this list).

  • KEY FIELD: Enables rules to look up value fields within the list.
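The parameters above can be pictured with a small toy model. The Python sketch below is hypothetical (not ArcSight code): it models capacity, TTL-based expiry, key fields, and the single- versus multi-mapping behaviour, and its example reuses the 2-hour "External Firewall Blocked IP address" list described further below.

```python
import time

class ActiveList:
    """Toy model of the Active List parameters above (not ArcSight code)."""

    def __init__(self, name, capacity, ttl_seconds, key_fields, allow_multi=False):
        self.name, self.capacity, self.ttl = name, capacity, ttl_seconds
        self.key_fields, self.allow_multi = key_fields, allow_multi
        self.entries = {}                     # key tuple -> list of (record, inserted_at)

    def add(self, record, now=None):
        now = now or time.time()
        self._expire(now)
        key = tuple(record[f] for f in self.key_fields)
        values = self.entries.setdefault(key, [])
        if values and not self.allow_multi:
            values.clear()                    # single mapping: the latest value wins
        if sum(len(v) for v in self.entries.values()) < self.capacity:
            values.append((record, now))

    def contains(self, record, now=None):
        """What an InActiveList rule condition would check."""
        self._expire(now or time.time())
        return tuple(record[f] for f in self.key_fields) in self.entries

    def _expire(self, now):
        for key in list(self.entries):
            live = [(r, t) for r, t in self.entries[key] if now - t <= self.ttl]
            if live:
                self.entries[key] = live      # keep entries still within their TTL
            else:
                del self.entries[key]         # expired entries drop out of the list

# Example: the 2-hour "External Firewall Blocked IP address" list described below.
blocked = ActiveList("External Firewall Blocked IP address", capacity=10000,
                     ttl_seconds=2 * 3600, key_fields=["src_ip", "fw_ip"])
blocked.add({"src_ip": "203.0.113.7", "fw_ip": "10.0.0.1", "fw_product": "FW-1"})
print(blocked.contains({"src_ip": "203.0.113.7", "fw_ip": "10.0.0.1"}))  # True
```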

**Session Lists:** Differ from Active Lists by always including Start Time, End Time, and Creation Time fields. Entries are "terminated" instead of removed, data is partitioned into weekly partitions due to potential growth, and they do not need to fit entirely in memory. Session Lists are optimized for efficient time-based queries. **Creating Session Lists:** 1. From the Navigator Panel, select "Lists" then navigate to "Session Lists". 2. Select the "Session Lists" tab and right-click on 'admin's Session Lists', then choose 'New Session Lists'. **Anatomy of Session Lists:**

  • NAME: Identifies the session list in ArcSight pick lists; spaces and special characters are allowed.

  • OVERLAPPING: Allows multiple instances of the same key pairing, which can be helpful for more flexible data handling within the software (a minimal sketch of session list behavior follows this list).
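The main behavioral difference from an active list is that session entries carry start and end times, are terminated rather than removed, and support time-bounded lookups. The Python sketch below is a hypothetical toy model of those semantics (not ArcSight code); the "VPN Sessions" list and its field names are illustrative assumptions.

```python
from datetime import datetime

class SessionList:
    """Toy model of a session list: entries are terminated, not removed (not ArcSight code)."""

    def __init__(self, name, key_fields):
        self.name, self.key_fields = name, key_fields
        self.sessions = []   # each: {"key", "start", "end", "created"}

    def open(self, record, start):
        self.sessions.append({"key": tuple(record[f] for f in self.key_fields),
                              "start": start, "end": None, "created": datetime.now()})

    def terminate(self, record, end):
        key = tuple(record[f] for f in self.key_fields)
        for s in self.sessions:
            if s["key"] == key and s["end"] is None:
                s["end"] = end                # mark terminated; the row itself is kept

    def active_at(self, record, when):
        """Time-based lookup: was this key's session active at the given moment?"""
        key = tuple(record[f] for f in self.key_fields)
        return any(s["key"] == key and s["start"] <= when and (s["end"] is None or when <= s["end"])
                   for s in self.sessions)

# Example: track a VPN login session per user and source IP.
vpn = SessionList("VPN Sessions", key_fields=["user", "src_ip"])
vpn.open({"user": "alice", "src_ip": "198.51.100.4"}, start=datetime(2015, 3, 31, 9, 0))
vpn.terminate({"user": "alice", "src_ip": "198.51.100.4"}, end=datetime(2015, 3, 31, 11, 0))
print(vpn.active_at({"user": "alice", "src_ip": "198.51.100.4"}, datetime(2015, 3, 31, 10, 0)))  # True
```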

This document outlines how to create an active list in ArcSight using specific examples and steps. The process involves creating a field-based active list with predefined fields such as Source IP address, Firewall IP address, FW Product Name, and FW Vendor. These lists are maintained for a specified time (TTL) known as the "Time To Live" period, which is set to 2 hours in this example. To create an active list: 1. Navigate to the Lists section within ArcSight. 2. Right-click on the existing admin's Active Lists and select 'New Active List'. 3. Provide a name for the list (e.g., "External Firewall Blocked IP address"). 4. Set the Time To Live period to 2 hours, meaning data will be held in the list until it is explicitly terminated or expires after 2 hours. 5. Select 'Fields Based' as the type of active list and define key fields including Source IP address and Firewall IP address. Additional fields like FW Product Name and FW Vendor are also defined. 6. Save the changes once all fields are specified. The active list is now ready for use, holding each entry (IP address in this case) for 2 hours before it can be removed or updated. This document provides a guide for creating and configuring rules within HP software products (presumably related to network security or event management) to populate active lists with specific aggregated data from firewall events. The process involves selecting the "Rules" option in the Navigator Panel, creating a new rule by right-clicking on an existing rule entry labeled "'s Rules", then providing a name and configuring conditions using filters such as Firewall filter that were previously set up. The guide explains how to add rules for filtering specific events like blocked accesses by setting appropriate logical operators and terms in the filter settings, which should be done carefully to ensure accuracy. After defining these conditions, users proceed to configure aggregation on selected event fields including Attacker Address (blocked IP address), Device Address (firewall IP used for blocking external IPs), Device Product (specific firewall model) and Device Vendor (brand name of the firewall). The rule is then applied with options like "On First Event" which will trigger whenever an event matches these criteria. To finalize, the guide instructs users to map aggregated event fields to predefined fields in the active list and confirm changes. The process concludes by reiterating that using filters for consistency and prior testing ensures effectiveness before finalizing rule setup and deployment. This document outlines the process of creating and activating rules to populate active lists in a system, along with instructions on how to chain multiple rules together for more complex scenarios. The information is intended for use within Hewlett-Packard (HP) and its partners, and is subject to change without notice. 1. **Creating and Activating Rules**: To create a rule:

  • Click "OK" to save the rule.

  • Confirm that the rule is complete.

  • Activate the rule by selecting it and dragging it to the "Real-Time Rules" folder.

2. **Linking Rules**: You can choose to copy, link, or move the rule. Selecting "Link" is a best practice as it allows the rule to be active under a project folder while also being functional in the system. 3. **Verifying Rule Activation**: After linking the rule, you should see it listed in both the project and real-time folders. 4. **Checking If the Rule Fired**: To determine if the rule has fired, go to "Lists" -> "Active Lists" under the navigator panel, right-click on the active list created, and select "Show Entries." 5. **Refreshing the Active List**: The active list should now contain data. However, refreshing is not automatic; you must manually click the recycle icon to refresh it. 6. **Lab 11-1: Create an Active List Populated by a Rule**: This lab involves creating a rule to detect interactive logins after a specific time. Conditions include detecting successful logins and creating a local variable for evaluation, with aggregation triggering on one match within two minutes. Actions involve adding selected event fields (Username, Source IP, Destination IP, Event Time) to an active list with a TTL of five days. 7. **Chaining Rules**: For more complex scenarios like potentially compromised user accounts:

  • Create a single active list and pair it with two rules.

  • Rule #1 detects repeated login failures and adds the IP and username to the active list after three failures.

  • Rule #2 detects successful logins and fires only if the IP and username are already present in the active list (a minimal sketch of this chaining pattern follows this list).
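A minimal, hypothetical Python sketch of this chaining pattern is shown below (not ArcSight code). Rule #1 adds a source IP / username pair to a list after three failures; Rule #2 alerts on a successful login only when the pair is already in that list. The event field names, the threshold of three, and the alert name are assumptions taken from the surrounding walkthrough.

```python
from collections import defaultdict

failed_counts = defaultdict(int)       # (src_ip, user) -> number of recent failures
suspicious = set()                     # stands in for the "Repeated Login Failures" active list
alerts = []

def on_event(event):
    key = (event["src_ip"], event["user"])
    if event["outcome"] == "failure":                           # Rule #1 conditions
        failed_counts[key] += 1
        if failed_counts[key] >= 3:
            suspicious.add(key)                                 # Rule #1 action: add to active list
            failed_counts[key] = 0
    elif event["outcome"] == "success" and key in suspicious:   # Rule #2: InActiveList check
        alerts.append({"name": "Compromised User Account?", "src_ip": key[0], "user": key[1]})

stream = [
    {"src_ip": "10.1.1.7", "user": "bob", "outcome": "failure"},
    {"src_ip": "10.1.1.7", "user": "bob", "outcome": "failure"},
    {"src_ip": "10.1.1.7", "user": "bob", "outcome": "failure"},
    {"src_ip": "10.1.1.7", "user": "bob", "outcome": "success"},   # success after 3 failures
]
for e in stream:
    on_event(e)
print(alerts)   # one "Compromised User Account?" alert
```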

The document provides detailed steps and examples to guide users through creating effective rules for detecting various scenarios, emphasizing the importance of proper rule chaining for complex detections. This document outlines a process for creating and configuring rules using Active Lists in a security system. The primary focus is on defining two rules (Rule #1 and Rule #2) that are designed to detect unauthorized access attempts, track logical sequences of events, and trigger alerts based on predefined conditions. **Rule #1:** This rule involves monitoring failed login attempts over specific time frames, defined by the user through fields like AttackerAddress, TargetAddress, and TargetUserName. The rule is configured to be triggered if three occurrences of identical events (in this case, unauthorized access attempts) occur within two minutes, as specified in the Aggregate tab settings. Once activated, it adds these events to an Active List named "Repeated Login Failures." **Rule #2:** This rule specifically detects successful logins where the username is already listed in the "Repeated Login Failures" Active List. It triggers an alarm if three conditions are met: (1) categoryBehavior matches /Authentication/Verify, (2) categoryOutcome equals /Success, and (3) the condition that checks whether the username is in the list ("InActiveList"). The rules are configured to perform specific actions based on the events they detect. For Rule #1, once triggered, it adds new entries to the Active List "Repeated Login Failures" and maps these fields accordingly. Rule #2 activates a trigger upon detecting successful logins of users already listed in the same Active List, which then fires an alarm if all conditions are met. The document provides detailed steps for defining thresholds, configuring actions, and mapping event fields to ensure that rules are effectively implemented within the system. The document outlines the process of creating a rule in HP ArcSight for detecting potential network breaches by chaining two rules using an active list. The first rule identifies denied IP addresses as they are seen on different parts of the network, adding them to an active list. The second rule detects permitted IP addresses and searches if these IP addresses are present in the previously created active list. If a match is found, it sets the event field to "Potential Breach !!!". Here's the step-by-step process: 1. **Create Rule 1**: This rule will detect denied IP addresses seen on different parts of the network and add them to an active list based on specific CEF fields (AttackerAddress, TargetAddress, and TargetUserName).

  • Go to "Aggregate" tab in the rule creation interface.

  • Set "#of Match" to 1, "Time Frame" to 2 minutes, and ensure the "Add only if these fields are identical" condition is checked.

  • Add the required CEF fields: AttackerAddress, TargetAddress, and TargetUserName.

  • Click "OK" then "Apply".

  • Set the action to change the event field name to "Compromised User Account?".

2. **Create Rule 2**: This rule will detect permitted IP addresses by the firewall and check if these IPs are present in the active list created by Rule 1.

  • Similarly, configure this rule with aggregation settings.

  • Add the required CEF fields: AttackerAddress, TargetAddress, and TargetUserName.

  • Click "OK" then "Apply".

  • Set the action to change the event field name to "Potential Breach !!!".

3. **Testing**: After creating both rules, they should be copied to the real-time rules directory for testing. The document states that the rule fired successfully when tested, indicating a potential breach. The document also includes information on different scenarios of how data flows through HP ArcSight components like Logger and Connector, as well as explanations about proactive threat mitigation and long-term storage solutions provided by HP ArcSight.

The text discusses HP ArcSight Logger, a system designed for log management in various scenarios. It covers the architecture, deployment options, and scalability features of the Logger. Key points include: 1. **Common Event Format Logger with ESM**: Events are processed by both ESM (Enterprise Security Manager) and Logger using a common format. Distributed connectors are remotely managed and send events between the ArcMC Connector Server or ArcMC Manager Server and ESM. 2. **ESM-Logger Hybrid Deployment**: In this setup, Logger can store and forward filtered events in a hierarchical structure where upstream ESM Managers correlate and prioritize events by sending higher priority events to the Logger for storage. The Logger then sends these prioritized events downstream to a global ESM Manager for further correlation and case workflow assignment. 3. **Universal Log Management That Scales**: HP ArcSight Logger supports peer network configurations, allowing multiple Loggers to work together in a peer relationship to handle high sustained input rates and distribute parallel queries across the configured Loggers. This setup enhances performance with features like higher online search speed, bloom filters for faster scanning, and increased maximum online storage capacity. 4. **Scalability Metrics**: The text provides detailed comparisons of scalability metrics such as ingest license capacities, index rates, online search speeds, bloom filter scan speeds, and maximum online storage capabilities across different configurations (e.g., 1 x L160GB vs. 4 x L40GB vs. 12 x L15GB). 5. **Basic Architecture Scenarios**: The text outlines two basic architecture scenarios:

  • **Scenario #1**: A Logger acting as a standalone log management solution, suitable for environments with limited structured data and focused on event archiving and search within IT operations.

  • **Scenario #2**: A Connector Logger setup where the logger manages universal log management including structured data use via smart connectors, albeit with potential limitations in inbound EPS due to indexing and event processing requirements.

Overall, HP ArcSight Logger is designed to be flexible and scalable for diverse environments requiring robust log management capabilities. The document discusses two main types of SIEM solutions provided by HP for log management and correlation. These are the Appliance-based SIEM Solution (Set menu) and the Software-based SIEM Solution ('a la Carte') known as Scalable SIEM only Solution. Here's a summary of each type along with their advantages, disadvantages, and specific features: ### Appliance-based SIEM Solution

  • **Advantages**: Simple deployment, license based upgrade (same hardware on all models), significant log retention capability, and provides easy configuration for the Connector (Express).

  • **Disadvantages**: Limited search capabilities to structured fields only, log retention is limited by onboard storage, and price increases as more data needs to be processed.

### Software-based SIEM Solution ('a la Carte' or Scalable SIEM)

  • **Advantages**: Very versatile, modular options, highly scalable, enterprise class SIEM capabilities, and much more cost-effective for typical customers compared to the appliance-based solution.

  • **Disadvantages**: Inappropriate for log management if raw event data is essential, Logger forwarding speed limitations (typically 3-5Keps), and raw events might not be available in Express directly unless right-clicked search is used.

### Connector Types: There are several connector types described: 1. **Connector Express** - Simplifies forwarding to ESM with better analysis and reporting capabilities, but has limited search functionality. 2. **Connector Logger ESM** - Collects all data and sends only a subset (actionable events) to Express for correlation, which is more cost-effective for typical customers. 3. **Connector Server Scenario # 5 & # 6** - These scenarios involve the Connector Logger Express or Connector Express Logger, focusing on correlating everything through ESM before being stored in Logger as an 'online archive'. 4. **ESM Connector Logger** - Similar to above but with specific data handling and forwarding details optimized for different use cases. Each scenario has its unique setup and application based on the user's requirements, balancing between cost efficiency, scalability, raw event preservation, and search capabilities. The ArcSight connector is flexible but has higher overhead. It allows for independent configuration of each connector feed, including raw/hash/aggregation/batching, which can be beneficial depending on the use case (e.g., non-repudiation). However, this flexibility also means there's a higher administrative burden when making widespread changes due to the need to manage twice as many destinations for connectors. The OS compatibility of ArcSight connectors includes Windows XP Professional, Windows Server 2003 R2/2008 SP1/SP2/R2, Red Hat Enterprise Linux (versions 5.7, 6.1, and 6.2), Novell SUSE Linux 11, Sun Solaris 10, and IBM AIX versions up to 7.1. The connectors are supported on these platforms for basic use but certified for more rigorous testing in a production environment. Logger dimensioning depends on factors like performance (with best practices ranging from 10Keps with no full text indexing to max 6Keps with it turned on), storage requirements, and retention policies. For example, at 90 days of retention with structured data, configurations can range from 0.7TB for 500 eps up to 136TB for 100,000 eps. High-throughput environments may require multiple Loggers. Communication modes include SSL (uncompressed), console, certificate-based, and XML via SSL. The connector is FIPS 140.2 compliant, which means it meets certain security standards related to encryption for government use in the United States. When considering a possible architecture, you should gather information about event throughput requirements, event type requirements, log retention requirements, high availability/failover requirements, and any additional customer-specific needs. This data will help inform decisions on how best to deploy ArcSight connectors for optimal performance and functionality. This document outlines considerations for deploying ArcSight solutions in an enterprise environment, covering various aspects such as bandwidth usage, network address translation (NAT), use of managed security service providers (MSSP), regional and global deployment strategies, compliance requirements, and specific use case requirements. The information is divided into sections detailing how to gather necessary information, dimension the architecture for optimal performance, and present basic architectures with optional components like the Connector Appliance (CA) or Logger. For information gathering, key points include understanding primary use cases related to IT operations, compliance policies, real-time monitoring, event correlation, and data storage strategies. 
Dimensioning considerations involve assessing the number of devices, desktop endpoints, event rates, log volume, and source types like firewalls, proxies, Windows systems, and applications. Architecture options are introduced with a basic configuration that includes the ArcSight Enterprise Security Manager (ESM) along with optional components such as the CA or Logger. Typical customer requirements for this architecture include handling low event throughput, having a small number of unique event sources, and implementing short to medium term retention policies. The document also highlights some limitations in terms of performance based on CPU cores and database input/output usage. Overall, this document is intended for internal use by HP partners and customers to ensure proper deployment and configuration of ArcSight solutions tailored to specific enterprise needs.

The document outlines different architectural setups for deploying ArcSight, a security information and event management (SIEM) solution, focusing on scalability, performance, and cost-effectiveness based on specific customer requirements and limitations. **Architecture 3 (Basic): ESM, Logger, and Mandatory CA** This setup includes an ArcSight ESM instance managed by analysts who use the console or web browser to access events for real-time correlation. Events are forwarded from the ESM to a Logger for long-term storage. The primary limitation is that connectors must be managed remotely through a Connector Appliance (CA), and performance is limited by the Manager's CPU cores and Database SAN I/O utilization. The advantage is cost-effective long-term storage via the Logger compared to ESM. **Architecture 4 (Basic): ESM, Multiple Loggers, and Mandatory CA** In this setup, multiple Logger appliances are configured in a peer network for load balancing purposes. All SmartConnectors are managed remotely through a Connector Appliance. The main limitation is that performance is limited by the number of Logger Appliances times their maximum EPS capacity (up to 2,500 EPS per Logger). Advantages include scalability and cost-effective long-term storage via Loggers. Overall, these architectures cater to varying levels of event throughput and source diversity, emphasizing either immediate correlation capabilities with limited long-term storage or scalable, distributed systems for more extensive retention policies. The primary goal is to provide a balance between real-time performance and cost efficiency based on the specific requirements and constraints defined by each customer.

This document outlines various aspects of High-Availability configurations, connectors, and appliances used in Hewlett-Packard's ESM (Enterprise Security Manager) solutions. The information is intended for internal use within HP and partner companies. Key points include: 1. **High Availability**: The effectiveness depends on the mechanism - push via load balancer or cluster software, or pull via cluster software. Connector HA is less common due to its complexity and cost implications. Failover and multiple destinations are often preferred alternatives. 2. **Appliances for High Availability**:

  • **Logger**: Connectors are configured to send events to two Loggers in parallel; the same dual-destination approach can be used with other destinations, such as Express or ESM on Oracle.

  • **Express**: Features four onboard connectors and can be upgraded within the same hardware platform. It does not support direct upgrade to ESM but can act as an appliance or virtual appliance.

3. **Connector Management**:

  • For devices like Logger L3500, no connector management is required.

  • For Logger L7x00/LxGB and Logger to ESM scenarios, it is recommended to use connectors.

4. **Number of Connectors**:

  • The L3505 has 4 onboard connectors.

  • The L7505x does not have physical connectors but includes software connectors as part of the Connector Appliance (Manager) recommendation.

  • Express versions 4.0 and above come with 4 on-boarded connectors.

5. **Express Features**: Provides 50 User IDView, connector support, and one console user/unlimited web users. Upgrades are possible within the same hardware platform for Express but not directly to ESM. 6. **Sizing Scenario**:

  • A large national organization with varying systems (2000 servers, 30,000 desktops) was described without detailed information. The SIEM and log retention requirement recommended a 90-day period for data storage. The total estimated events per second (eps) were calculated at 7,500 (peaking at 15,000 eps), requiring further sizing calculations to determine the exact system requirements based on the specific details of the systems involved (a short worked storage estimate follows this list).
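Applying the same back-of-envelope approach as the earlier sketch to this scenario's figures (7,500 eps sustained, 15,000 eps peak, 90-day retention), and keeping the assumed ~180 bytes stored per event, gives a rough feel for the storage side of the sizing exercise. This is illustrative arithmetic only, not an HP sizing recommendation.

```python
def logger_storage_tb(eps, retention_days, bytes_per_event=180):
    """Same back-of-envelope estimate as above: eps x seconds/day x days x bytes/event."""
    return eps * 86_400 * retention_days * bytes_per_event / 1e12

print(round(logger_storage_tb(7_500, 90), 1))    # ~10.5 TB at the sustained 7,500 eps
print(round(logger_storage_tb(15_000, 90), 1))   # ~21.0 TB if sized at the 15,000 eps peak
```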

This document provides guidelines and recommendations for implementing High-Availability features in HP's ESM solutions, as well as considerations for connector usage and system sizing based on varying workloads and data retention needs. The text provides information about a SIEM project requirement for a small bank, focusing on the collection and retention of security events across multiple locations. It outlines three layers in the SIEM solution architecture—Data Collection Layer, Data Storage Layer, and Data Processing Layer—and specifies retention requirements including at least 60 days online storage, 120 days offline archive storage, and 7 years for correlated events. The document also includes sizing recommendations for specific hardware components:

  • For a small bank with about 380 EPS (peak) from Windows servers and network devices, the recommendation is to use an AE7410 for event collection and a C3500 Connector Appliance as part of the infrastructure setup (a short worked storage estimate based on these figures follows).
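For the bank's 60-day online and 120-day offline retention targets at roughly 380 eps peak, the same hedged arithmetic (assumed ~180 bytes stored per event; offline archives are typically compressed further, so the second figure is an upper bound) suggests modest storage needs. Again, this is an illustration, not a sizing commitment.

```python
def logger_storage_tb(eps, retention_days, bytes_per_event=180):
    """Same assumption as the earlier sketches: ~180 bytes stored per event."""
    return eps * 86_400 * retention_days * bytes_per_event / 1e12

print(round(logger_storage_tb(380, 60), 2))    # ~0.35 TB for 60 days online at 380 eps peak
print(round(logger_storage_tb(380, 120), 2))   # ~0.71 TB for the 120-day offline archive (pre-compression)
```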

Additionally, it provides details on how many connectors and loggers are needed based on the system's capability to support contingency (6x Connectors with 24K EPS raw event rate, suggesting models like C5500 for up to 12k EPS per data center) and retention requirements. The document discusses HP ArcSight Proof of Concept Boot Camp Training, focusing on the use and creation of dashboards within the ESM (Enterprise Security Manager) platform. It provides an overview of what dashboards are and their purpose in visualizing IT environment states through indicators from SmartConnectors. The objectives of the training include understanding how to define data monitors and dashboards, list their functions and characteristics, identify different types of data monitors, list components in the report workflow, and build custom reports with scheduled jobs. The document also mentions that dashboards serve as a car's instrument panel, displaying information about an IT environment's state through graphical or tabular formats. They can summarize event flow, communicate the impact of events on networks, or display the status of ESM components. The content for the Proof of Concept (PoC) should concentrate on creating and reviewing dashboards using existing templates within ESM, with a focus on graphical representation and drill-down capabilities to identify patterns and anomalies in event data. Overall, this document serves as an educational resource for users looking to enhance their understanding of HP ArcSight ESM features through practical training sessions focusing on dashboard creation and utilization. This document outlines a three-step process to create and manage "Data Monitors" within a system, which are designed to analyze event streams and provide summary data for display in Dashboards. Data monitors can be categorized into several types including Event-based, Correlation, and Non-event based, each serving different use cases. They function by filtering events according to specific criteria, aggregating them, and presenting the summarized information graphically within a dashboard. Key components such as filters, time criteria, aggregate fields, and value fields are used across all monitor types to ensure accurate data analysis and visualization. This process involves creating a "Top Value Counts" Data Monitor for firewall events in a dashboard system. Here’s the step-by-step breakdown: 1. **Open the Dashboards resource** from the Navigator panel and right-click on "'s Dashboards" to select "New Dashboard". A new dashboard opens in the Viewer panel. 2. **Create a new Data Monitor**: Right-click on "'s Data Monitors" and choose "New Data Monitor". This will open a new Data Monitor in the Inspect/Edit panel. 3. **Select the Data Monitor type** as "Top Value Counts". The Data Monitor will populate with fields relevant to this type. 4. **Name the Data Monitor**, enable it, select a filter (e.g., Firewall events), and configure aggregate and time fields:

  • Aggregate Field: Choose "Target Address" (multiple selections allowed).

  • Time Field: Select "Manager Receipt Time". Ensure all relevant events are included by checking End Times if there are sync issues.

5. **Save the Data Monitor** and add it to the dashboard by right-clicking in the Navigator panel, selecting "Add to Dashboard As", and choosing a format like Bar Chart. The data will only display if matching events have arrived since enabling the monitor. Allow some time for the data to appear normally. 6. **Finalize the Dashboard**: Right-click on the "Untitled - Dashboard" tab to save it with the newly added Data Monitor. This document provides guidelines and tips for creating dashboards using HP's ArcSight platform, which includes instructions on configuring event graphs and handling data monitor restarts. It also outlines specific use cases for creating login and firewall activity summaries, including how to set up filters and configure panels within the dashboard. Additionally, it covers a lab session focused on workflow and case management, aimed at helping users understand the basics of ArcSight workflows, resource functions, and event sequences. The document is intended for internal use by HP partners and employees only, with information subject to change without notice. The document discusses the ESM (Enterprise Security Manager) Workflow, which is used in a SOC (Security Operations Center). This workflow involves various processes such as informing people about incidents, escalating incidents, and tracking responses. ESM provides resources like Annotations and Cases to facilitate collaboration among users while dealing with incident or event tracking. Annotations are collaborative tools for ArcSight users that help track individual events, assign them to users/groups for escalation, flag events for follow-up, compare incoming events to annotated events, and identify events with similar attributes. They can be used as a triage tool before escalating events to Cases. SOC operators can mark events as reviewed to distinguish new events from those part of an ongoing investigation. Cases contain information about specific incidents, tracking individual or multiple related events, and allow for the attachment of additional troubleshooting information for detailed investigations. Cases can be created and assigned to another user for resolution, with rules being configured to automatically open cases when certain conditions are met. A case must be locked before it is assigned to users. Stages refer to the various steps that an event passes through in Annotations or Cases, ranging from not inspected (queued) to investigation concluded (final). Notifications are mechanisms used to send information about certain conditions in ESM, initiated as automatic actions in rules, system alerts, or when a case is opened or modified. Notification elements include groups, destinations, escalation levels, and acknowledgment. This document outlines the process for creating cases within a system at HP, detailing how notifications can be automatically initiated based on rules or triggered by specific events such as alerts, case openings, or modifications. It also provides step-by-step instructions for manually and automatically generating cases, including details on notification mechanisms like email, pager, text message, and the ESM Console. The document includes examples of creating a case through rule modification to detect unauthorized logins after 'office hours' and assigning attributes, follow-up actions, and selecting a case group. This document outlines a training session titled "HP ArcSight Proof of Concept Boot Camp Training" held in Vienna on March 31-April 2, 2015. 
The content is intended for HP and partner internal use only and subject to change without notice. The focus of the session is on managing users and permissions within the HP ArcSight system. The objectives of the lab are as follows: Upon completion, participants will be able to understand user types, create user groups, manage permissions using Access Control Lists (ACLs), and more about how users interact with resources in the ArcSight environment. Key points include:

  • Users are organized into groups based on roles or other logical groupings, and managing them involves setting their permissions and passwords, as well as enabling or disabling login functionality.

  • Permissions to access specific ArcSight resources are granted through ACLs for specific user groups.

  • Each user type has distinct privileges such as normal users with full access, management tool users limited to management tools, forwarding connectors for data processing, archive utility users restricted to certain operations, and connector installers for adding SmartConnectors.

  • The ACL Editor within the ArcSight console allows viewing or editing of permissions on resources, operations, user groups, events, and sortable field sets. It provides a way to manage which user groups have inspect or edit access to different parts of the system (a minimal sketch of this group-based permission model follows this list).
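The group-based model above can be pictured with a tiny lookup table. The Python sketch below is a hypothetical toy model of inspect/edit ACLs on resources (it is not the ArcSight ACL Editor); the group and resource names are illustrative assumptions.

```python
# Toy model of group-based ACLs (not the ArcSight ACL Editor itself).
ACLS = {
    "Analysts":  {"Active Channels": {"inspect"},         "Rules": set()},
    "Engineers": {"Active Channels": {"inspect", "edit"}, "Rules": {"inspect", "edit"}},
}

def allowed(group: str, resource: str, operation: str) -> bool:
    """Return True if the user group has the given permission on the resource."""
    return operation in ACLS.get(group, {}).get(resource, set())

print(allowed("Analysts", "Active Channels", "inspect"))  # True  (read-only access)
print(allowed("Analysts", "Rules", "edit"))               # False (no edit permission)
```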

This training manual is copyright 2015 by Hewlett-Packard Development Company, L.P., and is restricted for HP and partner internal use. Updates and changes in information may occur without prior notice. This document provides instructions for creating a new user group and user account with restricted access in the ArcSight Console or an ArcSight Web client. The steps include: 1. Navigate to the Users resource from Navigator. 2. Create a user group under 'Shared' | 'Custom User Groups', for example, 'My User Group'. 3. Edit the new group and add required ACLs:

  • View events labeled as 'Non ArcSight Internal events'.

  • Use (Read Only) the Demo Live Active Channel from the 'ARCNET' folder.

4. Create a new user account within this group with a password. 5. Log in using the credentials of the new user account and open the Demo Live Active Channel. Note that parameters in the 'Attributes' tab are greyed out due to restricted access. Additionally, there is information about managing users and permissions, creating custom user groups, and setting up an ArcSight Proof of Concept (PoC) which involves demonstrating how a solution meets customer needs through activities such as product install, event gathering, content creation, and presenting the solution. The PoC requires pre-defined scope, technical considerations, and adherence to legal agreements like the ArcSight Evaluation Agreement and Non-Disclosure Agreement. The "PoC Scoping Document" is a crucial document used in Hewlett-Packard (HP) for planning and executing Proof of Concept (PoC) projects. This document provides general information about the project, including point of contact details, IP addresses, rack space, dates, business requirements, success criteria, scope definition, device types, event rates, tasks, typical schedule, and more. Key elements include: 1. **General Information**: Includes details such as point of contact/IP addresses/rack space/dates. 2. **Business Requirements**: Outlines the interest/project outline/problem being solved. 3. **Success Criteria**: Defines clear, measurable goals to determine PoC success. 4. **Scope**: Specifies device types and numbers, flex connectors, event rates, and more. 5. **Tasks**: Lists prerequisites like paperwork approval, business and technical approval, availability of people and equipment, and flex connector events. 6. **Typical Schedule**: A standard timeline for the PoC process with specific milestones (e.g., days 1-2 for setup, days 3-4 for use cases, day 5 to present results). This document is crucial for any trial involving HP SE and is recommended for other scenarios as well. It acts as a guide that asks essential questions and encourages thorough preparation. The scoping document helps define and agree upon deliverables required for the project. This document outlines a proof of concept (PoC) process for integrating custom event sources into ArcSight, emphasizing simplicity and efficiency. The PoC should be kept short and focused, ideally 4-5 days long at most. To execute the PoC effectively, ensure the customer's infrastructure is ready with IP addresses, subnet mask, default gateway, hostname, domain, SMTP servers, NTP servers, firewall permissions (if applicable), and that all necessary licenses for HP products like ESM, Logger, and Express are in place. During the PoC, device changes should be made through a defined change control process, with access to Device Administrators ensured. The number of use cases and event feeds directly influences the length of the PoC; starting with 1-2 days for minimal setups up to 4-5 days when including FlexConnectors is planned. Useful tools such as Winscp, Putty, Baretail, MD5, Wireshark, and JDBC drivers should be utilized. Preparation before the PoC includes having all binary files required for installation on a portable device due to potential network access issues. It's crucial to have comprehensive documentation available, including release notes, installation guides, administration guides, getting started documents, user guides, and specific documentation for ArcSight features like ESM 101 and SmartConnector configuration. 
During the PoC itself, reconfirm with the client that all device feeds are operational and ensure there is access to Device Administrators when needed. The document emphasizes that FlexConnectors should be restricted to one during initial testing due to potential complexity in development. During a Proof of Concept (POC) for HP ArcSight, several key steps are crucial to ensure success. These include reviewing available hardware specifications, setting up network access, configuring OS and system software prerequisites such as CPU, RAM, and checking Unix RPMs. Storage rules should be followed by setting the event data retention period to 5 or 7 days instead of the default 30-day retention. Testing alert connectors is essential for showcasing content and generating ad hoc events, while taking screen shots and keeping track of deliverables are important for documentation purposes. After completing a POC, it's crucial to follow up with the client regarding purchase orders (PO), including checking if the PO is signed or there's a technical reason for delay. Lastly, be prepared before going on-site by having defined deliverables, smart connectors, device administrators ready, and all necessary IP information and licenses in place. This document outlines a training program for HP ArcSight Proof of Concept Boot Camp focused on use cases in software and systems engineering, specifically within the field of network security. The objectives are to help participants understand why use cases are prioritized, walk through vertical use cases, ensure feasibility and achievability, implement best practices with ArcSight ESM/Express, check results against requirements, and build a decision tree for determining how to approach use case development. The document begins by explaining the importance of understanding use cases in software engineering as they define interactions between roles and systems to achieve specific goals. It then lists various business requirements that can be addressed through ArcSight solutions such as perimeter security, insider threats, real-time event visualization, intellectual property protection, regulatory compliance triggers for actions, and more. The document includes a decision tree diagram to guide the process of determining how to approach use case development based on data sources, events, content refinement, and analysis. It also categorizes these use cases into groups such as network security perspective, business risks, application/service monitoring, insider threats, corporate policy compliance, regulatory compliance, and environmental specific uses. Overall, this training program aims to equip participants with the knowledge and skills necessary to effectively implement ArcSight solutions tailored to specific business requirements and regulatory needs. This document outlines various technical approaches and use cases for monitoring and managing cybersecurity risks in different sectors such as finance, insider threats, and perimeter security. Key areas covered include system inventory, vulnerability assessments, IDS/AV signature updates, malware outbreaks, brute force attacks, and specific threat detection related to financial data access and compliance requirements. The document also details operational aspects like configuration changes, protocol violations, critical systems restarts, and user activity monitoring that are crucial for maintaining security posture in various IT environments. 
This document outlines various use cases for monitoring and compliance in IT environments using the ArcSight ESM (Enterprise Security Manager) product. The use cases cover different aspects of corporate policy, regulatory compliance, data leakage prevention, insider threats, and more. Here's a summary of each section: 1. **Regulatory Compliance - examples**: These include actions such as administrator login/logoff to regulated systems, activity on regulated systems, configuration changes, attacks targeting systems, vulnerabilities on systems, account management, local admin/root use, database/application authentications, and system restarts. 2. **Corporate Policy Compliance - examples**: This includes policy violations like unauthorized usage of a "jump server" in the golden host policy, detection of P2P software, inappropriate software configuration, direct attempts bypassing proxy or email gateways using insecure protocols, direct logins as root, instant messaging usage, browsing inappropriate content (email and web), shared user accounts, and disabling security software. 3. **Governance/Compliance Use Cases**: These are generic use cases that demonstrate the product's capabilities in network traffic monitoring, firewall traffic analysis, user account management, VPN traffic monitoring, and more. Specific examples include inbound connections accessing internal protected address space, VPN Address Space, all firewall traffic, and created, deleted, or modified user accounts. The document is intended for HP and partner internal use only and the information contained may be subject to change without notice. Copyright © 2015 Hewlett-Packard Development Company, L.P., with subsequent numbers indicating continuous copyright increments. This document discusses the use case of monitoring outbound connections to a public address space using HP's system features, specifically focusing on creating an active channel with filters. The goal is to monitor all outbound connections from assets in the "Protected" category to the "Public Address Space" zone. The content required includes setting up an active channel with a filter that continuously evaluates events over the last 30 minutes, applying two conditions: source zone evaluating assets in the "Protected" category and destination zone evaluating zones in the "Public Address Space". It is important to select specific data sets for this evaluation. The document also covers interesting generic cases related to user activity patterns such as simultaneous logins from different IP addresses using the same account, tracking activities when a generic service account is used for all users on a target, and handling cases where a user logs in from different geographical locations with the same account. These scenarios include creating rules based on country codes, aggregating events by time proximity, and maintaining an active list to track user activity over extended periods. The document provides detailed steps and conditions for each rule creation, ensuring that specific fields are checked for aggregation or differentiation. This text appears to be related to a technical document or guide about using rules with active lists and variables in software, possibly from Hewlett-Packard (HP) for their ArcSight product. The document is part of a series numbered consecutively, each section discussing different aspects of rule creation and usage. Here's a summarized breakdown of the content: 1. 
**Introduction to Rules and Variables**: The first few sections introduce basic concepts about rules in the system, including what they are used for (detecting user actions like successful logins) and how variables can be utilized locally or globally for optimization purposes. 2. **Chaining Multiple Rules**: The document explains how to chain multiple rules together using active lists and variables. It outlines that the first rule adds a username and source country code to an active list. A second, more specific rule is created as a lightweight version of the first, which not only detects user logins but also checks values in the active list against conditions (username presence and country codes). 3. **Rule Conditions**: The rules are designed to fire under certain conditions: when successful logging is detected, along with other criteria like whether the username already exists in the active list and if the country code differs from previous records for that same username. This involves using a variable extracted from the list (username and country code) and checking these against event data. 4. **Variable Usage**: The variables used here are local but can be optimized as global based on performance needs, extracting specific values from lists relevant to the rule's conditions. 5. **Conclusion and Footnotes**: The document ends with a series of copyright notices at the bottom that indicate this information is restricted for HP and partner internal use, likely part of a larger educational or training material related to advanced malware detection through threat intelligence techniques. This text appears to be related to creating content for detecting badge swipes by terminated employees in a company setting. It outlines the process of generating two active lists using provided .csv files containing badge ID and employee information. Additionally, it introduces two global variables, getUserByBadgeID and UserInfoByBadgeID, which are used to retrieve user details based on badge IDs from the previously created active lists. The content is copyrighted by Hewlett-Packard Development Company, L.P., indicating that this document serves as a guideline or template for internal company use in managing employee access through badge swipes. This document provides an overview of different architecture scenarios for HP ArcSight solutions, designed to address various log management, SIEM, and security information event management needs. The scenarios range from basic logging with limited capabilities to advanced SIEM solutions capable of scaling up to enterprise-level requirements. Each scenario features a combination of Logger, Connector, Express, and ESM components that support different functionalities: 1. **Scenario #1 - Logger**: This is a log management only solution, suitable for environments where structured data search or pipeline reporting are not required. It has high inbound EPS performance but lacks the capability to handle unstructured data effectively. 2. **Scenario #2 - Connector & Logger**: Offers universal log management with support for various data sources and improved analysis through indexed event processing. However, it may have limitations in terms of inbound EPS performance due to its structured data handling capabilities. 3. **Scenario #3 - Express (Log Management & SIEM)**: A set menu SIEM solution that provides an appliance-based platform with limited log retention capability beyond the onboard storage. 
It is suitable for environments requiring a simple deployment and license-based upgrades. 4. **Scenario #4 - ESM (Scalable SIEM)**: An 'à la carte' solution, offering flexibility in customization based on customer requirements. It provides a software-based platform with significant scalability but may be less appropriate for log management without specific enterprise-class needs. Scenarios #5 to #7 cover various combinations of Connector, Express, and Logger components, each tailored for different use cases from 'focused customers' to scalable SIEM solutions, emphasizing either correlation through ESM or direct logging functionality. The architecture scenarios highlight a range of options that balance simplicity and complexity based on specific customer needs and technical requirements. This document discusses two different methods for data collection in a network model using HP's ArcSight software. The first method involves an Express configuration with a Connector Server, and the second uses a Logger configuration without a Connector Server. Each method has its own advantages and disadvantages, as outlined below: 1. **Express Configuration with Connector Server:**

  • **Advantage**: This setup is easier to configure than the Logger method and provides cost-effectiveness for typical customers by focusing on "Actionable Events."

  • **Disadvantage**: If Express fails, data collection stops completely since it relies on the ArcMC Connector Server. Additionally, raw event information might not be available in Express.

2. **Logger Configuration:**

  • **Advantage**: This setup is more flexible and allows each connector feed to be configured for Logger and Express independently, providing better control over what data is collected and where it is sent.

  • **Disadvantage**: It requires double the bandwidth because data must be forwarded from Logger to Express, which can be less cost-effective than the Express method (a rough bandwidth sketch follows below). Additionally, scalability might be limited by the Logger's forwarding rate (typically up to 5,000 EPS).

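To put the bandwidth concern in rough numbers, here is a minimal Python sketch that multiplies event rate by average event size and doubles it for the duplicated Logger-to-Express feed. The 1765-byte average event size is taken from the Logger dimensioning guidance later in this document; treating it as the uncompressed on-the-wire size is an assumption for illustration only.

```python
BYTES_PER_EVENT = 1765   # conservative average raw event size (see Logger dimensioning below)

def forwarding_mbps(eps: float, duplicated: bool = True) -> float:
    """Rough WAN estimate: events/second * bytes/event * 8 bits, doubled when
    every event is sent to Logger and then forwarded on to Express."""
    bits_per_second = eps * BYTES_PER_EVENT * 8
    if duplicated:
        bits_per_second *= 2
    return bits_per_second / 1e6

# Example: a 5,000 EPS feed forwarded from Logger to Express needs roughly 140 Mbps.
print(f"{forwarding_mbps(5000):.0f} Mbps")
```
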
Furthermore, the document provides technical details about HP ArcSight Connectors, including their supported operating systems and architectures:

  • **Supported Operating Systems**: The connectors are compatible with various Windows and Linux distributions as well as Solaris and AIX. They support both 32-bit and 64-bit architectures.

  • **Certifications and Support**: The document specifies that ArcSight supports these platforms, addresses bugs, and has conducted sanity tests on them. Certified status means additional regression testing has been performed to ensure compatibility with the software.

Overall, the choice between using Express with a Connector Server or solely relying on Logger depends on specific needs such as cost efficiency, flexibility, scalability, and data handling requirements. This document outlines guidelines for Logger Dimensioning within the context of HP's products and solutions. The key points include performance metrics, storage requirements, communication modes, information gathering for architecture decisions, and primary use case considerations. Performance is discussed with reference to event per second (EPS) rates; without full text indexing, 'best practice' rates are around 10Keps, but this drops to a maximum of 5 to 6 Keps when full text indexing is enabled. Forwarding to Express/ESM has a limit of up to 6,000 eps. A conservative storage estimate for each event is given as 1765 bytes, and examples are provided for different EPS scenarios over a 90-day retention period with structured data. Storage requirements vary significantly based on the number of EPS; for example, storing logs from 500 to 250,000 eps would require storage capacities ranging from 0.7TB to 34TB and beyond respectively. High EPS solutions may necessitate multiple Loggers. The document also details various communication modes including SSL (uncompressed), console, Cert, XML via SSL possible for all, Express/ESM Connector, SmartMessage, and Logger. For information gathering during the dimensioning phase, key questions include event throughput requirements, event type requirements, log retention requirements, high availability / fail-over requirements, additional customer requirements, bandwidth considerations, NAT considerations, MSP considerations, regional or global considerations, and compliance requirements. Use case requirements specific to IT operations, compliance, real-time monitoring, correlation, and data storage options are also highlighted. Dimensioning involves assessing device count, event rates, log volume, source types (e.g., firewall, proxy, Windows, applications), and considering factors such as the number of datacenters, remote sites, high availability / disaster recovery configurations, global considerations including legal issues, encryption requirements for data held across states, and WAN data transmission optimization methods like compression or encryption. The document concludes with a call to action for questions, offering contact details for further information. The document outlines several enhancements and updates to HP's Sizing Tool, including new features such as cost comparison, growth requirements, multiple instances, logger with RAW and CEF modes, army of logger support, ArcMC management sizing with cost compare, and customer output changes. It also provides a detailed discovery process for understanding client compliance mandates, identity management preferences, reporting needs, third-party integrations, and device information. The document includes specific guidelines for designing systems using N-Tier architecture, including topology scenarios, best practices for sizing and capacity planning, and existing data usage analysis from Logger. The document discusses various aspects related to logging and event management systems used in network security solutions. It covers planning for different components such as connectors, loggers, and Event Stream Processing (ESP) units within the ESM (Enterprise Security Manager). Key points include: 1. **ESM 6.5 Data Usage and Compression:**

  • To monitor data usage of 'agent:050' over the last 24 hours, query the ESM command center with specific parameters to get daily totals of custom fields `deviceCustomNumber3` (for event counts) and `deviceCustomString4` (for raw size in bytes).

  • Divide the total raw size by the number of events to get the average raw event size, and convert the daily totals into MB for readability; comparing raw size against the space actually consumed on disk then gives the average compression ratio (a minimal calculation sketch follows).

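A minimal Python sketch of that calculation is below. The daily rows are assumed to have already been exported from the ESM command-center query described above; the sample values and the export format are illustrative placeholders, and only the roles of `deviceCustomNumber3` (event count) and `deviceCustomString4` (raw size in bytes) come from the text.

```python
# Hypothetical daily totals for 'agent:050' data-usage events (placeholder numbers).
daily_totals = [
    {"day": "2015-06-29", "deviceCustomNumber3": 41_200_000, "deviceCustomString4": "72750000000"},
    {"day": "2015-06-30", "deviceCustomNumber3": 39_800_000, "deviceCustomString4": "70100000000"},
]

for row in daily_totals:
    events = row["deviceCustomNumber3"]
    raw_bytes = int(row["deviceCustomString4"])        # raw size arrives as a string field
    avg_event_bytes = raw_bytes / events               # average raw event size
    raw_mb = raw_bytes / (1024 ** 2)                   # convert to MB for readability
    print(f"{row['day']}: {events:,} events, {raw_mb:,.0f} MB raw, "
          f"{avg_event_bytes:.0f} bytes/event on average")
```
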
2. **Connectors:**

  • SW Connectors typically have a ~300MB footprint when installed on systems or VMs.

  • They are subject to EPS (Events Per Second) limitations, which vary based on device types:

  • Cisco PIX has a limit of 2,500 EPS per container or appliance.

  • Windows systems have limits at 1,500 EPS per container or appliance.

  • Oracle Audit DB has a limit of 150 EPS per container or appliance.

  • The Windows Unified Connector (WUC) is singled out as particularly complex because of its polling and threading model. Network latency greater than 30 ms degrades WUC performance, as does assigning too many hosts to a single connector. A rough capacity check against the EPS ceilings above is sketched below.

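As a quick way of applying those per-device EPS ceilings, here is a small Python sketch that estimates how many containers a feed needs. The ceilings are the figures quoted above; the device list, function name, and example feed rates are illustrative assumptions.

```python
import math

# Per-container (or per-appliance) EPS ceilings quoted above.
EPS_LIMITS = {
    "cisco_pix": 2500,
    "windows": 1500,
    "oracle_audit_db": 150,
}

def containers_needed(device_type: str, expected_eps: float) -> int:
    """Rough count of containers required to keep a feed under its EPS ceiling."""
    return max(1, math.ceil(expected_eps / EPS_LIMITS[device_type]))

# Illustrative feeds (the EPS figures are placeholders, not measured values).
for device_type, eps in [("cisco_pix", 4000), ("windows", 2200), ("oracle_audit_db", 120)]:
    print(f"{device_type}: {eps} EPS -> {containers_needed(device_type, eps)} container(s)")
```
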
3. **Loggers:**

  • Software Logger licensing is based on increments of 5 GB/day. Typical customers have online storage for up to 90 days and offline storage for an additional 275 days.

  • A conservative estimate suggests that a 250 GB/day license would fill the logger's onboard storage in about 32 days, with indexing rates typically between 5,000 and 8,000 EPS for CEF data (see the sizing sketch below).

  • The output (forwarders) is limited to around 2,500-3,000 EPS.

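The following Python sketch ties the Logger sizing numbers together: the conservative 1765-byte average event size and 90-day retention from the dimensioning guidance, plus the 250 GB/day fill-time statement above. The ~10:1 compression ratio and the 8 TB of usable onboard storage are assumptions chosen for illustration; they roughly reproduce the figures quoted in the text but are not stated there.

```python
BYTES_PER_EVENT = 1765      # conservative raw estimate from the dimensioning guidance
SECONDS_PER_DAY = 86_400

def raw_gb_per_day(eps: float) -> float:
    return eps * BYTES_PER_EVENT * SECONDS_PER_DAY / 1e9

def retention_storage_tb(eps: float, days: int = 90, compression: float = 10.0) -> float:
    """Storage for a retention window, assuming a ~10:1 compression ratio."""
    return raw_gb_per_day(eps) * days / compression / 1000

def days_to_fill(license_gb_per_day: float, usable_tb: float = 8.0) -> float:
    """Days until onboard storage fills, assuming ~8 TB of usable space."""
    return usable_tb * 1000 / license_gb_per_day

print(f"500 EPS ≈ {raw_gb_per_day(500):.0f} GB/day raw, "
      f"{retention_storage_tb(500):.1f} TB for 90 days")          # ≈ 0.7 TB
print(f"250 GB/day fills ~8 TB in {days_to_fill(250):.0f} days")  # ≈ 32 days
```
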
4. **ESM/Express Planning:**

  • ESM E7400 appliances are end of life (EOL), and the software licensing for ESM 6.x scales with GB/day usage.

  • Storage is recommended to be configured in RAID 1+0 with 15K RPM drives, and base instances can expand with additional 5 GB/day add-ons.

  • Express Virtual Appliance has a maximum of 1,250 EPS and supports various add-on modules for threat detection and management functions.

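For the GB/day licensing model above, here is a short Python sketch of the add-on arithmetic and the Express Virtual Appliance EPS check. The 5 GB/day add-on size and the 1,250 EPS ceiling come from the text; the base-instance entitlement used here is an assumption for illustration only.

```python
import math

def gb_per_day_addons(required_gb_per_day: float, base_gb_per_day: float = 5.0,
                      addon_size_gb: float = 5.0) -> int:
    """Number of 5 GB/day add-ons needed on top of an assumed base entitlement."""
    extra = max(0.0, required_gb_per_day - base_gb_per_day)
    return math.ceil(extra / addon_size_gb)

EXPRESS_VA_MAX_EPS = 1250   # Express Virtual Appliance ceiling quoted above

def fits_express_va(expected_eps: float) -> bool:
    return expected_eps <= EXPRESS_VA_MAX_EPS

print(gb_per_day_addons(42))     # 42 GB/day -> 8 add-ons on an assumed 5 GB/day base
print(fits_express_va(900))      # True
print(fits_express_va(3000))     # False: needs a larger Express/ESM model
```
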
The document provides technical details and guidelines to plan and configure logging solutions within network security environments, ensuring optimal performance and efficient resource usage. The document discusses various factors affecting system performance in HP ArcSight Event Manager (ESM) and Logger deployments, focusing on Input/Output Per Second (IOPs), storage, memory, CPU, and operations. It explains that IOPs do not directly dictate EPS but rather determine the performance of EPS. Factors such as storage type, capacity, RAM, CPU usage, and read/seek operations influence IOPs. The document also mentions using tools like "iostat" to measure transactions per second for IOPs measurement. Additionally, the document provides information on licensing options for log messages (RAW vs. CEF), emphasizing that RAW is used for original message storage while CEF is normalized and used for sizing purposes. It clarifies the differences between these two formats and discusses how "Preserve RAW" can affect the size of CEF events. The document introduces a new, customizable SIEM sizing and capacity planning tool called the *NEW* SIEM Worksheet, which assists in unifying product sizing across different devices. The worksheet includes various tabs for inputting device information, calculating event rates, storage sizing, bandwidth, peak EPS, online/offline storage, and more. It also offers recommendations for specific ArcMC and ESM products based on hardware suitability assessments. Overall, the document provides valuable insights into how to evaluate and optimize system performance in an SIEM environment by considering multiple factors that can impact overall performance. The document provides a detailed worksheet for SIEM (Security Information and Event Management) system design, aimed at summarizing various aspects including storage, bandwidth requirements, peak event processing, and device inputs. Here's a breakdown of its key features and sections: 1. **SIEM Worksheet - Device Inputs**: This section includes calculations such as Total EPS, Total EPD, Moderate Peak EPD, High Peak EPD, Total EPD with Peaks, Total Daily Raw, Total Daily CEF, etc., which are used to determine the storage and bandwidth requirements based on device inputs like quantity, preservation of raw data, turbo mode status, and peak event processing. 2. **Logger and Connectors**: These features provide specific details for different types of devices:

  • Logger: Includes parameters for reduction, compression level, online/offline storage, model selector, hardware suggestions, and multi-site storage calculations.

  • Connectors: Determines the best ArcMC-based model according to the required number of containers and allows event reduction at the connector device-type level.

3. **Customer Output**: Consolidates sizing details from previous sections for sharing with sales reps and prospects/customers, detailing required online/offline storage, bandwidth, etc. 4. **Considerations**: Contains design notes and configuration details for sizing considerations. 5. **About**: Provides a revision history and change log including bugs fixed and features added. The document also includes specific walk-thru instructions for using the worksheet, detailing how to use dynamic content displays, tooltips with explanations, and settings like peak EPS, turbo mode, quantity of devices, etc., which can be adjusted manually or automatically based on predefined distributions. Finally, there are sample device inputs provided to demonstrate how data should be entered into the system. The calculations within this document help in determining storage needs, event processing capacities, and bandwidth requirements for each device, providing a comprehensive overview for designing an effective SIEM setup. This document provides a detailed overview of the parameters and settings within a tool called SIEM Worksheet, which is designed to assist in sizing various products for tasks such as event management (ESM), Express (Express and Logger tabs), and device input configurations. Key features include: 1. **Device Inputs**: Users can select between Turbo mode and Preserve Raw data options, with default settings determining the size of CEF files. The tool also allows users to view comparisons between preliminary and express outputs after parameter modifications. 2. **Compression Level**: This is a pre-defined setting that helps in managing the amount of data stored by allowing comparison between different modes. 3. **Filtering and Aggregation**: Users can input the percentage reduction in total events, with default set to 0%. This feature aids in optimizing capacity and storage requirements. 4. **Additional Growth**: Allows users to specify growth percentages for events, also set at a default of 0%. 5. **Retention Periods**: Defines online and offline retention periods based on the type of data being stored. For example, online retention requires up to 12TB per instance with a requirement for customer-managed SAN storage, while offline retention may use SAN/NAS/DAS for archive storage if required. 6. **Model Selection**: The tool provides various models (such as Express and Logger tabs) where users can select desired models which will calculate add-ons based on the product selected. Reference models are provided to evaluate hardware or software options effectively. 7. **Output Types**: Different worksheets, such as ESM, Express, and Logger, offer distinct EPS coverage with different model selections. The tool also distinguishes between NSS (Networked Storage System) and Desktops for more specific sizing needs. 8. **ESM SW GB/day License Model**: Users must select a preferred ESM software model from a dropdown list, which affects the storage requirements based on usage patterns. In summary, this tool is designed to facilitate the process of determining appropriate hardware or software configurations for different types of SIEM and data management tasks by providing detailed inputs, settings adjustments, and predictive models that can be fine-tuned according to specific user needs. 
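To make the worksheet's device-input math concrete, here is a minimal Python sketch that applies growth and filtering/aggregation percentages to an event rate and converts the result into daily event and GB totals. The 1765-byte raw size is reused from the Logger guidance; the normalized (CEF) event size and the order in which growth and reduction are applied are assumptions, since the actual worksheet derives these from the Turbo mode and Preserve Raw settings.

```python
SECONDS_PER_DAY = 86_400
RAW_BYTES_PER_EVENT = 1765    # conservative raw estimate reused from the Logger guidance
CEF_BYTES_PER_EVENT = 1000    # assumed normalized (CEF) size, for illustration only

def device_input(eps: float, growth_pct: float = 0.0, reduction_pct: float = 0.0) -> dict:
    """Worksheet-style totals: apply growth, then filtering/aggregation reduction,
    then convert to events per day and daily raw/CEF GB."""
    effective_eps = eps * (1 + growth_pct / 100) * (1 - reduction_pct / 100)
    epd = effective_eps * SECONDS_PER_DAY
    return {
        "effective_eps": effective_eps,
        "events_per_day": epd,
        "daily_raw_gb": epd * RAW_BYTES_PER_EVENT / 1e9,
        "daily_cef_gb": epd * CEF_BYTES_PER_EVENT / 1e9,
    }

# Example: a 2,000 EPS feed with 20% growth and 10% filtering (placeholder numbers).
print(device_input(2000, growth_pct=20, reduction_pct=10))
```
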
This document outlines the methodology for selecting and deploying Enterprise Security Manager (ESM) models based on various factors such as growth, filtering, add-ons, cost comparison, online retention, and reference hardware. The selection criteria include EPS (events per second), storage requirements in GB/day, number of instances, deployment sites, and CEF data storage. The ESM model selection process involves: 1. Growth Impacts: This includes the effects on Total EPS, Total Raw GB/day, and Total NSS and Desktop device count due to growth. 2. Filtering/Aggregation Impacts: These impacts are observed in the Total EPS after Growth and Total Raw GB/day after Growth. 3. Add-ons: Add-ons for each model are calculated based on the number of instances used in a multi-instance deployment, with specific add-ons like 5 GB/day, NSS, and Desktop devices being cost-effectively calculated. 4. Cost Compare: This involves comparing various ESM models based on their costs to find the most cost-effective option. The color coding indicates whether an option is cheaper or more expensive compared to the currently selected model. 5. Online Retention: The number of ESM instances is determined by the storage required for online CEF data, with 12TB per instance necessary for storing data over a specified period. 6. Reference Hardware: This refers to the hardware selection based on total EPS and the number of instances, with maximum recommendations varying according to the instance requirements. The suggested models are conservative estimates that may vary based on system configurations. 7. Deployment Site: The required number of ESM instances or deployment sites impacts the calculation of instances needed for a specific setup. 8. Number of Instances: This is set based on the highest requirement from CEF online storage and deployment site requirements, with up to 23,000 Sustained EPS per instance possible. The suggested models are updated according to these settings. Overall, this document provides a detailed framework for selecting the appropriate ESM model considering various operational parameters and system constraints. The provided document outlines various aspects related to Express systems, focusing on sizing, licensing, deployment, and cost comparison. Key points include: 1. **Express Sizing**: This section discusses how the total EPS (events per second) requirement determines the number of instances needed for an Express appliance. For example, if a system requires 30,000 EPS, it would require 2 instances based on a maximum capacity of 23,000 EPS per instance. If the required EPS is higher than this limit, the number of instances is set to the highest number required. 2. **Desktop Licenses**: The document details how desktop license calculations are automatically performed depending on whether the required NSS (network security services) devices exceed or fall short of bundled device licenses. If fewer devices are required, the calculation adjusts accordingly: "Desktop license is equal to required

." 3. **Common Features**: This section covers several features that impact the system's performance and capacity, including growth and reduction which can affect total EPS, NSS, and desktop devices; online and archive retention settings; add-ons for expansion capabilities; and deployment sites influencing instance numbers. 4. **Express Cost Compare**: The cost comparison feature allows users to select models based on the total EPS and number of devices without cost comparison or through a cost comparison that considers various Express model options, with color coding indicating which is cheaper relative to others. 5. **Logger Sizing**: This part deals with sizing loggers for data collection, where storage requirements are calculated based on retention periods affecting instance numbers. It also differentiates between CEF (Common Event Format) and RAW operation modes for receiving logs in respective formats. 6. **Multiple Instances of Express and Logger Sizing**: Specific guidelines are provided when the EPS exceeds 7500 or is less than 1250, necessitating calculation of required instances and cost comparison across different models. Overall, this document provides detailed procedures for sizing Express systems based on performance requirements, handling desktop licensing according to device counts, and managing various operational aspects including storage and deployment configurations. This document provides a detailed comparison of two logging solutions, the Logger Appliance (L7505) and software logger options. The primary focus is on cost analysis where the L7505 model has been selected as a baseline for comparison with both the L7505s (a lower-end version) and L7505x (a higher-end version). Key findings from the cost comparison include: 1. The Logger Appliance, specifically the L7505 model, is found to be more cost-effective when compared with both software logger versions (L7505s at 85% cheaper and L7505x at around 69% cheaper). Despite this, the appliance requires fewer instances (only 4 vs. 3 for software) but has a larger device footprint. 2. The color-coding in the cost comparison tool helps visually identify which model is cheaper or more expensive relative to the baseline L7505. A negative sign ("-ve") indicates that the selected model is cheaper than the current L7505 model. 3. Additional benefits of using the Logger Appliance, referred to as "Army of Logger," include improved search performance and distributed storage capabilities, which support increased online retention beyond the initial 180 days of raw data storage. 4. The ArcMC Sizing and Connector Sizing sections provide recommendations for optimal server configurations based on specific requirements and costs. This includes calculating the number of required connectors and containers to manage the logger appliances or software loggers effectively. 5. The Customer Output features a simple, two-page layout with options to hide or display sections as needed. It also supports customization of notes according to customer preferences, enhancing user flexibility in accessing relevant information. 6. The final output is available in PDF format, ensuring that hidden sections are not included, and each page includes a header for easy navigation. These findings and features collectively demonstrate the comprehensive nature of the document, which serves as a valuable tool for cost analysis and system configuration decisions within an organization utilizing HP's logging solutions. 
This document provides detailed information about the SIEM Worksheet tool, including its features, considerations for use, documentation templates, and an agenda for a Proof of Concept (PoC) boot camp related to HP ArcSight. The content is copyrighted by Hewlett-Packard Development Company, L.P., with restricted access for internal use by HP and partner companies. The SIEM Worksheet tool has a "Considerations" tab that includes change logs, fixed issues, new features, formulas used in calculations, versions timeline, and caveats. It also outlines considerations related to the Bill of Materials (BoM), which is crucial for sizing and capacity planning but requires adding additional components like consoles, CIPs, and other add-ons not automatically provided by the tool. The tool may use customer-specific variables to accommodate complex requirements such as multi-tenancy, geo-distributed sites, lab environments, HA/DR planning, etc. Future releases are expected to provide performance data for specific products like CIPs, AppView, and Risk Insight. Documentation includes a Proposal Solution Design Template that is suitable for customer proposal presentations. It covers slides such as solution title, overview, assumptions, diagram, Bill of Materials (should match quoted components), customer requirements, benefits, capacity assertions, and worksheets for each product. Additionally, there are Visio Diagram Templates provided for visual representation of the solutions. The agenda for the PoC boot camp on ArcSight includes sessions on ESM (Event & Session Management) features like Active Lists, Dashboard's, Data Monitors, Reports, Workflow and Case Management, Managing Users and Permissions, Outlining HP ArcSight Proof of Concept, Use Cases, Sizing and Architecture, and Q/A. The session is conducted by Philippe Jouvellier from HP ESP | Global Partner Enablement. Overall, the document provides detailed information for internal use about HP's SIEM tool capabilities, documentation templates, and training resources for their ArcSight solution. This document outlines the objectives for a training session on "HP ArcSight Proof of Concept Boot Camp Training," focusing on active lists and session lists within the software platform. The main objective is to provide attendees with knowledge about active lists, their usage in maintaining information tracking, querying data through rules, and managing data capacity using TTL (Time-to-Live). Upon successful completion of this lab: 1. Participants will be able to describe what active lists are and distinguish them from session lists. 2. They will understand how to create, configure, populate, view entries in, and chain rules with active lists. 3. The training covers the various ways active lists can be used, such as in malicious IP and domain watchlists, rule throttling, enriching events with additional information, user profiling, and generating reports. 4. It details how to access and manipulate active lists through the software interface by selecting from the Navigator Panel. 5. The anatomy of an active list is explained, including its name, capacity governed by TTL, allowing multiple mappings for data entries, and a key field that allows rules to query specific fields. 6. Additional information about optimizing data within active lists to reduce memory usage through hashing and managing entry limits (up to 5 million entries with high-end machines) is provided. 
This document discusses the creation, population, and characteristics of two types of lists in ArcSight: Active Lists and Session Lists. Active Lists are configurable data structures that can be used to store events or fields which need quick access for rule lookups. They use hashes to reduce memory usage by allowing only a subset of entries to remain in memory based on capacity settings (TTL - Time To Live). Entries can be added manually or imported from CSV files, and once saved, their parameters cannot be modified. ArcSight creates audit events related to statistics of Active Lists. Session Lists are similar but have specific differences such as always including Start Time, End Time, and Creation Time fields in entries, and using a "terminated" status instead of being removed. They partition data into weekly partitions which allows for efficient time-based queries since they don't need to fit entirely in memory. Session Lists are optimized for large datasets over time and create audit events related to their statistics as well. To create Active or Session Lists, navigate to the Navigator Panel, right-click on "Active Lists" or "Session Lists," and select 'New' from the menu. For Session Lists specifically, choose the "Session Lists" tab in the Navigator Panel after selecting "Lists." The naming of a session list follows similar rules as active lists, allowing spaces and special characters but checking the box for overlapping entries enables multiple instances based on key pairings. The article also briefly mentions an example scenario (Example #1) where a firewall blocked IP address is added to a list and maintained for 2 hours, which could be further elaborated upon in context with detection rules applied within ArcSight. This document outlines the process of creating an active list in a firewall system to block IP addresses that have been identified as attackers. The steps involve setting up a field-based active list, defining its fields, and using rules to populate it with blocked IP addresses. Here's a summary: 1. **Create Active List**: Navigate to the "Lists" section in the application, right-click on "admin’s Active Lists", select "New Active List", provide a name like "External Firewall Blocked IP address" and set the Time To Live (TTL) period to 2 hours (0 means it won't expire). 2. **Define Fields**: Create a field-based active list by selecting the appropriate radio button, then check the key fields which should include "Source IP address", "Firewall IP address", "FW Product Name", and "FW Vendor". Define these fields as columns in a table-like structure. 3. **Set Up Rules**: Navigate to "Rules" and create a new rule by right-clicking on "’s Rules" and selecting "New Rule". For the standard rule, provide a name and go to the conditions tab where you would normally set up filters (e.g., firewall filter) used in previous tests for consistency. 4. **Populate List**: Configure actions within this rule to add the attacker's IP address, device address, product, and vendor as entries into the active list each time a condition is met, ensuring that even if only one occurrence of such an event is needed, aggregation is performed properly to trigger the action. This method helps in maintaining a record of blocked IP addresses for future reference and analysis, helping in effective security management and response. This document outlines the steps for creating a rule to aggregate events where external IP addresses are blocked by a firewall. 
The process involves several tasks within a system, including defining filters, aggregating specific event fields, mapping these fields to an active list, and activating the rule. Here’s a summary of the steps: 1. **Define Filter**:

  • Right-click on "Event1".

  • Select "Category", then "CategoryBehavior".

  • Set the filter criteria as "StartsWith" with the term "/Access".

  • Add another line with categoryOutcome set to "/Failure".

  • Click OK.

2. **Aggregation**:

  • Go to the "Aggregation" tab.

  • Select "Add" for "Aggregate only if these fields are identical".

  • Aggregate on fields: Attacker Address (blocked IP Address), Device Address (firewall IP Address blocking external IP), Device Product (firewall model from vendor), and Device Vendor (firewall brand name).

  • Click Apply.

3. **Action Tab**:

  • Leave the action trigger at its default of "On First Event", meaning the action fires on the first event that matches the rule.

4. **Add to Active List**:

  • Right-click on "On First Event", select "Add", then choose "Active List" and "Add To Active List".

  • Map aggregated fields to the active list you created ("External Firewall Blocked Addresses").

5. **Save and Activate Rule**:

  • Click OK in the mapping pop-up, then save the rule by clicking "OK".

  • Drag the newly created rule from the current location to the "Real-Time Rules" folder to activate it.

  • Optionally, link the rule for better management under a project folder while keeping it active on the system.

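A minimal Python sketch of the rule logic just described: filter on categoryBehavior/categoryOutcome, key the aggregation on the four identical fields, and write the result into a TTL-bounded active list (the two-hour TTL mirrors the earlier "External Firewall Blocked IP address" list). The event dictionaries, field spellings, and helper names are illustrative; the real configuration is done in the Console, not in code.

```python
import time

ACTIVE_LIST_TTL = 2 * 60 * 60          # two hours, as in the active list created earlier
blocked_addresses = {}                 # (attacker, device, product, vendor) -> expiry time

def matches_filter(event: dict) -> bool:
    """Conditions tab: categoryBehavior StartsWith '/Access' AND categoryOutcome = '/Failure'."""
    return (event.get("categoryBehavior", "").startswith("/Access")
            and event.get("categoryOutcome") == "/Failure")

def process_event(event: dict) -> None:
    """On the first matching event, add the aggregated fields to the active list."""
    if not matches_filter(event):
        return
    key = (event["attackerAddress"], event["deviceAddress"],
           event["deviceProduct"], event["deviceVendor"])
    blocked_addresses[key] = time.time() + ACTIVE_LIST_TTL   # (re)start the entry's TTL

def expire_entries(now: float) -> None:
    """Drop entries whose TTL has elapsed, as ESM does automatically."""
    for key, expiry in list(blocked_addresses.items()):
        if expiry <= now:
            del blocked_addresses[key]
```
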
This process ensures that events where external IP addresses are blocked by the firewall are efficiently aggregated and tracked according to defined criteria. This document outlines procedures for creating an active list in a security system, which is used to detect and track specific events such as interactive logins after a certain time. The process involves setting up rules that aggregate data based on specified conditions and then populating an active list with the relevant information. The steps include navigating to the "Lists" section within the software and creating local variables for evaluating time, including conditions like detecting successful logins and adding specific event fields such as Username, Source IP, Destination IP, and Event Time (using the End Time field) to the active list. The TTL is set to 5 days for these entries. For more complex scenarios, multiple rules can be created in sequence to detect potential threats like repeated login failures followed by successful logins from the same user account. Each rule involves setting conditions based on events such as login failures and successful logins, checking if an entry already exists in the active list before adding new data. Additionally, there are examples of chaining rules for more complex detections, where multiple rules work together to detect unauthorized access attempts, system compromises, and other malicious activities. The results from these rules are written into separate active lists which can be used for further analysis or alerting mechanisms based on the severity of the detected threats. This document outlines a process for creating two rules using active lists in an unspecified security tool or system. Here's a summary of the steps involved in defining these rules: **Rule #1:** 1. Navigate to the Aggregation tab, set a low threshold level by selecting Aggregate Tab and configuring it with specific parameters such as number of matches (3) and time frame (2 minutes). 2. Ensure that only identical fields are aggregated using the "Aggregate only if these fields are identical" option. Add the following CEF fields: AttackerAddress, TargetAddress, and TargetUserName. Click OK and Apply. 3. Move to the Actions tab, where it's necessary to change the default setting from "On First Event" to "Deactivate Trigger on First Event" and "Activate Trigger on First Threshold". 4. Add an Active List named "Repeated Login Failures", mapping required fields appropriately: Username maps to Target User Name, Target Host maps to Target Address, and Source IP maps to Attacker Address. Click OK and Apply. **Rule #2:** 1. This rule detects successful logins where the username is in the active list "Repeated Login Failures". It involves three conditions: categoryBehavior=/Authentication/Verify, categoryOutcome=/Success, and “InActiveList” set to true. 2. Select the previously created "Repeated Login Failures" active list from the dropdown menu, map event fields with defined fields, click OK, and apply in the "InActiveList" window. 3. Aggregate results by selecting Aggregate Tab, setting parameters for number of matches (1) and time frame (2 minutes), ensuring identical field aggregation, adding CEF fields: AttackerAddress, TargetAddress, and TargetUserName, then clicking OK and Apply. 4. The action set name field to "Compromised User Account?". Rule #2 is complete and should be copied to the Real Time Rules directory. 
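The chained pair of rules summarized above (a threshold rule that writes repeated login failures to an active list, and a follow-up rule that flags a successful login for a listed user) can be sketched like this in Python. The three-failure/two-minute threshold, the list name, and the "Compromised User Account?" label come from the text; the event shape and data structures are assumptions.

```python
from collections import defaultdict, deque

WINDOW_SECONDS, THRESHOLD = 120, 3
failure_times = defaultdict(deque)     # (attacker, target, user) -> recent failure timestamps
repeated_login_failures = set()        # stands in for the "Repeated Login Failures" active list

def rule1_login_failure(event: dict, now: float) -> None:
    """Rule #1: activate the trigger on the first threshold of 3 failures within 2 minutes."""
    key = (event["attackerAddress"], event["targetAddress"], event["targetUserName"])
    times = failure_times[key]
    times.append(now)
    while times and now - times[0] > WINDOW_SECONDS:
        times.popleft()
    if len(times) >= THRESHOLD:
        repeated_login_failures.add(key)

def rule2_login_success(event: dict):
    """Rule #2: a successful login whose user is already in the active list raises the alert."""
    key = (event["attackerAddress"], event["targetAddress"], event["targetUserName"])
    if key in repeated_login_failures:
        return "Compromised User Account?"
    return None
```
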
These steps provide a detailed guide for creating and configuring rules using active lists in a security system, with specific instructions on how to aggregate data and map fields accordingly. The provided document outlines a use case for detecting potential network breaches by monitoring denied IP addresses across different parts of a network using HP ArcSight software. Here's a summary of the key points: 1. **Use Case Description**:

  • A firewall has denied access to an IP address, indicating that this IP is considered suspicious or potentially harmful.

  • Later on in the network, the same IP address reappears, suggesting a breach could have occurred despite being previously denied.

2. **Required Content**:

  • An Active List with fields such as attacker IP, Firewall IP, Firewall Type, and Vendor.

3. **Rules Implementation**:

  • Rule 1: Detects IP addresses denied by the firewall and adds them to an Active List.

  • Rule 2: Monitors IP addresses permitted by the firewall and checks whether each one is in the Active List. If it is, the rule sets an Event Field to "Potential Breach !!!" (a brief sketch follows below).

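Here is a compact Python sketch of that two-rule pattern: denied sources go into the active list, and permitted traffic is checked against it. The field names (including flexString1 as the flagged Event Field) are illustrative choices, since the text does not say which field the rule populates.

```python
denied_ips = {}   # attacker IP -> (firewall IP, firewall type, vendor)

def rule1_denied(event: dict) -> None:
    """Rule 1: record a firewall-denied source in the active list."""
    denied_ips[event["attackerAddress"]] = (
        event["deviceAddress"], event["deviceProduct"], event["deviceVendor"])

def rule2_permitted(event: dict) -> dict:
    """Rule 2: flag permitted traffic whose source was previously denied."""
    if event["sourceAddress"] in denied_ips:
        event["flexString1"] = "Potential Breach !!!"   # illustrative stand-in for the Event Field
    return event

# Usage with placeholder events:
rule1_denied({"attackerAddress": "203.0.113.7", "deviceAddress": "10.0.0.1",
              "deviceProduct": "PIX", "deviceVendor": "Cisco"})
print(rule2_permitted({"sourceAddress": "203.0.113.7"}))
```
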
4. **Objective of the Lab**:

  • Understand Data monitors and Dashboards.

  • Learn about functions, characteristics, types, components, different types of reports, building custom reports, and setting up scheduled report jobs.

5. **Dashboards Overview**:

  • Similar to a car's instrument panel, dashboards display indicators related to the state of an IT environment.

  • They consist of data monitors or query viewers that summarize event flow and communicate effects on specific systems.

  • Setting up a dashboard involves creating a "canvas", setting up data monitors, and associating them with the dashboard.

6. **Data Monitors**:

  • Function similarly to rules by evaluating the event stream and system health statistics.

  • Focus primarily on summarizing event data graphically and in some cases, correlating it.

In summary, this lab is designed to help users understand how to use HP ArcSight for detecting potential network breaches through denied IP addresses and setting up visual dashboards to monitor these activities effectively. Data Monitors are tools used in systems management to analyze and track event streams, generate correlation events, and provide summary data for visualization through dashboards. They come in three types: Event-based, Correlation, and Non-event-based, each tailored to different use cases. Event-based Data Monitors focus on evaluating the event stream with filters, aggregating data into graphical formats that can be displayed in dashboards. Key features include Top Value Counts (Bucketized), Asset Category Count, Last N Events, Hourly Counts, Hierarchy Map, Last State, Event Graph, Geographic Event Graph, and Rules Partial Match. Correlation Data Monitors perform advanced analysis by applying filters or conditions to compare data streams using aggregation methods. They also generate correlation events through tasks like Event Reconciliation and Session Reconciliation. Moving Average and Statistics are calculated based on the analysis results. Non-event-based Data Monitors display information about internal statistics of ESM resources, such as memory utilization by ESM Manager and free space in the ESM Database. They also monitor connector status among other system metrics. Common components used across most Data Monitors include filters that select events based on specific use cases, time criteria for defining data windows, aggregate fields to group data, and value fields which are numeric within the ArcSight Schema. Time buckets are used in some monitors to limit the amount of data considered by organizing events into fixed-size intervals. These tools help ensure efficient data handling and analysis for system management purposes. This text provides a guide for creating dashboards and data monitors in an unspecified software environment at Hewlett-Packard (HP). The process involves several steps, including right-clicking on the admin's dashboard or data monitors, selecting 'New Dashboard' or 'New Data Monitor', configuring settings like name, type, filters, aggregate fields, and time fields. After setting up a Data Monitor, it can be added to a dashboard as a bar chart. The guide also mentions considerations such as waiting for data to display normally after changes and the potential need to restart the Data Monitor to update displayed information. This document outlines procedures for creating event-based dashboards using Hewlett Packard (HP) tools. The focus is on building multiple paneled dashboards, specifically tailored for monitoring login activities and firewall events within an internal network environment. ### Main Objectives of the Lab The primary objectives of this lab are to: 1. Understand the components involved in the report workflow. 2. Identify different types of reports that can be generated. 3. Create a custom report using available data sources. 4. Set up a scheduled job for automated reporting. ### Creating Dashboards The document provides detailed steps on how to configure various dashboard panels: 1. **Top Failed Login by User**: A bar chart displaying the top failed login attempts, filtered by user name, aggregated from `osLogin.events` and `IdentityView_V2.0.events`. 2. **Login Activity by Result**: A pie chart showing the distribution of login outcomes (success, failure, attempt), using the category 'Outcome' from `demo.events`. 3. 
**Last 10 Failed Login**: A table listing the last 10 failed logins with fields including priority, name, user name, host name, and address. 4. **Login Statistics**: A tile displaying average login statistics grouped by outcome category. ### Additional Use Case: Firewall Dashboard For firewall activity monitoring: 1. **Top 10 Target Addresses**: A bar chart showing the most frequent target addresses from `demo.events` and `Arcexpressdemo.events`. 2. **Port Usage**: An event graph visualizing connections using identifiers such as attacker address, event target address, and target port from `express-SP1.events`. 3. **Firewall Connections Status**: A tile displaying average connection priority status grouped by outcome category. ### Technical Details and Guidelines Each dashboard setup involves selecting appropriate data monitor types (e.g., Top Value Count for charts) and configuring event fields according to the specific requirements of each panel. The document emphasizes using `ReplayConnector` for aggregated events, ensuring that relevant datasets are selected based on predefined field mappings and templates provided in the tool documentation. ### Conclusion and Training Acknowledgment The document concludes with a thank you note and information about further training opportunities related to HP ArcSight Proof of Concept Boot Camp. It is designed for users seeking to enhance their skills in generating custom reports using HP tools, providing valuable insights into internal network activities through interactive dashboards. This text provides a detailed guide on creating reports using specific software tools such as PDF, Excel, RTF, and CSV formats for data collection from an ESM (Extended Systems Management) database. The process involves gathering relevant data, developing it in predefined report templates, scheduling or running the report immediately based on user needs. Data can be collected by executing queries directly within the ESM Database or using pre-existing trends. Basic and custom report templates are available; while basic templates are standard, more complex requirements may necessitate creating a custom template with extended knowledge of the template editor. For proof of concept (POC) environments, modifying existing templates to include customer logos and creating relevant queries is recommended over full custom template creation. To build a specific example report showing top 10 firewall events, one must follow several steps: first, access the REPORTS resource through the Navigator panel; then, navigate the resources tree to select "ArcSight System," specifying a chart with a table in portrait format and copying this selection to admin's template library. Editing the template involves opening it in the Report Designer application where adjustments such as adding a customer logo can be made. The process requires at least two steps: minor editing of the template for the logo placement, followed by creation of a query that feeds data into the report engine using resource-based SQL logic. This document outlines steps for creating reports using a template in the software environment by Hewlett-Packard Development Company, including selecting a logo, building queries on an ESM database, and saving changes. Here’s a summary of the instructions provided: 1. **Selecting a Logo:**

  • Select another logo if desired, or use the "Browse" option to pick one from the available logos.

  • PNG format is recommended for better quality; smaller images are preferred.

  • Use customer website logos if applicable.

  • Check the "Embed" option and click OK after selecting an appropriate logo.

2. **Saving Template Edits:**

  • When prompted to save edits, select "Yes" to save the template.

  • Apply changes in the Console Inspect/Edit panel by clicking "Apply." Changes are not saved until applied.

3. **Building Queries:**

  • Ensure you have a clear idea of what data will be reported on before building queries.

  • Under "Reports" in the Navigator Panel, select the "Queries" tab and right-click to create a new query.

  • Provide a name for the query, choose "Query on Events," and set start and end times.

  • Use SQL logic with options like Select, Group by, Order by, and functions such as Count, Max, Min, Average, Sum, Time.

  • Add the fields to include in the report, keeping the number of fields small enough that the report still fits comfortably on an A4 portrait page.

  • Aggregate data using Event ID with "Count" aggregation.

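Conceptually, the query built in these steps is a grouped count with a descending sort and a row limit. Below is a minimal Python sketch of that logic; the event records and the grouping field are placeholders, and the real query runs inside the ESM database rather than in code.

```python
from collections import Counter

def top_n(events, group_field, n=10):
    """Count events per value of group_field (Count on Event ID), order descending,
    and keep the top n rows, mirroring the Select / Group by / Order by choices above."""
    counts = Counter(e[group_field] for e in events)
    return counts.most_common(n)

sample = [{"targetAddress": "10.0.0.5"}, {"targetAddress": "10.0.0.5"},
          {"targetAddress": "10.0.0.9"}]
print(top_n(sample, "targetAddress"))   # [('10.0.0.5', 2), ('10.0.0.9', 1)]
```
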
These steps guide users through creating visually appealing and informative reports tailored to specific needs, based on clear understanding of required data and proper use of software functionalities. To create a report using a template and query in a software or system, follow these steps: 1. **Add Order By Columns**: Left click on "Add 'ORDER BY columns" and choose the field you want to order the report by (e.g., Event ID). 2. **Aggregation and Filtering**: Apply the same aggregation as used in the Select component. For ordering, select Ascend (ASC) or Descend (DESC). 3. **Adding Filters**: Go to the "Conditions" tab and add a new condition or reuse an existing filter from Admin's Filters, selecting "Firewall Events". Click OK to apply changes. 4. **Creating the Report**:

  • In the Navigator Panel under Reports, right-click and select "New Report".

  • Provide a name for the report which will appear in the title.

  • Select the "Template" tab and choose your previously created template.

  • On the "Data" tab, select your created query.

  • On the "Chart" tab, select a chart type from the drop-down box and move appropriate fields for X-Axis and Y-Axis.

  • Finally, on the "Parameters" tab, adjust any parameters as needed.

These steps outline the process of customizing a report using both template and query in the software or system, ensuring data is properly ordered, filtered, and visualized according to user requirements. This document provides instructions for creating a report in ArcSight, focusing on showing the top 10 IDS (Intrusion Detection System) events. The steps include setting preferences such as unselecting default options and adjusting row limits for tables and charts. The report should be run for data from today, with an option to adjust the time window based on performance considerations. It is suggested to schedule this report to run daily, considering appropriate times and factors that could affect its execution. Additionally, there's information about ArcSight workflows in a security context. These workflows involve processes like informing about incidents, escalating issues, and tracking responses. The document also mentions the roles of workflow resources such as Annotations and Cases, which are used to collaborate on incident or event tracking. This setup is relevant for Security Operations Centers (SOC) where policies and rules guide real-time processing with advanced alarms, notifications, and threat escalation mechanisms. This text discusses several aspects of event management within the ArcSight system, focusing on annotations, cases, stages, notifications, and creating cases manually. The content outlines how events can be tracked, assigned to users or groups for escalation, flagged for follow-up, compared with annotated events, used as a triage tool before being potentially elevated to cases, and managed through various stages until resolution. Notifications are mechanisms for sending information about certain conditions, initiated via automatic actions in rules, system alerts, or when cases are opened or modified. Creating a case manually involves opening the active channel, selecting relevant events, adding them to the case, providing details like name, stage, consequence, impact, and progressing through stages until resolution is achieved. This document provides a step-by-step guide on how to automatically create cases in a system. It starts with instructions on selecting the "Attachment" tab, adding content for investigation, and applying changes. The process involves creating a new case by opening an action trigger, assigning relevant details such as name, description, group, severity level, and user group. Once applied, the rule generates a case automatically when triggered. The document then moves on to demonstrate how to modify an existing rule to include automatic case creation based on specific events or thresholds, like unauthorized logins after office hours. It outlines the necessary details required for this modification, including previous rule information, timestamp variable usage, and attributes for follow-up actions. After explaining the process of creating a case automatically with rules, the document shifts focus to users and their permissions within ArcSight systems. It explains how users are grouped based on roles or other groupings, and discusses methods to manage these groups by setting permissions and passwords, enabling or disabling login functionality, and granting access through ACLs for specific resources. The provided text discusses various aspects of ArcSight's user management, including user groups, permissions, and user types, as well as how to create a new user group with restricted access for the "Demo Live" active channel. 
Here is a summarized version of the key points: 1. **User Management in ArcSight**: All user group memberships and permissions are stored in the ArcSight Database. When users log in, they can perform operations based on the permissions granted through their group memberships. 2. **Creating Users and Initial Permissions**: When an ArcSight user is created, they automatically gain access to a set of resource groups. They can store, create, edit, or delete resources within their groups without affecting others' resources. 3. **User Types Supported**: There are five main types of users supported: Normal User, Management Tool, Forwarding Connector, Archive Utility, and Connector Installer. Only Normal Users have full access to the ArcSight Console or Web client. 4. **Managing Access Control Lists (ACLs)**: The ACL Editor allows user groups to view or edit permissions on resources, operations, other user groups, events, and sortable field sets. This helps control what actions are allowed within a group for different types of data handling tasks. 5. **Creating a New User Group with Restricted Access**: To create a new user group with restricted access to the "Demo Live" active channel (a conceptual sketch of this permission model follows the steps below):

  • Navigate to the Users resource from Navigator.

  • Create a new user group under 'Shared' | 'Custom User Groups', naming it 'My_User_Group'.

  • Edit this newly created group to restrict its permissions according to specific needs for the "Demo Live" channel.
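
To make the group-based permission model above more concrete, here is a minimal conceptual sketch in Python. It is an illustration only, not ArcSight's actual API: the class names, resource URIs, and helper methods are all hypothetical.

```python
# Conceptual sketch of group-based access control, loosely modelled on the
# ACL behaviour described above. All names and structures are hypothetical
# and do not correspond to any real ArcSight API.
from dataclasses import dataclass, field


@dataclass
class UserGroup:
    name: str
    readable: set = field(default_factory=set)   # resources the group may view
    writable: set = field(default_factory=set)   # resources the group may edit


@dataclass
class User:
    name: str
    groups: list = field(default_factory=list)

    def can_read(self, resource: str) -> bool:
        # A user inherits read access through any of their group memberships.
        return any(resource in g.readable for g in self.groups)

    def can_write(self, resource: str) -> bool:
        return any(resource in g.writable for g in self.groups)


# A restricted group that may only view the "Demo Live" active channel.
demo_group = UserGroup(
    name="My_User_Group",
    readable={"/All Active Channels/Shared/Demo Live"},
)

analyst = User(name="analyst01", groups=[demo_group])
print(analyst.can_read("/All Active Channels/Shared/Demo Live"))   # True
print(analyst.can_write("/All Active Channels/Shared/Demo Live"))  # False
```

In ArcSight itself this mapping is stored in the database and edited through the ACL Editor rather than in code; the sketch only shows why restricting a group's readable resources is enough to limit what its members can see.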

These steps and descriptions highlight how ArcSight manages user access and permissions, which is crucial for maintaining security and operational efficiency within the system. The text provided outlines a training session titled "HP ArcSight Proof of Concept Boot Camp Training" which covers the basics of setting up and demonstrating a Proof of Concept (PoC) with HP ArcSight Solution Products. It includes details on prerequisites, such as having a PoC scoping document, understanding the sales process, completing an ArcSight Evaluation Agreement and Non-Disclosure Agreement, and defining a clear scope of engagement. The session also covers the activities involved in setting up use cases that demonstrate how the solution meets customer needs, including product installation, event gathering, content creation, and presenting the solution. Additionally, it provides information on typical schedules for the PoC process and emphasizes the importance of thorough preparation through tools like a PoC scoping document. This text outlines guidelines and considerations for conducting a Proof of Concept (PoC) for HP's security solutions. The document covers several key areas including use cases, event sources, SmartConnectors, FlexConnectors, and basic infrastructure requirements before the PoC is initiated. Here's a summary of each section: 1. **Use Cases**: Before starting the PoC, specific use cases need to be defined that align with business goals. These should demonstrate the ability to deliver high-level functionalities such as perimeter security, PCI compliance, and insider threat management. The use cases must be specific, deliverable, achievable, and relevant to the customer's needs. 2. **Events**: Understanding what event sources will be available during the PoC is crucial. This includes whether these events are from live or test systems, which device types they cover, and what product versions are being used. The number of events per second (EPS) should be considered to ensure that the system can handle sustained high EPS, as this might affect performance in a production environment. 3. **SmartConnectors**: Reviewing the SmartConnector Guides is important to understand whether supported devices will provide the event data necessary for the use cases. Configuration requirements such as providing administrator privileges and setting up configurations should be addressed before starting the PoC. 4. **FlexConnectors**: If deemed necessary, FlexConnectors can be used, but they should only be considered after completing training and with a PoC lasting 4 to 5 days. A single custom event source is recommended to demonstrate capability in this proof of concept environment. 5. **Basic Infrastructure**: Ensure that all components have proper IP addresses, subnet masks, default gateways, hostnames, domain names, SMTP servers, and NTP servers configured correctly. Also, verify that the necessary licenses (ESM, Logger, Express, and Connector Appliance) are in place to run these components during the PoC. 6. **Device Changes**: Configure devices in advance or plan for changes as per the organization's change control process. The device administrators should be available to assist without requiring extensive knowledge of each device type. This document is intended for internal use by HP and its partners, highlighting critical steps and considerations that must be taken into account when preparing and executing a PoC for HP security solutions.
This document outlines the key considerations for conducting a Proof of Concept (POC) with Hewlett-Packard (HP) products, focusing on SmartConnectors and FlexConnectors. The POC duration can vary based on the number of use cases (1 to 3) and the complexity of the project. A minimum of two days is required for one use case without flex connectors, while three to four days are needed for projects with multiple use cases and no flex connectors. For complex projects involving more than three use cases or including a FlexConnector, POC durations can extend up to five days. Before starting the POC, essential tools such as Winscp for secure file transfer, Putty for secure sessions on hosts, Baretail for log file viewing/monitoring, MD5 for generating and verifying message digests, and Wireshark for network protocol analysis should be available. Additionally, having all necessary binary files including SmartConnectors, Express upgrades, patches, console applications, Logger software, and Connector Appliance is crucial. During the POC phase, initial meetings with clients are important to confirm availability of device feeds, device administrators, and configured network access. Server pre-requisite checks should include reviewing hardware specifications, OS, CPUs, RAM, as well as checking for necessary Unix RPMs. Storage rules dictate setting an event data retention period to five or seven days instead of the default 30 days to save space. Lastly, if specific events are missing or difficult to generate during testing, use the Test Alert Connector to address these issues. This text provides a comprehensive set of guidelines for conducting a Proof of Concept (POC) with the HP ArcSight product suite. Key points include preparation prior to deployment, event generation, troubleshooting during the POC phase, and post-POC follow-up procedures. The document emphasizes the importance of detailed planning and execution in order to effectively showcase content, create ad hoc events, specify field values for replay, and manage licenses. Preparation includes defining deliverables such as SmartConnectors and FlexConnectors, ensuring device administrators are ready, and having all necessary documentation and tools readily available. During the POC itself, it is crucial to take detailed notes on how content was built and any encountered problems, while also preparing screen shots for future reports. Special attention is given to managing licenses, particularly if a partner license is used; temporary licenses should be deleted before leaving the POC site. Post-POC activities involve following up with clients regarding signed Purchase Orders (POs), returning trial servers or other equipment, and summarizing lessons learned for potential future POCs. The document concludes with information on upcoming training sessions related to HP ArcSight sizing and architecture, underscoring the importance of thorough preparation and execution in successful POC engagements. The text discusses the architecture and functionality of HP ArcSight Logger, a network-based system designed for collecting, processing, and analyzing security event data. Key features include CEF (Common Event Format) to raw log conversion, real-time correlation and alerting, threat response management, and long-term storage solutions. The HP ArcSight Logger can operate in several modes: 1. **CEF to CORRe and Logger Archiving**: Events are first converted from CEF format to a more standardized format (CORRe) before being archived in the Logger. 
Subsets of these events may also be sent directly to CORRe for further processing or storage. 2. **Logger Archiving**: Raw logs are stored initially in the Logger, then processed and forwarded to the Connector, which in turn sends them to CORRe. 3. **Connector Dual-Homed**: The Connector is designed to communicate with both the Logger and CORRe, allowing for flexible data handling and forwarding options. 4. **Logger as a Repository for ESM (Enterprise Security Manager)**: Events processed by ESM can be stored in the Logger, while ESM handles short-term storage and correlation. The Logger serves as a repository for long-term storage and unified search capabilities. 5. **ESM-Logger Hybrid Deployment**: In this setup, Loggers function as repositories that forward filtered events to upstream or global ESM Managers based on priority. This hierarchical deployment allows for efficient handling of high volumes of data. 6. **Universal Log Management That Scales**: The Logger can work in a peer network configuration with multiple Loggers collaborating to handle large input rates and distribute parallel queries, enabling scalability across various operational departments like security, IT operations, and applications. The system supports integration with other ArcSight components such as ESM Express, providing comprehensive threat management and compliance solutions within an enterprise environment. The document outlines different architectures and scenarios for log management solutions, ranging from basic logger systems to complex scalable SIEM solutions. It compares various components like Loggers, Connectors (SmartConnectors), Express, and ESM (Enterprise Security Manager). Each scenario has its own advantages and disadvantages depending on the specific use case and data handling requirements:

  • **Scenario #1: Logger** is a basic log management system that handles syslog and file logs but lacks support for structured data. It offers high inbound EPS performance and can archive events, suitable for IT operations use cases and compliance tick box requirements.

  • **Scenario #2: Connector Logger** extends the functionality of the Logger by supporting structured data through SmartConnectors, enabling better analysis and reporting. However, it has limitations on indexing and processing speed due to its focus on both unstructured and structured data handling.

  • **Scenario #3: Connector Express** combines log management with basic SIEM capabilities, offering a set menu for deployment which is license-based and suitable for small use cases. It lacks significant log retention capability as it is limited by onboard storage.

  • **Scenario #4: Connector ESM** is the most versatile and scalable solution among the scenarios, featuring a modular architecture that can be tailored to customer requirements. However, it may not be ideal for traditional log management due to its software-based nature and focus on enterprise-class SIEM capabilities.

Each scenario represents a progression in functionality from basic logging to comprehensive SIEM solutions, with varying degrees of support for structured data, scalability, and ease of deployment. This document discusses the architecture of HP's ArcSight Connector Server, specifically comparing two connector deployment options: routing events through the Logger first, or feeding Express directly. The primary purpose of both options is to collect and manage data for security analysis, with a focus on actionable events. The Logger-first option is more cost-effective but has limitations in terms of scalability and flexibility. The Logger collects everything and sends only a subset of 'actionable events' to Express; this forwarding can be configured independently but requires an ArcMC Connector Server for management. The Logger-first option is simpler in architecture and can scale better, with multiple Loggers forwarding data to one Express. However, it may not provide raw event data immediately, limiting its use in certain scenarios. In contrast, the Express-first option offers more flexibility and configurability, allowing independent setup of feeds to both Express and Logger without additional management servers. This is advantageous for non-repudiation requirements but comes at a higher administrative cost due to increased bandwidth usage and complexity. The document also details the technical specifications for running ArcSight on various operating systems, including Windows Server versions, Red Hat Enterprise Linux (RHEL), Novell SUSE Linux, Sun Solaris, and IBM AIX, all of which are supported by HP ArcSight. Each platform is certified or supported based on rigorous testing and certification to ensure compatibility with the security management software. Finally, there is a discussion about dimensioning for the Logger-first option, suggesting that it may not be appropriate for high-volume environments of 10K EPS or higher due to its raw event limitation in Express mode. This implies a need for careful planning and setup based on the specific data volume and requirements. This document outlines various aspects related to performance, storage, communication modes, information gathering, dimensioning, and architecture for a system involving Loggers and Express/ESM. Key points include:

  • **Performance**: Without full-text indexing, the system can handle up to 10K events per second (EPS), but this drops to around 5 to 6 Keps when full-text indexing is enabled. For high event rates, up to 6,000 EPS can be managed by forwarding from the Logger to Express/ESM.

  • **Storage**: The system typically uses 1765 bytes per event for a conservative storage estimate. Example calculations show that at a retention period of 60 days and with structured data, the total storage needed varies significantly across different EPS configurations (e.g., from roughly 0.7TB to 136TB); a worked example appears after this list.

  • **Communication Modes**: The system supports SSL (uncompressed), console, cert, FIPS 140-2, XML via SSL, Express/ESM connector, and SmartMessage. Certificates are required for all of these modes.

  • **Information Gathering**: To make informed decisions on possible architectures, one needs to gather data including event throughput requirements, log retention requirements, high availability/failover requirements, additional customer requirements, bandwidth considerations, MSSP considerations, regional/global considerations, and compliance requirements.

  • **Dimensioning**: This involves determining the number of devices that should be covered by Express or ESM, how many endpoints are worth collecting from, whether events arrive around the clock or follow a diurnal traffic pattern, the log volume in GB per day, and the source types such as firewalls, proxies, Windows systems, and applications.

  • **Architecture**: This includes considerations for multiple data centers (DCs), remote sites management with software connectors, high availability and disaster recovery (HA/DR) strategies, global site implications including legal issues related to data holding, and WAN data handling with compression or encryption based on reliability requirements.
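
To make the storage figures referenced in the Storage bullet concrete, here is a small back-of-the-envelope calculation using the 1,765-bytes-per-event estimate. The EPS values below are illustrative choices, not figures from the source, and the result ignores compression and indexing overhead.

```python
# Back-of-the-envelope storage sizing using the conservative 1,765 bytes/event
# figure quoted above. The EPS values are illustrative only.

BYTES_PER_EVENT = 1765
SECONDS_PER_DAY = 86_400


def storage_tb(eps: int, retention_days: int) -> float:
    """Raw (uncompressed, un-indexed) storage requirement in terabytes."""
    total_bytes = eps * SECONDS_PER_DAY * retention_days * BYTES_PER_EVENT
    return total_bytes / 1e12


for eps in (100, 1_000, 5_000, 15_000):
    print(f"{eps:>6} EPS, 60-day retention: {storage_tb(eps, 60):6.1f} TB")

# Sample output:
#    100 EPS, 60-day retention:    0.9 TB
#   1000 EPS, 60-day retention:    9.1 TB
#   5000 EPS, 60-day retention:   45.7 TB
#  15000 EPS, 60-day retention:  137.2 TB
```

The upper end of this illustrative range comes out close to the 136TB figure quoted above; a real sizing exercise would also factor in compression, indexing overhead, and peak versus sustained EPS.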

The document provides detailed guidance on how to gather information and make architectural decisions for implementing a system like Express/ESM in an organization's infrastructure. It then outlines several reference architectures combining ESM, Logger, and an optional or mandatory ArcMC, all part of the ArcSight product suite. Each architecture serves specific customer requirements and has its own limitations and advantages. Here's a summary of each architecture based on the provided information: 1. **Architecture 2 (Basic): ESM, Logger, and Optional ArcMC**

  • **Requirements**: Low event throughput, few unique sources, medium to long-term retention policy, single datacenter events, and correlation capabilities.

  • **Limitations**: High dependence on CPU cores and SAN I/O for high event processing rates; storage costs increase with longer retention periods.

  • **Advantages**: Provides cost-effective long-term storage through the Logger compared to ESM, packaged as a turn-key solution named ArcSight Express.

2. **Architecture 3 (Basic): ESM, Logger, and Mandatory ArcMC**

  • **Requirements**: Same as previous architectures but with a mandatory ArcMC appliance for remote management of SmartConnectors.

  • **Limitations**: High dependency on Logger Forwarder and Manager CPU cores; maximum event processing rate is capped at 2,500 EPS.

  • **Advantages**: Still offers more cost-effective long-term storage than ESM but with added scalability features due to the mandatory inclusion of ArcMC.

3. **Architecture 4 (Basic): ESM, Multiple Loggers, and Mandatory ArcMC**

  • **Requirements**: Similar to previous architectures but includes multiple loggers for load balancing across a peer network.

  • **Limitations**: High dependency on Logger Forwarder and Manager CPU cores; maximum event processing rate is capped at 2,500 EPS without external SAN storage or other optimizations.

  • **Advantages**: Offers enhanced scalability through the use of multiple loggers but with similar limitations regarding high dependencies on system resources.

Overall, these architectures are designed to handle specific customer needs and constraints related to event processing rates, data retention, cost efficiency, and complexity in managing large volumes of security-related events from various sources within a single datacenter. This document outlines the architecture, typical customer requirements, limitations, advantages, high availability (HA) mechanisms, and the use of the HP Intelligent Power Distribution Unit (iPDU) for failover in a complex system involving multiple components like ESM (Enterprise Security Manager), Loggers, Connectors, and Appliances. **Architecture 4 (Basic): ESM, Multiple Loggers, and Mandatory ArcMC**

  • **Typical Customer Requirements**:

  • Medium to High event throughput

  • Medium to High number of unique event sources

  • Medium to Long Term Retention Policy

  • Event Sources located in multiple datacenters

  • Correlation and Analysis Capabilities

  • **Limitations**:

  • Connector Appliance is mandatory for remote management of SmartConnectors.

  • Events Per Second (EPS) to Manager are limited by the number of Logger Appliances multiplied by the Logger Forwarder capacity.

  • CPU Cores and Database SAN I/O utilization can also limit EPS to the Manager.

  • **Advantages**:

  • Long term storage via Loggers is more cost effective than via ESM.

  • Scalable architecture through addition of additional Loggers.

**High-Availability (HA) Mechanisms:** 1. **Connector HA**:

  • Depends on the collection mechanism: Push (via a load balancer or cluster software), Pull (via cluster software).

  • Connector HA is unusual and can be complex and cost-prohibitive, but failover to multiple destinations typically suffices.

2. **Appliances HA**:

  • Achieved by having the Connector send events to two Connectors or Appliances.

  • For example, Logger via Connector sends events to two Loggers (Logger 1 and Logger 2), and Express via Connector sends events to two Express (Express 1 and Express 2).

3. **ESM on Oracle High Availability**:

  • Explained in detail with configurations for PRIMARY and SECONDARY ESM instances, including IP addresses, interfaces, file systems, interlink cables, and distributed replicated block devices.

4. **HA and iPDU (optional)**:

  • Uses the iPDU to power off one machine if both get into a split-brain state where each thinks it is the primary, ensuring smooth failover from one ESM instance to another.

  • Supports only HP iPDU products.
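
As a rough illustration of the "failover to multiple destinations" idea above, the sketch below simulates a collector that tries a primary destination and falls back to a secondary one. It is a conceptual simulation only: real SmartConnectors handle failover through their destination configuration, and the function names and random failure model here are invented for the example.

```python
# Conceptual simulation of destination failover, as in the "failover to
# multiple destinations" approach mentioned above. Illustration only --
# real SmartConnectors implement this via their destination configuration.
import random


def send_event(destination: str, event: dict) -> bool:
    """Pretend to deliver an event; fail randomly to simulate an outage."""
    delivered = random.random() > 0.3
    status = "delivered to" if delivered else "FAILED at"
    print(f"{status} {destination}: {event['name']}")
    return delivered


def deliver_with_failover(event: dict, destinations: list) -> None:
    """Try the primary destination first, then fall back to the others."""
    for dest in destinations:
        if send_event(dest, event):
            return
    print("no destination reachable; event would be cached locally")


deliver_with_failover({"name": "login failure"}, ["Logger 1", "Logger 2"])
```

The same pattern applies to the Express example above, where the Connector feeds Express 1 and Express 2.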

This document provides an overview of how HA is implemented in complex systems with multiple components, highlighting the use of specific technologies like connectors, appliances, and iPDU for maintaining system availability and reliability. This text provides an overview of a High Availability (HA) architecture used in a system with two nodes, referred to as primary and secondary. The purpose of this architecture is to ensure uninterrupted operation when the primary node becomes incapacitated. Key components include DRBD (Distributed Replicated Block Device), which mirrors disk data from the primary to the secondary node for fault tolerance, and iPDU (Intelligent Power Distribution Unit) or network-based mechanisms like SSH for failover in case of hardware or software issues preventing resource release by the primary. The HA architecture operates as follows: 1. ESM (Enterprise Security Manager) runs on the primary node, managing critical services. 2. The secondary node is in a standby mode and does not run the ESM application initially; it remains operational to take over if the primary fails. 3. Data from the primary node is replicated at the block level using DRBD, ensuring that the secondary can continue operations with up-to-date data upon failover. 4. Both nodes have network interfaces (eth0 for intranet connection and eth1 for disk mirroring), allowing communication and replication of data between them. 5. If a failure occurs on the primary node, the HA mechanism on the secondary detects this event automatically. The secondary then assumes the role of the new primary, restarts the ESM application, assigns the service IP address to eth0, and restores services after a brief interruption. 6. Components such as DRBD (to maintain data consistency), iPDU (for power redundancy or network-based failovers), and SSH for reboot control are used to ensure independence from the primary hardware/software, providing resilience in case of failure. 7. HA architecture ensures minimal downtime and continuity of services through automatic failover mechanisms that do not rely solely on manual intervention, thus enhancing system reliability and performance. This text appears to be documentation or a user guide related to High Availability (HA) features in an IT system, specifically within the context of Hewlett-Packard's Enterprise Security Manager (ESM) with HA module capabilities. The document discusses several aspects including failover mechanisms, use of HP Intelligent Power Distribution Unit (iPDU), and software connectors for management purposes. 1. **High Availability (HA):** This section talks about a cluster manager called "Pacemaker" which manages processes and resources across multiple servers in a clustered environment. In case the primary server fails, Pacemaker can take over by promoting another server to be the new primary, ensuring continuous service operation. The HA module uses an HP iPDU for improved availability during failover scenarios. 2. **iPDU Usage:** It is recommended to use an HP Intelligent Power Distribution Unit (iPDU) which allows remote control of outlets through a network interface. This helps in managing power distribution in a data center, and it can be paired with STONITH iPDU agent commands for turning servers on or off based on the cluster management decisions. 3. **Common Questions Section:** This part answers various questions regarding connectivity options, number of connectors available, and what features are included with different versions (Express, ESM, etc.).
For example, it clarifies that L7505x does not have physical connectors but includes software connectors recommended for management through ArcMC or other management tools. 4. **Sizing Scenario:** This scenario involves setting up a SIEM system for a large national organization with diverse systems (AD servers, Unix servers, firewalls, etc.). The text provides a sizing example based on varying event rates per device type (EPS) and desktop counts, along with retention requirements for log data. In summary, the document serves as an internal technical guide or documentation to configure and manage High Availability features in HP's ESM systems using HA modules and iPDU tools. It also provides detailed information about hardware connectivity and software management components needed based on specific scenarios (like SIEM project sizing). The provided document outlines the setup and requirements for a SIEM project within a Small Bank, focusing on data collection, storage, and processing. Here's a summary of key points from each section: 1. **Scenario Description**: The scenario involves setting up a SIEM system for a small bank with various components such as 100 Windows servers, 150 network devices, 50 test systems, and a retention requirement for at least 60 days online storage, 120 days offline archive storage, and 7 years for correlated events. 2. **Sizing Recommendation**: Based on the provided sizing calculator results (380 EPS from 100 Windows servers), the recommendation includes:

  • Using AE7410 for data collection.

  • One C3500 model for network devices.

3. **SIEM Project Requirement**: The SIEM solution is designed to collect security-related events across the enterprise, store them for analysis, and provide local correlation capabilities. It covers three layers: Data Collection, Storage, and Processing. Key retention requirements are 60 days online storage, 120 days offline archive, and 7 years for correlated events. 4. **Sizing Calculation**: For the Site 1 EPS estimation, specific models are recommended, such as the C5500 (for up to 12k EPS per data center) and the L7500-SAN (with specifications tailored to online storage of at least 10TB and an offline archive of 20TB, based on the model's capabilities). This document is aimed at internal use within HP for SIEM implementation in a bank environment, detailing requirements, sizing, and recommendations for achieving effective security event management. This document outlines various aspects of using HP ArcSight for different use cases, focusing on how to implement and achieve these through a proof-of-concept boot camp training. The main objectives include understanding the importance of use cases in software and systems engineering, walking through vertical use cases, ensuring feasibility and achievement, providing best practices, determining appropriate content, checking results against requirements, and more. It also discusses the definition and types of use cases, as well as how to refine them using a decision tree approach. The document is intended for HP and partner internal use and highlights the importance of perimeter security, insider threat management, real-time event visualization, intellectual property protection, regulatory compliance, trigger actions post rule firing, customized reporting, and baseline comparisons. This document outlines various technical approaches and use cases for network security within Hewlett-Packard, focusing on different aspects such as environmental usage, operations, perimeter defense, insider threats, and finance. Key areas include system targeting, vulnerability analysis, authentication breaches, configuration management, account management, data leakage detection, and privileged user monitoring. The information is intended for internal use by HP and its partners, with the aim of enhancing regulatory compliance and network security through detailed technical analysis. This document outlines various use cases demonstrating the capabilities of HP's ArcSight Enterprise Security Manager (ESM) in compliance monitoring for financial firms, corporate policy compliance, insider threat management, and regulatory compliance. The use cases include automated monitoring, reporting, log retention, and forensics to ensure adherence to specific regulations such as SOX, JSOX, Basel, or other country-specific laws applicable to financial institutions. The document covers multiple scenarios including: 1. **Compliance Monitoring**: This includes automated monitoring of transactions against a single account, which may be suspicious due to the large number of small transactions being processed. This is crucial for preventing fraud and ensuring compliance with regulations. 2. **Corporate Policy Compliance**: Examples include using a "jump server" (Golden Host Usage Policy), detecting P2P software usage, identifying inappropriate software configurations, bypassing proxy or email gateway restrictions, direct logins as root, instant messaging use, browsing of inappropriate content in emails or on the web, and management of shared user accounts.
3. **Regulatory Compliance**: Specific examples include monitoring administrator activities, configuration changes, attacks targeting systems, vulnerability assessments, account management, local admin/root usage, database/application authentications, system restarts, and disallowed port activity. 4. **Use Cases for Leveraging Zones, Assets, and Asset Categories**: This includes monitoring network traffic (e.g., inbound connections accessing internal protected address space), VPN traffic, all firewall traffic, and created user accounts with specific authentication behaviors. These use cases are designed to help financial firms manage compliance, prevent data leakage, detect insider threats, and ensure that corporate policies are upheld across various technological platforms. This document outlines several use cases related to monitoring user accounts, including the deletion of user accounts, modification of user account settings, and more. The main focus is on leveraging zones and assets for authentication purposes and creating active channels with filters to monitor outbound connections to public address spaces. The first section discusses how to create an Active Channel with a filter that continuously evaluates events during the last 30 minutes, filtering based on source and destination zones categorized as 'Protected' and 'Public Address Space', respectively. This is crucial for monitoring all outbound connections in a controlled manner. Next, there are sections dedicated to cases where multiple users might be using the same account from different geographical locations, which can be addressed through several strategies: 1. Creating a rule that fires if a user logs in from different country codes within an hour. 2. Implementing active lists to track user activity and determine if they log in from the same country; entries in the active list automatically expire every 4 hours. 3. Creating rules that trigger when multiple users try to access systems using the same account or IP address, with considerations for network authentication versus application authentication. Overall, these use cases are aimed at enhancing security through detailed monitoring and proactive management of user accounts and activities across various platforms. This text appears to be part of a documentation or guide related to cyber security and threat intelligence tools, specifically for detecting and managing user activity in a network environment using the HP ArcSight platform. Here's a summary of the key points mentioned: 1. **Auto-Expire Function**: The system automatically expires an active list every 4 hours to manage user data freshness. 2. **Chaining Rules with Active List & Variable**:

  • **First Rule**: This rule detects when users successfully log in and adds their username and source country code to the active list.

  • **Second Rule**: As a lightweight rule, it not only detects user logins but also checks values in the active list using a variable. The variable can be local or global, depending on performance and optimization needs. It extracts two values from the list: the username and the country geo-code.

  • The second rule is triggered when a successful login is detected and both conditions are met: the username already exists in the active list, and the country code differs from the previously recorded one for that same username.
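
A minimal sketch of this two-rule chain in plain Python may help make the logic concrete. It only mimics the behaviour described above; real ArcSight rules, active lists, and variables are built in the Console, and the event field names and list schema here are assumptions.

```python
# Plain-Python mimic of the two chained rules described above: rule 1 records
# the country seen for each successful login in an "active list"; rule 2 fires
# when the same user later logs in from a different country. The field names
# and the 4-hour expiry come from the description; everything else is assumed.
import time

ACTIVE_LIST_TTL = 4 * 60 * 60          # entries auto-expire after 4 hours
active_list = {}                        # username -> (country_code, inserted_at)


def rule_1_record_login(event: dict) -> None:
    """Rule 1: on successful login, add user + source country to the active list."""
    if event["outcome"] == "success":
        active_list[event["username"]] = (event["country_code"], time.time())


def rule_2_detect_country_change(event: dict) -> bool:
    """Rule 2: fire when a user already on the list logs in from another country."""
    if event["outcome"] != "success":
        return False
    entry = active_list.get(event["username"])
    if entry is None:
        return False
    country, inserted_at = entry
    if time.time() - inserted_at > ACTIVE_LIST_TTL:
        del active_list[event["username"]]   # emulate the 4-hour auto-expire
        return False
    return event["country_code"] != country  # same user, different geography


rule_1_record_login({"username": "jdoe", "country_code": "ZA", "outcome": "success"})
print(rule_2_detect_country_change(
    {"username": "jdoe", "country_code": "DE", "outcome": "success"}))   # True
```

Running the two calls at the bottom prints True because the same username reappears with a different country code while its active-list entry is still fresh.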

3. **Variable Usage**: This variable can be used locally or globally depending on performance requirements. It extracts values related to the user (username) and their location (country code). The condition checks if the extracted value equals the target field from the event, and also ensures the country code is different from previous records for that username. 4. **Performance Optimization**: Depending on the scenario, the variable can be set as local or global to optimize performance by reducing redundant data processing. 5. **Threat Intelligence**: The application aims to enhance malware detection through threat intelligence by integrating these features into its platform. This summary is based on the context provided and assumes that similar functionalities are typical for tools aimed at detecting and managing cyber threats in a network setting, using HP ArcSight or similar technologies. This text appears to be related to copyright notices and instructions for internal use at Hewlett-Packard (HP) regarding the creation of content to detect badge swipes by terminated employees. It contains a series of numbers followed by "© Copyright 2015 Hewlett-Packard Development Company, L.P." which indicates that the information is copyrighted. The text also specifies that the information contained herein is subject to change without notice and is restricted for use within HP and its partner companies. The instructions provided are as follows: 1. Create two Active Lists (likely databases or files) containing badge ID and employee information from instructor-provided .csv files. 2. Import these .csv files into the created Active Lists. 3. Ensure that the information is accurate and up-to-date, as it may be subject to change without notice. The repetition of the instructions with numbers 1429 through 1448 suggests a series of similar tasks or notices related to internal procedures within HP regarding security measures for terminated employees' badge swipes. This document outlines several internal resources for HP partners related to the ArcSight product line, including training materials, self-learning resources, and partner portals. Key points include: 1. **ArcSight Proof of Concept Boot Camp Training**: Provides training on how to use ArcSight products through a boot camp format. It is recommended to register via an email or phone call with Philippe Jouvellier at the Global Channel Partner Management Office. 2. **Resources for Downloading Demo VMs and Learning Materials**:

  • **ArcSight Demo VMs**: Available for download via FTP, requiring specific passwords for unzipping.

  • **HP ArcSight Self-Learning Resources**: Includes multiple resources such as partner portals, software support online, and HP ESP University with community features for learning and collaboration.

3. **Partner Portals**:

  • **HP Partner Portal**: A standard portal providing information on becoming a partner, accessing training and certification, and sales administration resources.

  • **HP Software Partner Central**: Focuses specifically on enabling partners for HP Software products by offering sales and marketing collateral, enablement tools, pricing guides, and demo software.

  • **HP ESP Partner Central**: A specific portal for HP Enterprise Security Products (ESP) partners, offering a wide range of resources including community forums, training materials, and support.

  • **Protect724**: An online community portal providing access to product documents, webinars, video tutorials, and more related to HP security solutions.

  • **CloudShare Virtual Demo/PoC Environment**: Offers a virtual environment for testing and demonstrating ArcSight capabilities.

These resources are intended for internal use by HP partners and should be accessed through the provided URLs or via registration as outlined in the document. This text appears to be a series of copyright notices for various internal tools and resources related to software distribution and support provided by Hewlett-Packard (HP). Each entry includes the year of copyright, the name of the company, a description of the content or service, and sometimes a link or reference number. The descriptions are as follows: 1. **Web Portal with Sales Content – Tools – Price Lists and Demo Licenses**: This suggests that there is an internal web portal within HP's network used for accessing sales-related materials such as price lists and demo licenses for their software products. 2. **Software Support Online - SSO (Software Support Online)**: Refers to a support system or online service provided by HP, possibly for troubleshooting or assistance with the company’s software products. 3. **New Flavor**: This could indicate that there is an update or new addition to one of their software offerings, though without more context it's unclear what this specifically refers to. 4. **HP Software – Software Distribution / Documentation Portal**: Indicates a portal for distributing and accessing documentation related to HP’s software products, likely used internally by the company and its partners for reference or training purposes. 5. **HP ESP - Software Evaluation Portal**: A portal designed for evaluating or testing HP's software products, possibly allowing internal users to test features, functionalities, or security measures before public release. 6. **HP Enterprise Security University**: Mentions a virtual learning environment focused on enterprise security training and education related to HP’s software offerings. 7. **CloudShare**: A platform for cloud-based services provided by HP, possibly used internally for collaboration, data sharing, or remote access within the company and its partners. Each entry ends with a copyright notice from Hewlett-Packard Development Company, L.P., reminding users that the information is subject to change without notice and restricted to internal use only by HP and their partner companies.

Disclaimer:
The content in this post is for informational and educational purposes only. It may reference technologies, configurations, or products that are outdated or no longer supported. If you have any comments or feedback, kindly leave a message and it will be responded to.
