
Logger 6.6 Demo Script

  • Writer: Pavan Raja
  • Apr 8, 2025
  • 18 min read

Summary:

This post summarizes the ArcSight Logger 6.6 demonstration script, which walks through running and customizing reports in Logger, using user logins and events as the working example:

1. **Run a Basic Search**: Perform a search across all fields to gather data on user logins over the last 24 hours. Use the "Demo User Logins" filter in the saved filters section, set the time frame to the last day, and run the search.

2. **Save as Report**: Save this search directly as a report by navigating to the reporting interface, selecting the appropriate filter, and naming the report. Confirm overwriting any existing report with the same name.

3. **Run and Modify the Report**: Run the saved report from the "Reports" section in Logger. In the Explorer, find and run the "Demo User Logins" report, set the start time to last day and the scan limit to 0, and apply the changes. The report then displays a bar chart of event counts grouped by categoryOutcome, destinationUserName, and destinationAddress over the past day.

4. **Enhance the Report**: Right-click the saved "Demo User Logins Enhanced" report and run it in smart format, adjusting settings as above. The enhanced report adds value-based coloring, sparklines, and multiple levels of grouping.

5. **Use FPE (Format-Preserving Encryption)**: Configure Logger to work with Connectors using specific parameters for encryption and decryption: set up a SmartMessage Receiver in Logger, install a connector binary, configure connectivity to a secure data demo server, and run a test alert connector to replay events into Logger.

6. **Visualize Data**: Use a smart report to visualize device types by fields such as deviceVendor and deviceProduct, displaying event volumes as "Tree Map" and "Packed Circles" visualizations; switch the visualization to edit mode to customize its settings as needed.

These steps cover setting up, running, and customizing reports for user login activity within Logger, along with data encryption setup and visualization options.

Details:

**ArcSight Logger 6.6 Demonstration Script - June 2018**

This script provides a comprehensive overview of ArcSight Logger 6.6 features and functionality through a series of demonstrations. It covers use cases including installation of a demonstration license, handling lookup files, device-independent categorization of login failures, data analysis, security and compliance, IT operations, application development, NetFlow analysis, raw events and regex usage, dynamic analysis with smart reports, format-preserving encryption, and visualizations. Additional use cases include analyzing machine transactions, scheduled updates for Logger lookup files, URL analysis, the insubnet operator, decoding URLs, and building a 15-panel dashboard. The script also covers Logger's integration with Fortinet FortiGate and shows how to visualize device types using "Tree Map" and "Packed Circles."

**Key Highlights:**

  • **Installation of Demonstration License**: The script guides through setting up a demonstration license for ArcSight Logger 6.6.

  • **Lookup Files Management**: Explains how to handle lookup files to enhance data categorization and analysis capabilities.

  • **Data Analysis**: Covers analyzing HTTPS traffic, identifying communication with web servers, and searching within large datasets using regex expressions.

  • **Security & Compliance Use Cases**: Demonstrates how Logger can be used for security and compliance monitoring, including detailed case studies.

  • **IT Operations**: Focuses on integrating Logger into IT operations to improve efficiency and decision making.

  • **Application Development**: Provides insights into developing applications that leverage ArcSight Logger functionalities.

  • **NetFlow Use Case**: Explores the use of NetFlow data analysis within the platform for network traffic monitoring.

  • **Raw Events & Regex Use Case**: Discusses how raw events can be analyzed using regex expressions to uncover specific patterns or threats.

  • **Dynamic Analysis with Smart Reports, Sparklines**: Demonstrates advanced analytics capabilities through smart reports and sparkline visualizations.

  • **Visualizations**: Features visual tools like "Tree Map" and "Packed Circles" for better understanding of device types and their interactions.

This script is designed as a hands-on guide, with screenshots indicating key strokes and text entries, making it suitable for both technical and non-technical audiences looking to understand and demonstrate the capabilities of ArcSight Logger 6.6.

**Installing a Demonstration License**

The script first explains how to install a demonstration license for ADP (ArcSight Data Platform) 2.0+, ArcMC (ArcSight Management Center), and Logger 6.3+. Since no license comes pre-installed, you need to install your own:

1. **Required Items**: A Logger ADP base license and a Logger capacity uplift license.

2. **Install Steps**:

  • Switch on ArcMC as an ADP license server.

  • In Logger, go to System Administration → License and Update and apply the ADP base license.

  • Apply the capacity uplift license in ArcMC by browsing to Administration → System Admin.

3. **Verification**: After applying the licenses, check that the "Reporting" menu item appears in the user interface, indicating that the system is now configured for ADP licensing.

4. **Using Lookup Files**: This feature allows static correlation and event enrichment with contextual information. In the demonstration scenario, a Lookup File containing known malicious IP addresses is used to correlate events with web servers running on port 443. The process involves:

  • Importing the malicious IP addresses into a Lookup Table named Malicious_Addresses.

  • Running a search query (e.g., `destinationPort=443`) and applying the lookup operator (`lookup Malicious_Addresses ip as src output *`) to filter events by the specified IP addresses.
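The effect of this lookup-based enrichment can be illustrated outside Logger with a small Python sketch. The table contents, field names, and sample addresses here are hypothetical stand-ins for the demo data:

```python
# Illustrative sketch of lookup-style enrichment: events whose source
# address appears in a "Malicious_Addresses" lookup table are kept and
# enriched with the lookup row's extra columns (hypothetical sample data).
malicious_addresses = {
    "203.0.113.7": {"threat": "botnet C2"},
    "198.51.100.9": {"threat": "scanner"},
}

events = [
    {"src": "203.0.113.7", "destinationPort": 443},
    {"src": "10.0.0.5", "destinationPort": 443},
    {"src": "198.51.100.9", "destinationPort": 80},
]

# Rough equivalent of:
#   destinationPort=443 | lookup Malicious_Addresses ip as src output *
enriched = [
    {**e, **malicious_addresses[e["src"]]}
    for e in events
    if e["destinationPort"] == 443 and e["src"] in malicious_addresses
]

for row in enriched:
    print(row["src"], row["threat"])
```

Only the first event survives both the port filter and the lookup join, and it carries the table's extra column alongside the original event fields.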

5. **Navigation**: Use the "Take me to…" navigation feature in Logger to quickly access specific features simply by typing their names.

Together, these steps show how to configure and use ADP licensing, including managing licenses for multiple devices through ArcMC, and demonstrate the value of static correlation with Lookup Files in enhancing event analysis.

**Search UI Improvements and Lookup File Enrichment**

The Logger user interface uses colors in column headers to indicate the indexing status of fields, making searches easier to build and read. Lookup Files can enhance search results by adding contextual data such as deviceProduct and deviceVendor from the Lookup File, along with categorization details such as category type and score. The process involves exporting search results to a CSV file, creating a new Lookup File from it, and using that file for further focused searches.

There is a known issue with charting IP address fields in Logger due to a regression bug; until it is fixed, the workaround is to run the search without the affected term 'ip_Malici'. The script also demonstrates how to create a dashboard of top malicious domains and IPs using Lookup Files, showing detailed information about specific events such as SQL injection attacks.

**Device-Independent Categorization of Failed Logins**

ArcSight Logger can search across devices such as Windows, UNIX, and mainframe systems to categorize failed login attempts. Device-independent categorization helps organizations maintain consistency in their security measures when adding or changing underlying device vendors. The script also covers creating and using reports within Logger for auditing purposes and for monitoring traffic such as HTTPS communications with web servers. Key points include:

1. **Categorization Benefits**: ArcSight's categorization ensures uniform, consistent handling of events across different devices, avoiding reliance on third-party applications whose completeness or reliability may vary.

2. **Search and Reporting**: The script walks through setting up a search for failed logins, demonstrating how to create filters and run searches that aggregate data from various device types (e.g., firewalls, VPNs, Unix systems) into one view, simplifying auditing requirements and giving teams shared access to important security information.

3. **Data Analysis**: The example shows how to use basic search functions to analyze traffic patterns related to HTTPS communications with web servers. This helps identify suspicious activity and shows which devices are engaging in it, improving the overall network security posture.

4. **Reporting Flexibility**: Reports can be customized for specific time periods or changes in device types, so new events and categories integrate automatically with existing reporting structures without manual adjustments.

Overall, this part of the script shows how to use ArcSight Logger to manage and analyze security-related data across heterogeneous devices, providing valuable insights for both auditing and ongoing monitoring.

**Network Traffic Analysis with top, tail, and sum**

The script then turns to network traffic analysis, particularly detecting the least common events (using the 'tail' operator) and summarizing data by device or product (using the 'sum' operator). Logger can aggregate data from sources and destinations based on criteria such as IP addresses and ports. While high-frequency events are easy to detect, less common events require careful monitoring for unusual activities.
Logger handles large datasets efficiently with operators like 'top', 'tail', and 'rare', which let users focus on specific patterns or outliers within vast amounts of data. It also normalizes and categorizes traffic, providing a comprehensive overview even in complex network setups. Events can be analyzed across different time intervals and devices, helping establish the normal behavior of network traffic over time. Event normalization and categorization matter here: ArcSight normalizes byte counts from varied products, which simplifies analysis significantly.

In the demo, the system is loaded and a search is run for events such as logins, which typically takes about 10 seconds to return one event from seven years of data. Scan rates can reach up to 100 million events per second in production environments. Searching for a non-existent item, such as an IP address with its last octet changed from .44 to .24, returns "NO EVENTS" within a few seconds.

For security use cases, Logger provides features such as customizable login banners and role-based access control, useful in incident response and forensic investigations. Users can create custom dashboards for specific interests or roles, such as compliance and network operations, tailored to organizational needs.
For instance, the security dashboard displays failed logins and top destination ports, while the Network Operations dashboard shows detailed views of NetFlow by destination port, firewall drops, and other network metrics. The system also tracks intrusion events such as malicious code activity and configuration changes, providing a comprehensive view for compliance monitoring.

**Search Walkthrough: Investigating a User**

The script then walks through refining and executing a search query, much as you would in a search engine:

1. **Initial Search**: Start with an unstructured search by typing "dgchung" into the search field and clicking 'Go!', then change the search time to the last hour to focus on recent activity.

2. **Device Vendor Breakdown**: Click deviceVendor under Field Summary to see where the string "dgchung" occurs across vendors such as ORACLE, Microsoft, and Vontu, including the number of events and their percentage distribution.

3. **Customizing the View**: Hide or minimize the automatic Field Summary view to free up workspace, and customize the field set to include only the security fields relevant to the investigation.

4. **Structured Search**: Click 'Advanced' and specify parameters such as sourceHostName with the "Contains" operator to find servers containing "finance" in their hostname, illustrating how structured queries enable more targeted searches.

5. **Adding Conditions**: Refine further by adding terms such as "ftp" (which yields no results here) and modifying the query with logical operators like OR and | (pipe), aided by suggestions from Logger's search helper.

6. **Charting and Export**: Adjust visual charts via 'Chart Settings' and export results in various formats via 'Export Results'.

7. **Reporting**: Create custom groups of reports under 'Explorer' (a favorites group) and run predefined reports such as the "User Investigation" report, replacing the default term with "chung"; Logger can then generate reports suitable for management.

8. **Dashboards**: Navigate the menus to custom or default dashboards, including a security dashboard showing the SANS Top 5 IDS Alerts report along with other options such as ArcSight links.

This step-by-step flow emphasizes flexibility in search parameters, visualization, and customization based on findings during an investigation.

**Compliance Use Case**

Logger also serves as an auditing tool for demonstrating regulatory compliance, such as PCI (Payment Card Industry) compliance. The scenario involves reviewing PCI-related reports, starting with events such as FTP activity involving 'chung' in either the source or destination usernames, which is useful for investigative purposes. Users can customize the report by specifying queries on the fly for the default storage group and running the report; the generated report includes all 'chung'-related events with their associated details such as username activity.
Icons at the top of the report allow exporting in different formats (PDF, MS Word, MS Excel, CSV) and scheduling automatic delivery to an inbox. To log out securely from Logger, the user clicks Logout.

For regulatory compliance, the PCI Compliance Insight Package provides pre-built reports that help meet PCI standards, along with customizable login banners and automated alerts triggered by conditions such as default account usage. Each user has a personalized dashboard where events relevant to PCI compliance are shown according to their role (for example, auditor). Reports can be exported in various formats and scheduled for regular delivery via email. Optional steps cover configuring real-time alerts based on occurrences or conditions specified by the user, which is particularly useful for monitoring behaviors like default account usage without waiting for a scheduled report. Logger also supports multiple storage groups with different retention periods depending on the type of log, adjustable to compliance requirements such as PCI's one-year minimum retention period.

**IT Operations Use Case**

For IT operations, Logger automates log management and configuration and supports security compliance regimes such as PCI (Payment Card Industry) and SOX (Sarbanes-Oxley). Storage rules can automatically assign incoming logs to specific storage groups based on the desired result. Auditor accounts have read-only access to configurations, so auditors can verify settings but cannot alter them.
In an IT Ops scenario, Logger helps quickly diagnose issues such as a down web server: analysts search and analyze log data directly from the interface without needing deep expertise in networking or the specifics of each server. The Connector categorizes events into human-readable form, supporting broader team roles beyond just those who manage specific devices. Logger's reporting tools can generate standard reports and customize them with parameters for detailed investigations, supporting various compliance requirements while efficiently addressing operational issues.

One example sets up a focused report on configuration changes where the "Destination Host Name" field is "dmzfw1", runs it, and optionally emails it to management. Next, the Live Event Viewer monitors real-time events involving the web server "web1.arcnet.com". For application development logs, Logger handles multiline log files; searching for "raa" and filtering for errors within the last hour gives focused insight into issues in Ruby Application Archive logs and similar multi-line formats. In the NetFlow use case, a search for traffic to a Microsoft SQL Server on port 1433 identifies the sources, which are then analyzed with statistical fields such as bytesIn. For dynamic charting, different chart types and settings can be chosen to match the analysis at hand.

**Dynamic Charting, NetFlow by Port, and Raw Events**

1. **Dynamic Charting**: Hovering over a pie slice on a chart shows the ages and number of events.
These charts are live and auto-refresh according to user settings, with options for exporting in formats such as PDF or CSV.

2. **NetFlow by Port**: A quick search within NetFlow data charts counts by destination port; port 123, for example, is the network time protocol (NTP) port. Searches can be saved and results exported in different formats.

3. **Raw Events and Regex**: A 'not cef' filter brings all raw event content into view. The search is then refined with terms like 'loss nagios ALERT' to focus on network performance issues indicated by high round-trip averages (RTA). The Logger Regex Helper parses the raw events and assigns meaningful names to variables based on identified fields, such as RTA for Round Trip Average.

4. **Additional Use Cases**:

  • **Failed Logins**: Users can search through raw events for failed logins using the 'Discover Fields' capability, which lets the system identify relevant keywords automatically. This helps in quickly locating and analyzing events related to failed login attempts.

  • **Exporting Results**: Like other functions within Logger, results can be exported in multiple formats including saving locally or exporting directly to a storage location for further use or reporting purposes.
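The Regex Helper workflow described above — parsing a raw event and assigning a meaningful name such as RTA to a captured field — maps directly onto named groups in an ordinary regex. The sample log line and field layout below are hypothetical, not taken from the demo data:

```python
import re

# Hypothetical raw syslog line in the style of a Nagios packet-loss alert;
# the exact demo event format is not shown in the script.
raw = "nagios ALERT host01/PING WARNING rta=120.50ms loss=20%"

# Named groups play the role of the Regex Helper's user-assigned field names.
pattern = re.compile(r"rta=(?P<RTA>[\d.]+)ms\s+loss=(?P<loss>\d+)%")

m = pattern.search(raw)
if m:
    rta = float(m.group("RTA"))   # Round Trip Average in milliseconds
    loss = int(m.group("loss"))   # packet loss percentage
    print(rta, loss)
```

High RTA values extracted this way are exactly the kind of field you would then chart or filter on in the demo's performance investigation.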

In summary, the Logger tool facilitates advanced network data analysis by providing tools for searching, charting, and exporting detailed information from raw events, using features like regex parsing and auto-refreshing charts.

The script then covers analyzing and masking credit card numbers from raw events using the Logger Regex Helper:

1. **Expanding Search Time**: If needed, expand the search time to include 'Last hour' data when looking for specific patterns such as failed login attempts or other relevant events.

2. **Using Regex for Data Extraction**: Adjust the regex query in the Logger Regex Helper to better match the format of the raw event data. For instance, changing `"Username=(?<^\s,;>+)"` to `"Username=(?<^\s,;>+):"` can help if there are issues with capturing all relevant information.

3. **Masking Credit Card Numbers**:

  • Navigate to the Events panel and click on one of the events to expand it.

  • Use the Logger Regex Helper to isolate credit card numbers from the raw event data. Rename the extracted field as `ccnum`. This tool parses the raw event, applies a regex pattern, and isolates specific fields which can then be renamed for better clarity and relevance.

  • The system automatically adds a column named `ccnum` to the grid that displays this information for each event. It masks all but the first digit of the card number to protect privacy while still allowing for useful reporting.
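The extraction and masking described in these steps can be sketched in Python. The sample events, field layout, and first-digit card-type mapping (3 for Amex/Diners, 4 for Visa, 5 for MasterCard, 6 for Discover, per the script) are illustrative; real demo data differs:

```python
import re

# Hypothetical raw events containing card numbers (not real demo data).
events = [
    "purchase ok ccnum=4111111111111111 user=alice",
    "purchase ok ccnum=5500005555555559 user=bob",
    "purchase ok ccnum=6011000990139424 user=carol",
]

# First-digit card-type scheme as described in the script.
CARD_TYPES = {"3": "Amex/Diners", "4": "Visa", "5": "MasterCard", "6": "Discover"}

def mask(event: str) -> tuple[str, str]:
    """Extract the ccnum field, keep its first digit, mask the rest."""
    m = re.search(r"ccnum=(\d{13,16})", event)
    num = m.group(1)
    firstnum = num[0]
    masked = firstnum + "*" * (len(num) - 1)
    return masked, CARD_TYPES.get(firstnum, "unknown")

for e in events:
    print(mask(e))
```

Keeping only the first digit preserves enough information to report on card type (the `| top firstnum` step below in the script) without exposing the number itself.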

4. **Analyzing Masked Data**: To report on credit card types (Visa, MasterCard, Discover, etc.), add `| top firstnum` to the search query and click Go! This generates a chart that can be exported as PDF for distribution. Logger automatically adds the masked `ccnum` column as required.

5. **Masking by Card Type**: Card types are identified by their initial digit: 3 for American Express and Diners Club; 4 for Visa and related cards; 5 for MasterCard and similar cards; 6 for Discover. Logger can mask any data based on user requirements to prepare reports suitable for distribution without exposing sensitive information.

In short, this use case securely analyzes raw event data with the regex helper to identify and mask credit card numbers while still providing reporting that complies with privacy standards.

**Parsing Mail Log Files**

The script then shows how to use ArcSight Logger to parse and analyze POSTFIX mail logs. A parser named "Mail_Log_Files" splits log lines into fields such as time, hostname, process name, subprocess name, PID, and QueueID. Email events sharing the same QueueID can be grouped into transactions, enabling analysis on this common value (transactionid). The script also explains how to configure Logger settings, export content, and build dashboards for visualizing the parsed data, including drillable dashboards that let users investigate event details by drilling into specific chart columns. Logger can also integrate external updates, such as scheduled lookups from an external source (like Tor exit nodes) refreshed through a provided script, demonstrating its capability for dynamic data integration and analysis.
**Lookup Tables, URL Lengths, and the insubnet Operator**

The script next focuses on analyzing URL lengths within network communication logs:

1. **Using Lookup Tables**: How to interact with lookup tables in Logger, including editing them on a schedule and examining details such as file location and reload schedules.

2. **Search Functionality**: Searching within Logger using keywords and specific queries, saving searches for future use, and loading them directly into the interface.

3. **Evaluating URL Lengths**: Logger derives a new field (urllength) that measures the length of each URL in incoming communications, helping identify unusually long URLs that may warrant extra investigation as malware or compromise indicators.

4. **The insubnet Operator**: A new operator, 'insubnet', searches within variable-length subnet mask (VLSM) networks, expanding the scope of network security analysis.

5. **Multi-Select**: Users can select multiple events matching specific criteria and apply further queries or filters to the selection, enhancing targeted investigation.

In practice, these features help uncover insights from network communications data related to security concerns such as malware and compromised systems.

**Search Refinement, URL Decoding, Dashboards, and FortiGate Integration**

The closing sections cover the Micro Focus Logger interface for searching, analyzing, and visualizing data:

1. **Search Setup**: Set up a search query using criteria such as deviceProduct, categorySignificance, and sourceAddress subnet, either by entering keywords into the "Take me to..." box or directly on the Analyze/Search page, then refine the query with additional terms such as CONTAINS filters.

2. **URL Decoding**: Decode URL-encoded strings by selecting the 'name' field as "Accessed URL", applying an eval operation that uses the urldecode function on the requestUrl field, and then comparing both fields in the displayed results.

3. **Dashboard Display**: A new "Top Values" dashboard can show up to 15 panels at once, letting users view aggregated top values from different fields simultaneously.

4. **FortiGate Integration**: Logger integrates with Fortinet's FortiGate firewall, enabling collection and analysis of security-related events such as device actions and vendor details.

5. **Smart Reports and Sparklines**: "Smart View" provides dynamic analysis, embedding high-resolution sparkline graphics into reports and letting users interactively visualize data by adding or removing elements, grouping, sorting, and more.

The script then repeats the report-building walkthrough summarized at the top of this post: running a basic search for user logins, saving it as a report, running and modifying the report, enhancing it with value-based coloring, sparklines, and multi-level grouping, configuring Format-Preserving Encryption with a SmartMessage Receiver and connector, and visualizing device types with "Tree Map" and "Packed Circles."

**Version History**

Logger Smart Reporting has been updated across releases. In June 2018, Logger 6.6 added SecureData Format-Preserving Encryption and a SmartReport that allows multiple visualizations to be added dynamically without additional database searches. In February 2018, Logger 6.5 incorporated Smart Reports into the "Dynamic Analysis using Smart Reports, Sparklines" use case. In February 2017, the interface was converted to the Micro Focus template and branding, with a minor rename of the Security Use Case report from DEMO - User Investigation to User Investigation. Earlier updates had added DMA events and content (April 2016) and scheduled Logger lookups and URL analysis improvements (September 2015), among others, all aimed at improving data visualization and functionality within Logger.
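The URL-decoding and URL-length steps described above (an eval operation applying urldecode to the requestUrl field, and a derived urllength field) have direct analogues in Python's standard library. The encoded sample string here is hypothetical:

```python
from urllib.parse import unquote

# Hypothetical URL-encoded request, standing in for a requestUrl field value.
request_url = "/search%3Fq%3Dlogger%20demo%26lang%3Den"

# Analogue of deriving the urllength field: unusually long values
# can flag URLs worth extra investigation.
urllength = len(request_url)

# Analogue of: ... | eval Accessed_URL = urldecode(requestUrl)
accessed_url = unquote(request_url)

print(urllength, accessed_url)
```

Comparing the encoded and decoded forms side by side, as the script does with the "Accessed URL" column, makes obfuscated request strings much easier to read in reports.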

Disclaimer:
The content in this post is for informational and educational purposes only. It may reference technologies, configurations, or products that are outdated or no longer supported. If you have any comments or feedback, kindly leave a message and it will be responded to.
