Logger Demo Script 2016.05
- Pavan Raja

- Apr 8, 2025
- 18 min read
Summary:
The provided text discusses the functionality of HPE ArcSight Logger (the source document is marked HPE Confidential), which is used for monitoring events and transactions across servers and web applications. It highlights several capabilities, including transaction operation time thresholds; the ability to export and import content such as alerts, dashboards, filters, parsers, saved searches, and source types; the flexibility to develop and share custom content with other users through those export and import options; and enhancements such as scheduled updates for Logger lookup tables. These features are demonstrated with practical examples, including using Logger to analyze POSTFIX mail events and to perform lookups against a list of Tor exit nodes.
### Need for a SmartConnector: A SmartConnector is a software component that facilitates the integration of different systems, applications, or data sources through a unified interface. It enables automated data exchange and enhances interoperability between various tools and platforms. The need for a SmartConnector arises due to the increasing complexity and diversity of IT environments where multiple systems and technologies are used to manage and monitor transactions and events.
### Grouping Based on Common Values: Grouping based on common values is essential for organizing data in a meaningful way, especially when dealing with large volumes of information from various sources. This can be achieved through features like transaction grouping, which allows users to categorize similar types of transactions or events together. For example, in the context of POSTFIX mail transfer agent, events grouped by QueueID demonstrate how specific categories help in managing and analyzing communication within a system.
### Practical Example: Consider the scenario where Logger is used to analyze mail events from POSTFIX. Events can be grouped by their respective QueueIDs, the unique identifiers POSTFIX assigns to each message as it moves through the mail queue. This grouping helps in monitoring and troubleshooting specific messages or queues that might be experiencing issues without cluttering the analysis with unrelated data.
### Transaction Grouping: Transaction grouping is a feature within Logger where similar transactions or events are combined into groups based on shared characteristics such as source IP address, destination port, protocol type, etc. This aids in simplifying complex logs and making it easier to identify patterns, anomalies, or potential security issues across multiple servers and networks.
### Custom Content Export/Import: The ability to export custom content from Logger, including alerts, dashboards, filters, parsers, and saved searches, allows users to share specific configurations with other team members or partners without the need for extensive technical knowledge. This can be particularly useful in environments where multiple teams are involved in managing different aspects of a complex IT infrastructure.
### Scheduled Updates for Logger Lookup Tables: Scheduled updates for logger lookup tables ensure that critical data such as IP addresses, URLs, and domain names are up-to-date and relevant to current threats and trends. This proactive approach helps in enhancing the accuracy and effectiveness of security monitoring by providing real-time or near-real-time updates based on predefined schedules.
### Conclusion: The functionalities within HPE ArcSight Logger, such as transaction grouping, custom content export/import, and scheduled updates for lookup tables, are crucial for efficient data analysis and management in IT environments. These features not only aid in organizing large volumes of information but also provide valuable insights into potential security threats and performance issues across various systems and networks.
Details:
This document/quotation is a technical and legal notice provided by Hewlett Packard Enterprise (HPE) for internal evaluation purposes only. It contains confidential information about HPE's current products, sales, and service programs that are subject to change at HPE's discretion. The recipient agrees to maintain the confidentiality of this information and not reproduce or disclose it to any person or entity outside the group responsible for evaluating its contents without authorization from HPE.
The document includes demonstration scripts for HP ArcSight Logger, which require a demonstration virtual machine running at least Logger v6.1. It provides step-by-step instructions for configuring Internet Explorer settings and accessing iPackager. Additionally, it outlines various use cases such as data analysis, security, compliance, IT operations, application development, NetFlow, raw events, regex, analyzing machine transactions, scheduled updates for Logger lookup tables, URL analysis, the insubnet operator, decoding URLs, a 15-panel dashboard, DNS Malware Analytics, and the FortiGate–Logger partnership.
The document also mentions that pricing estimates are valid for 30 days from the date of submission, with a preference for hard copy over electronic formats if there's any discrepancy between them. It encourages addressing concerns, questions, or issues directly through local sales representatives. The information provided is believed to be accurate and relevant but comes without warranties or liability claims by HPE.
This document outlines how to use Logger, a log collection and analysis application, focusing on determining whether malicious IP addresses from a Lookup File have communicated with web servers. The demonstration uses a Lookup File called Malicious_Addresses, which contains known malicious IP addresses along with a category type and a score for each.
To begin, the user is required to access iPackager for the first time, where they will encounter two Security Warning pop-up dialog boxes; selecting "Continue" and then "Run" will proceed further into Logger settings. After opening the application, the main function being demonstrated is using Lookup Files to correlate events in an environment by statically correlating IP addresses from events with those on a malicious list.
The demonstration involves:
1. Setting up a search query targeting port 443 communication events. This query can be extended directly, or through pre-saved filters and the lookup operator, to narrow results to events containing malicious IP addresses (a hedged query sketch follows this list).
2. Enhancing search results using fields from the Lookup File, which provides contextual information about why an IP address is on the list (category type) and its level of activity as indicated by a score. This enrichment can include additional data such as geo-tagging or user identification for better analysis.
3. Using categorization through SmartConnector to refine search results further but noting limitations due to a regression bug in Logger 6.1 which affects charting from Lookup tables until resolved.
4. Exporting the final search results if needed, allowing for creation of new Lookup Files based on these findings and specific attackers.
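As a rough illustration of steps 1 and 2, here is a minimal sketch of what such a search might look like in the Logger search box. The Lookup File name Malicious_Addresses comes from the demo; the join field (sourceAddress) and the lookup operator's argument order are assumptions, so check the Logger Search Reference for the exact syntax in your version:

```
destinationPort = 443 | lookup Malicious_Addresses sourceAddress
```

Once the lookup columns (the category type and score described in step 2) are joined to the matching events, they can be treated like any other field, for example to sort or chart the results.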
In summary, this document provides instructions and procedures for using Logger features such as Lookup Files to correlate malicious IP addresses with network communications, enhancing data with contextual information, and exporting analysis results for further investigation.
The summarized text discusses the process of exporting search results from Logger, creating a new Lookup File with this data, and using it for further analysis and reporting within the HPE ArcSight system. Key steps include:
1. **Exporting Search Results**: From Logger, users export their search results to a CSV file by clicking "Export" then selecting "Download results." They must select a location to save the exported file before downloading it.
2. **Creating a New Lookup File**: The next step is to create a new Lookup File from the downloaded CSV file:
   - Navigate to "Select Lookup Files," then find and click "Add" for the desired Lookup File, such as "Active_Attackers."
   - Click "Browse..." to locate the saved export file, select it with "Open," confirm by clicking "Save," select "Active_Attackers," and finally click "Done."
3. **Using the Lookup File for Analysis**: Once the new Lookup File is created, users can use Logger's search capabilities to analyze events more deeply:
   - Perform a search using terms like "sea," which allows users to access Saved Searches or Filters via autocomplete in the search box. For instance, searching "$ss$" opens saved searches, and querying "$filter$" accesses filters.
   - The Lookup File enhances event data with contextual information, enabling categorization of events such as successful attacks identified by malicious IP addresses connecting to web servers like Apache.
4. **Reporting**: Logger's reporting capabilities allow users to generate reports on various aspects of logged activities:
   - Users can explore "Top Malicious Activity" in dashboards and drill down into details using pie charts or other visual representations. Reports are device-independent, covering events across different devices like Windows, UNIX, and mainframes without requiring additional third-party applications.
5. **Categorization**: HP ArcSight's categorization feature ensures consistent handling of logged data across all devices involved in an organization's operations. This includes logins from various device types that might be added or changed through corporate acquisitions (e.g., different firewall vendors).
Overall, this text outlines the process for exporting and analyzing security-related events using Logger within the HPE ArcSight system, emphasizing its value for both audit requirements and real-time analysis by security teams.
The provided document outlines various functionalities of Logger, HPE's log analysis tool. Key features include re-running reports over different time periods, creating custom reports such as "Top Failed Logins by User," and analyzing HTTPS traffic using basic search functions such as chart, top, rare, and tail.
For instance, the report "Top Failed Logins by User" can be adjusted or run over a different time period, and it requires no modification when new device types or event categories are added. This flexibility allows Logger to adapt seamlessly as devices or device types are added. When dealing with HTTPS traffic, users can search using criteria like destinationPort=443 and switch the field set to Security for more detailed analysis.
The document also highlights the importance of categorization in analyzing events; even when adding new devices or types, categorization applies automatically without requiring modifications to reports. This ensures that all relevant data is captured efficiently.
Additionally, Logger offers advanced traffic analysis capabilities through its search functionality. By entering destinationPort=443 and using super indexed fields like destinationPort, users can quickly detect patterns in HTTPS traffic, identify servers running HTTPS, and monitor the type of traffic generated over time. The tool also allows for drill-down into specific events via a dashboard for deeper investigation, particularly useful for detecting least common event occurrences.
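To make this concrete, here is a minimal, hedged sketch of the kind of searches described above, each line being a separate query. The field names are standard ArcSight/CEF fields, but which fields are worth charting depends on the data in your environment:

```
destinationPort = 443 | top destinationAddress
destinationPort = 443 | rare deviceEventClassId
```

The first query surfaces the servers receiving the most HTTPS traffic; the second surfaces the least common event types on that port, which is the "least common event occurrences" angle mentioned above.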
In summary, Logger provides robust data analysis tools that adapt dynamically to changes in device types or event categories, offering users advanced capabilities such as monitoring HTTPS traffic and investigating suspicious activities through customizable reports and detailed analytics.
The provided text outlines a demonstration of Logger's functionality for searching through large datasets in order to detect unusual activity or rare events. It begins by explaining how to use the simple sum command within Logger to aggregate bytesIn and bytesOut fields per device, allowing for analysis of traffic volume and potential anomalies. This is done specifically for devices connected to port 443, with the results displayed as "total_bytes_In" and "total_bytes_out". The search can be conducted over different time intervals to assess normality over time.
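The summary calls this the "sum command"; in Logger's search language, summation is normally expressed as an aggregation function, so a hedged sketch of the per-device byte totals might look like the queries below. Whether both sums can be combined into a single chart, and how the result columns end up named (the demo shows total_bytes_In and total_bytes_out), depends on the Logger version, so treat this as an approximation:

```
destinationPort = 443 | chart sum(bytesIn) by deviceAddress
destinationPort = 443 | chart sum(bytesOut) by deviceAddress
```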
The text then moves on to discuss Logger's capability for event normalization and categorization, showcasing how it can group and categorize traffic against port 443 efficiently. It highlights the importance of such features in analyzing network traffic generated by various devices.
Next, there is a section dedicated to finding rare events within massive datasets using Logger Super Indexing. This feature utilizes Bloom filter algorithms to quickly determine the existence of search terms across vast data sets, which provides significant performance benefits when searching for rare events. The text includes examples demonstrating how this functionality can be applied in practical scenarios, such as identifying a rare login event that had occurred once within 10 million potential events.
Finally, the demonstration concludes with a focus on security use cases, specifically incident response and forensics. It introduces Logger's ability to display custom Login Banners and navigate through various dashboard panels providing relevant information for investigation purposes. The text also mentions how users can easily view more detailed event data by hovering over specific pie chart slices in the NetFlow Top Destination Ports panel.
Overall, this text provides a comprehensive overview of Logger's capabilities in handling large-scale data analysis tasks and serves as an example of its application in security and network monitoring scenarios.
This document provides a demonstration of how to use HPE Logger for various purposes, including compliance and network operations dashboards, as well as performing searches and customizing panels. It covers the following steps:
1. **Accessing Dashboards**: Users can switch between different dashboard types like Network Operations, Compliance, and Monitor dashboards through a dropdown menu in the interface.
2. **Customizing Panels**: In the Logger interface, users can click on Edit icons for specific panels to change their format (e.g., column, bar, pie, area, line, stacked column, or stacked bar). They can also adjust the number of top entries displayed, such as the top ten events in each panel.
3. **Role-Based Access**: Logger implements role-based access control, ensuring that users only see relevant data and events based on their roles and interests.
4. **Compliance Dashboard**: This dashboard is designed to display compliance-related items like configuration changes, failed logins, and user privilege modifications. It can be customized according to specific environment needs.
5. **Network Operations Dashboard**: Focuses on network-centric monitoring with features such as NetFlow by Destination Port, Firewall drops by Source, Network – Port Links Up and Down, and NetFlow Destination Autonomous System.
6. **TippingPoint Dashboard**: Summarizes events from TippingPoint IDS/IPS/Next Gen Firewall, including critical and major severity attacks and attacks categorized by ArcSight.
7. **Search Functionality**: The Logger interface is user-friendly, allowing straightforward text input for searches (e.g., typing 'dgchung' in the search box). Users can adjust settings like time windows and field sets to refine their searches.
8. **Advanced Search Options**: Utilizes structured searches with options such as Contains, Starts With, Ends With, and more to pinpoint specific data within logs. The Color Block View simplifies complex queries by providing a visual representation of the query results.
This document serves as a guide for efficiently using HPE Logger for various security and operational monitoring tasks.
The provided text outlines a demonstration of using a search tool called Logger for investigating potential FTP events related to a user with usernames "dgchung" or "dchung". The steps involve refining the search, utilizing built-in graphical charting, exporting results, running reports, and reviewing compliance insights.
1. **Initial Search Setup**: The user starts by searching for terms like "ftp", but no hits are found initially. This is simulated in a real investigation scenario where outcomes aren't known at the start of the search. Adjusting the query to "(dgchung OR dchung)" helps identify potential FTP events, narrowing down results to show those related to China (CN).
2. **Utilizing Logger Search Helper**: The user learns about and uses the Logger search helper by adding "| top" after entering terms; the helper suggests commands and assists in refining searches more effectively. For example, this approach reveals that the investigated person has been involved in creating users, granting roles, clearing audit logs, and sending suspicious files via FTP (a hedged query sketch appears after this section).
3. **Charting Activities**: The user can quickly chart a summary of activities by clicking on Chart Settings to change the result chart format from default to pie or other types like column, bar, area, line, stacked column, or stacked bar. This visual representation helps in understanding the person's activity more clearly.
4. **Exporting Results**: Logger offers multiple options for exporting data including saving locally, to Logger, as PDF or CSV, and various chart formats. The user can export their findings based on specific criteria.
5. **Running Reports**: Beyond searching and visualizing data, Logger allows users to run reports that summarize activities of the investigated individual. This includes generating a report for "chung" which reveals FTP events related to China (CN). Users can modify default options in runtime parameters as needed.
6. **Compliance Use Case**: The demonstration extends to compliance use cases where specific regulatory requirements are addressed, such as PCI Compliance. It covers aspects like managing log retention, automated log review, and real-time alerts for compliance issues through personalized dashboards accessible only to relevant users.
7. **Custom Login Banners**: Logger allows the customization of a login banner that can be used to describe company policies and ensure user acknowledgment. This feature is particularly useful in demonstrating regulatory compliance by providing context and user agreement mechanisms within the application itself.
This demonstration script showcases how Logger can be effectively utilized for investigations, reporting on potential security risks, and managing compliance requirements through role-based access control and personalized dashboards.
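As a hedged sketch of steps 1 and 2 above, the searches might look like the lines below (each line is a separate query). The usernames come from the demo; the field passed to top is an assumption, since the point of the search helper is that it suggests valid fields and operators as you type:

```
(dgchung OR dchung)
(dgchung OR dchung) | top name
```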
Compliance reports are organized by specific standards, such as PCI Req 2 - Default Account Usage. To review default accounts used in an environment, navigate to the report and choose a vendor's product example like "Default account usage" under Acct Usage Drilldown. Select Quick for best-practice settings and run with default options. The report will show which default accounts are being used, with details of the "sa" account likely appearing on page 2. Users can drill down further by clicking hyperlinked results or by creating custom reports.
For analyzing real-time alerts, go to "Analyze," then select "All Alerts." Set the time window as needed and review base event triggers for PCI Requirement 2 - Default Account Usage alert. Configure alarms with customizable conditions that send notifications via email, SNMP, Syslog, or Logger's correlation engine.
For managing log storage settings, navigate to Configuration, then Settings where you can set up rules for different devices like PCI and SOX compliance requirements. Enable automatic assignment of log streams to storage groups based on retention periods suitable for audit purposes. Restrict user permissions in Logger via role-based access control to ensure data integrity and compliance with regulations.
In the IT Ops use case, investigate a web server down by searching logs containing "web1.arcnet.com". Utilize quick search functionality or adjust time windows as necessary to locate relevant issues promptly for swift resolution.
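A minimal sketch of that quick search: Logger treats a bare term as a full-text search across all collected events, so simply entering the host name (and adjusting the time window in the UI) is enough to start the investigation:

```
web1.arcnet.com
```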
The scenario involves a corporate web server experiencing issues and being down; IT Operations is tasked with diagnosing and fixing the problem. As an analyst, you need to understand the significance of specific events occurring in the logs related to denial-of-service (DoS) attacks originating from North Korea, targeting requests for the same document repeatedly.
To investigate further, follow these steps:
1. Search the web server logs and set the field set to Webserver. Look for patterns or significant events that indicate a DoS attack by scrolling through the logs.
2. Note an event where you see hundreds of requests for the same document coming from North Korea; this suggests a potential cyber-attack. In this case, it's noted as GET denial of service (DoS) attack.
3. Check configuration modifications on devices not directly managed by your team, such as a /Modify/Configuration event on the firewall that might be linked to the server issue. You may need to change search terms and add new criteria to find the relevant events.
4. Utilize SmartConnector categorization to simplify understanding of complex network events, making it easier to build filters, rules, reports, and dashboards for specific detections such as failed firewall activity, using "Category Device Group=/Firewall" and "Category Outcome=/Failure" (a hedged query sketch follows this section).
5. Use the Logger system to search for significant configuration events related to your web server (e.g., changes by user mike affecting dmzfw1) that might be causing the outage. You can also run standard reports or customize them based on specific criteria like destination host names containing "dmzfw1."
6. Monitor real-time event occurrences in the Live Event Viewer, focusing specifically on activities related to the webserver under issue (e.g., web1.arcnet.com). This helps in ongoing troubleshooting and response during direct contacts with the responsible parties.
7. Close the Live Event Viewer once further analysis is complete or as directed by management.
The use of a categorization tool simplifies complex events, making them more understandable across different teams and devices. The Logger system provides robust search, reporting, and real-time monitoring capabilities that aid in efficient IT operations and incident response, especially during critical business hours where changes to server configurations are being made.
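A hedged sketch of the categorization-driven searches from steps 4 and 5. In Logger's search syntax the category fields are usually written as categoryDeviceGroup, categoryOutcome, and categoryBehavior; those exact field names and the use of dmzfw1 as a bare keyword are assumptions based on the demo narrative:

```
categoryDeviceGroup = "/Firewall" AND categoryOutcome = "/Failure"
categoryBehavior = "/Modify/Configuration" AND dmzfw1
```

The first query isolates failed firewall activity regardless of firewall vendor; the second looks for configuration-change events that mention the dmzfw1 firewall.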
Logger is a tool capable of processing multi-line log files from various sources such as Ruby Application Archive (RAA) logs, Java error logs, SVN version tracking logs, or even parts of tcpdump outputs. It can handle in-house application logs and logs produced by applications using log4j, which often generate multi-line logs that are preserved as single events for analysis.
To use Logger effectively:
1. Go to the search page and enter specific keywords like "raa" or relevant terms for your log type. Adjust the time window if necessary.
2. To analyze multi-line logs, choose the appropriate field set (e.g., MultiLine AppDev) from the dropdown menu. Widen the message column by dragging its border to view more details.
3. Filter results by clicking specific terms within columns, such as "ERROR" in the deviceSeverity column, to narrow down the events of interest (see the sketch after this list).
4. Load a saved filter, such as Demo App Dev, and perform searches tailored to your needs (e.g., for ERRORs related to Application Development).
5. Export search results or create charts using various options provided by Logger for reporting and charting purposes.
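A minimal sketch of steps 1 and 3: the first line is the plain full-text search, and the second approximates the condition Logger adds when you click ERROR in the deviceSeverity column (the exact condition it inserts may differ slightly):

```
raa
raa AND deviceSeverity = "ERROR"
```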
A practical example is demonstrated in the NetFlow use case:
1. Log into the system as an admin, navigate to Analyze > Search, and enter "netflow dpt=1433" to find network flow events destined for the SQL Server port (1433) (a hedged query sketch follows this list).
2. Adjust the search fields to show all relevant columns in the NetFlow field set. By default, most NetFlow events are one-sided; however, you can add terms like "sourceAddress" to focus on specific sources.
3. Utilize dynamic charting options within Logger for visualizations of data, with features like auto-update and export capabilities for various formats (PDF, CSV).
4. Perform quick searches within NetFlow data to visualize the most popular destination ports or least frequent source addresses by modifying search terms in real-time.
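A hedged sketch of the NetFlow searches described above, each line a separate query. The first is the search from step 1 verbatim; the other two assume the NetFlow SmartConnector has populated the standard destinationPort and sourceAddress fields:

```
netflow dpt=1433
netflow | top destinationPort
netflow | rare sourceAddress
```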
In summary, Logger is a versatile tool that empowers users to analyze complex log files and network data efficiently across different environments using customizable search filters and dynamic charting options tailored for specific needs.
To summarize the provided information, the process involves using a Logger tool to analyze raw events from various sources such as network communications or performance monitors. The steps include searching through these raw events by modifying search parameters and utilizing the Logger Regex helper to extract specific fields like deviceVendor, deviceProduct, RTA (Round Trip Average), etc. This extraction is crucial for parsing RAW data and understanding its value and impact within the context of the logs.
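A minimal sketch of the kind of extraction the Regex Helper builds. The literal "RTA=" token and the regular expression are assumptions about the raw event format; the named capture group is what turns the matched value into a searchable RTA field:

```
"RTA=" | rex "RTA=(?<RTA>[0-9.]+)"
```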
The process also involves using additional features such as Discover Fields in HPE Logger to identify and analyze particular events or patterns within the raw data, which can be useful for troubleshooting and performance monitoring tasks. The ability to export results after analysis allows for further use and reporting on the findings, demonstrating the versatility of the tool across different scenarios including failed login attempts and potentially sensitive information like credit card numbers, which are masked during the extraction process.
Overall, these steps highlight how HPE Logger can be effectively used to parse, analyze, and extract meaningful insights from raw event data, providing actionable information for network administrators and security analysts.
This text discusses the functionality of HPE ArcSight Logger, focusing on its use for analyzing machine transactions and handling sensitive data such as credit card numbers. The demonstration covers several key features, including filtering options, customizing columns, exporting reports, and grouping events based on specific criteria.
1. **Credit Card Masking**: The Logger automatically adds a column named `ccnum` to display masked credit card numbers for each event. The tool can mask any data we need to protect, such as the first digits of the card number or additional digits. This is useful when preparing reports that involve sensitive information like credit card details without compromising security.
2. **Filter and Reporting**: A saved filter named "Demo Credit With Card Mask" allows users to load specific configurations for viewing events related to credit cards, with the numbers masked as required. The tool can group these transactions based on the first digit of the card number, which indicates the type of card (e.g., 3 = Amex/Diners, 4 = Visa, etc.). Users can further analyze this data by adding more detailed fields like `final` and `firstnum`.
3. **Transaction Grouping**: Logger allows higher-level event grouping through the TRANSACTION operator, which groups events sharing a common value, such as the same QueueID in POSTFIX log files. This is useful for analyzing related events or transactions within larger datasets; each group gets its own transaction ID and a count of related events (see the sketch after this list).
4. **Real-Time Event Parsing**: The Logger can read log files directly using File Receivers, parse these into various fields without the need for a SmartConnector, and then group them based on common values to form transactions or groups called TRANSACTIONS. This is demonstrated with an example from POSTFIX, a mail transfer agent used in this scenario to route and deliver electronic mail, where events are grouped by QueueID.
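A hedged sketch of the TRANSACTION grouping from steps 3 and 4, assuming the POSTFIX source-type parser has already produced a QueueID field and that "postfix" appears as a keyword in the raw events; consult the Logger Search Reference for the transaction operator's exact options (for example, limits on events per group):

```
postfix | transaction QueueID
```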
Overall, these functionalities illustrate how the Logger tool can be customized for specific data analysis needs while protecting sensitive information through features like masking and grouping capabilities.
The text provides an overview of features and capabilities of HPE ArcSight Logger (the source material is marked HPE Confidential and subject to use restrictions), a log management tool for monitoring events and transactions across servers and web applications. It highlights several functionalities, including transaction operation time thresholds; the ability to export and import content such as alerts, dashboards, filters, parsers, saved searches, and source types; the flexibility to develop and share custom content with other users through those export and import options; and enhancements such as scheduled updates for Logger lookup tables.
The demonstration of these features is supported by practical examples including the use of Logger to analyze mail events (POSTFIX) and perform lookups on Tor exit nodes, demonstrating the tool's ability to provide context-sensitive help, flexible data handling, and detailed event investigation capabilities. The documentation also outlines how to update lookup tables automatically using a demo script provided within the Logger environment.
This text discusses how to use Logger for URL analysis, including new operations like `len(somefield)` which helps in determining the length of fields such as URLs to identify potential malware or compromised systems. It also introduces a feature called "insubnet operator" that aids in searching events within variable-length subnet mask (VLSM) networks. The text provides detailed steps on how to implement these functionalities using Logger, including examples and screenshots from the user interface for easy understanding.
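A hedged sketch of the two features mentioned here. The placement of len() inside a where clause, the example threshold, and the insubnet syntax (including the example CIDR range) are all assumptions; the Logger Search Reference for your version documents the exact forms:

```
destinationPort = 80 | where len(requestUrl) > 1000
sourceAddress insubnet "10.10.0.0/16"
```

The first query flags unusually long URLs of the kind associated with malware or compromised systems; the second restricts results to a CIDR range, which is what the summary means by searching within VLSM networks.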
The provided text discusses various applications and functionalities of Logger 6.1 and Logger 6.2, a security tool developed by HPE. The main focus is on decoding percent-encoded strings into readable form, as well as demonstrating these capabilities in specific scenarios such as URL decoding, DNS Malware Analytics, and the partnership with Fortinet's FortiGate firewall.
1. **URL Decoding**: In Logger 6.1, there is a feature where users can decode encoded strings entered into a search box or through an Analyze/Search function. The 'requestUrl' field within the tool is particularly noted for being percent-encoded and contains hex values surrounded by percent signs. This process helps in making URLs more readable and understandable.
2. **DNS Malware Analytics**: Introduced with Logger 6.2, this feature allows users to analyze DNS traffic related to potential malware threats using HPE DNS Malware Analytics (DMA). A new dashboard is available for DMA activity, displaying the top offending sourceAddresses and the event names associated with the HPE DMA solution. Clicking on these addresses provides more details about malware types and sources.
3. **FortiGate Partnership**: The partnership between HPE Logger and Fortinet's FortiGate firewall expands their security solutions to include unified collection, storage, identification, analysis, and mitigation of complex threats. This integration allows users to search for events from the FortiGate firewall by filtering with 'deviceVendor="Fortinet"'. Advanced queries can be run using fields specific to the FortiGate field set, such as deviceAction, which indicates actions like pass, blocked, update, deny, etc., helping in threat identification (a hedged query sketch follows this list).
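A minimal sketch of the FortiGate searches, using only the fields named in the text (deviceVendor and deviceAction); each line is a separate query:

```
deviceVendor = "Fortinet"
deviceVendor = "Fortinet" | top deviceAction
```

The second query produces the same top-actions view that the FortiGate dashboard described below summarizes.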
Overall, these functionalities highlight how Logger 6.1 and 6.2 facilitate more effective cybersecurity by improving data handling capabilities and enhancing search and analysis processes for encoded strings and network events.
This document describes the functionality of a dashboard in Logger version 6.1 and later, specifically designed for FortiGate firewalls. The dashboard provides an overview of top actions performed by the firewall, allowing users to investigate specific events further through a search tab. Key features include:
- Displaying the top 10 actions executed by the firewall.
- Clicking on any action leads to the search tab for detailed investigation.
- Incorporation of DMA (DNS Malware Analytics) and Fortinet-specific events and content.
- Additional functionality such as the scheduled Logger lookup update, URL analysis with field length (len), the insubnet operator, URL decoding, and a dashboard with 15 panels.
- Correction of an aggregation issue with IP address fields from lookup tables (a regression bug in the previous version).
- Removal of two panels from the "Top Malicious Activity Dashboard": Top Malicious Attackers and Top Malicious IPs.
This document also includes revision history, noting updates made for Logger 6.2 and 6.1 versions, with enhancements like DMA events, Fortinet content, and improved dashboard features.
