
Logger 6.3 Demo Script

  • Writer: Pavan Raja
  • Apr 8, 2025
  • 20 min read

Summary:

The text is a guide to using HPE's "Logger" tool for analysis tasks such as querying IP addresses, decoding URLs, and analyzing DNS traffic for malware. The main points are:

1. **Querying IP Addresses**: Users can search for IP addresses using different formats such as CIDR notation, wildcard expressions, or specific address ranges. They can also select a time frame to focus on relevant events and investigate specific IP addresses from charts by clicking on them.

2. **Network Search**: After specifying an IP subnet, users can explore different network configurations by adjusting the subnet size using those same formats, which helps identify the networks relevant to a query.

3. **Multi-Select Functionality**: Users can select multiple terms (e.g., "IntruShield" from deviceProduct and "Compromise" from categorySignificance) to refine search results and focus on specific events of interest.

4. **URL Decoding**: The tool can decode encoded URLs. This involves entering a search query in the format `name="Accessed URL"` and using the `urldecode` function to convert encoded URLs into readable form.

5. **Dashboard for Top Values**: A dashboard named "Top Values" displays 15 panels showing aggregated top values of different fields, which helps in visualizing multiple metrics at once without switching between screens or tables.

6. **DNS Malware Analytics Dashboard**: Logger 6.2 introduces a dashboard for DNS Malware Analytics (DMA), displaying events related to HPE DMA. The content may differ from the initial demo but follows a similar pattern of presenting relevant security alerts and analytics.

7. **Using the Length Operation**: Users can evaluate the length of the requestUrl field and create a new field named urllength at search time, as in `deviceVendor="ABC Software" | eval urllength=len(requestUrl) | sort -urllength`. Sorting results by the calculated lengths helps focus on unusually long URLs, which may indicate malware or compromise.

8. **Insubnet Operator**: This operator searches for events whose addresses fall within variable-length subnet mask (VLSM) networks.

The text also covers the HPE DNS Malware Analytics solution, which detects malware through DNS traffic analysis. This includes a dashboard with events from the Malware Analytics section of HPE DMA, highlighting the top offending source addresses and event names identified by the solution. An integration with Fortinet extends this further: HPE Logger is bundled with the FortiGate Enterprise Firewall for seamless threat monitoring and analysis.

In summary, this guide provides step-by-step instructions for using Logger's features for network analysis, event refinement, and data decoding in a cybersecurity context.

Details:

This document outlines the use case demonstration scripts for Hewlett Packard Enterprise's (HPE) Logger product, designed to assist in evaluating and demonstrating its features through various scenarios. The information in the original document is considered confidential and proprietary to HPE, and recipients are required to maintain confidentiality and not disclose it without authorization.

The demonstration scripts show how Logger can be used for purposes such as data analysis, security monitoring, compliance tracking, and more. They include step-by-step instructions for installing a demonstration license, detailed walk-throughs of the use cases, and information about additional functionality such as DNS malware analytics and the FortiGate partnership. The document also covers installing Logger and the ArcSight Event Broker integration within Logger, including how to enable receivers in Logger (Figures 1-4), and how to set up a demonstration license for ADP 2.0 (the ArcSight Data Platform, which includes ArcMC 2.0 and Logger). It then gives an overview of each use case, from analyzing HTTPS traffic and device-independent login failures to compliance checks and IT operations optimization, and mentions planned updates for scheduled lookups in Logger, such as URL analysis, URL decoding, and a dashboard displaying information across 15 tables. The pricing estimates provided are valid for 30 days after submission of the proposal, with specific terms such as the definition of "solution" and the interpretation of "partnership" called out; queries about the notice should go to local sales representatives.

The licensing section describes how to install an ADP license for Logger. The process involves setting up ArcMC as the central licensing server and applying two licenses within it: a base license for the Logger system and a capacity uplift license applied in ArcMC itself. Once these licenses are in place, all devices are managed under ADP licensing, and users can proceed to use Lookup Files for static correlation and event enrichment within their environment.

The next section demonstrates how to use Logger 6.3 to enhance search with Lookup Files, categorization, and dashboard creation within the HPE ArcSight environment, covering how to export search results, create new Lookup Files from them, and use categorization tools to analyze events. Key points:

1. **Enhancing Search with Color-Coded Fields**: The Logger interface uses color-coded shading to indicate indexing levels (superindexed, indexed, indexable, not indexable) for the various fields. This helps users quickly understand data organization and retrieval efficiency within the system.

2. **Utilizing Lookup Files**: By integrating contextual information from Lookup Files, users can access additional fields that weren't initially visible in default searches. For instance, a search on IP addresses might benefit from including the category type and score of malicious events to refine drill-down results.

3. **Exporting and Creating Lookup Files**: Users can export current search results as CSV files, which are then used to create new Lookup Files for further analysis. This includes exporting the default fields plus contextual information like category type and score, enabling richer data exploration without altering active searches.

4. **Categorization in Logger**: The demonstration emphasizes the use of HPE ArcSight categorization to classify login-failure events uniformly across different device types. All logs are categorized according to a predefined standard maintained by HPE ArcSight, avoiding the discrepancies that can arise with third-party applications not under the vendor's direct control or support.

5. **Dashboard Creation**: Demonstrators can create custom dashboards showing top malicious domains and IPs, with detailed drill-downs available directly from chart slices.

6. **Saving Filters and Searches**: Users can save specific query setups or filters for future use and retrieve them through autocomplete in the search interface, so complex queries do not have to be recreated manually each time.

7. **Removing Lookup Files**: At the end of a demonstration, any test Lookup Files should be removed to avoid clutter and keep the system ready for subsequent testing or use cases.

These steps, together with visual aids such as the autocomplete options in the search box, show how Logger 6.3's features improve data analysis efficiency within an ArcSight environment focused on cybersecurity event management. The document then moves on to analyzing failed logins and HTTPS traffic within an organization. Key points include:

1. **Analyzing Failed Logins**:

  • Use the ArcSight filter called "Demo Failed Device independent categorization" to search for failed login events across various devices and types, including firewalls, VPNs, Unix systems, switches, Windows hosts, mainframes, etc.

  • The report automatically categorizes new device types or changes in vendors without requiring modifications.

  • Results are summarized by top deviceProduct type showing the count of failed logins.
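
As a sketch of what the device-independent logic behind such a filter might look like, assuming the standard ArcSight categorization fields for failed authentications (the shipped demo filter may be defined differently):

```bash
categoryBehavior = "/Authentication/Verify" categoryOutcome = "/Failure" | top deviceProduct
```

Because the query keys on categorization fields rather than vendor-specific strings, new device types and vendors are picked up automatically.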

2. **Analyzing HTTPS Traffic**:

  • Use basic Logger search functions to analyze traffic based on destinationPort=443, which includes HTTPS traffic.

  • Utilize super indexed fields like destinationPort for quick data retrieval and analysis.

  • The Security field set selects the security-relevant columns to display, while the Field Summary provides a statistical overview of the analyzed events.
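
A minimal sketch of the basic search (destinationPort can also be written with its short CEF name, dpt):

```bash
destinationPort = 443
```

Because destinationPort is superindexed, results for this kind of field=value query come back quickly even against large datasets.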

3. **Reporting**:

  • Use Logger reports to create custom dashboards showing regular updates, such as SANS Top 5 alerts or bandwidth usage by source.

  • Modify any report to adjust time periods or content according to specific needs.

4. **Data Analysis and Search Functionality**:

  • Utilize search functionalities like charts, top, rare, and tail for analyzing traffic patterns and detecting suspicious activities.

  • Implement queries efficiently using indexed fields (e.g., dpt=443) for quick result retrieval against large datasets.
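
Purely illustrative pipelines combining these operators (the filters and fields follow examples used elsewhere in this guide; top, rare, tail, and chart are Logger search operators):

```bash
dpt = 443 | top deviceVendor
dpt = 443 | rare sourceAddress
dpt = 443 | tail 20
dpt = 443 | chart _count by sourceAddress
```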

5. **Field Summary Analysis**:

  • The Field Summary provides a statistical overview of the analyzed events through Logger, helping users quickly understand what data is being processed without prior knowledge of specific details.

This section demonstrates how to use HPE Logger for auditing and security analysis tasks such as monitoring failed logins and HTTPS traffic within an organization's network. It then walks through detecting the least common events: switching an operation from top to tail, using the rare command to display less frequent events, summing the bytesIn and bytesOut fields across devices, and using superindexing for fast search over large datasets.

**Analyzing Least Common Events:** Logger lets users switch between operations (such as changing top to tail) to view event details from a different angle. This helps identify less common yet suspicious activity on destinationPort 443, which is associated with HTTPS traffic. The categoryOutcome and deviceVendor fields are also used to assess whether an event might indicate suspicious activity.

**Drill Down into Traffic:** The Logger interface allows detailed investigation of denied or allowed events through a dashboard. This involves loading specific filters, such as Demo_https_tail, which shows the last N lines of results related to port 443 activity.

**Traffic Analysis and Summarization:** For traffic analysis, simple commands such as sum(bytesIn) and sum(bytesOut), grouped by deviceProduct and source/destination addresses, help identify unusual data-flow patterns between specific devices. The results can be compared over different time intervals to establish whether the observed activity is normal.
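
As a concrete illustration of the summarization step above, a minimal sketch, assuming the chart operator accepts multiple aggregates (the port filter and grouping field are illustrative):

```bash
dpt = 443 | chart sum(bytesIn), sum(bytesOut) by deviceProduct
```
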
**Event Normalization and Categorization:** Logger automatically normalizes and categorizes traffic to specified ports (such as port 443) through product configuration, allowing quick analysis of the network traffic generated within a short period. This simplifies complex data by assigning it to defined categories based on characteristics such as destinationPort.

**Searching for Rare Events:** When dealing with extensive datasets, Logger's superindexing capability makes searching for rare events efficient by quickly determining whether specific values exist in vast data sets. For instance, a search can be run across all events in the SOX storage group to find rare events tied to specific source IP addresses.

In summary, Logger provides a suite of tools for analyzing network traffic, detecting anomalies, and investigating potential security incidents: top/tail analysis, rare event detection, byte sum calculations, and superindexing for large datasets.

The next part of the demonstration covers Logger Super Indexing for security and forensics purposes. Key points include:

1. **Logger Super Indexing Speed**: With Super Indexing, events can be found very quickly; a needle-in-the-haystack search should return one event within about 10 seconds. This is achieved through scanning rates of up to tens of millions of events per second in a controlled environment like a virtual machine, and up to hundreds of millions on production servers.

2. **Performance with Non-Existent Search Terms**: The system efficiently handles searches for terms that do not exist in the event set, returning "NO EVENTS" within a few seconds.

3. **Custom Login Banner**: The demonstration includes a custom login banner in Logger that displays company policies; the banner is optional and customizable.

4. **Dashboard Functions**: Logger ships multiple dashboards for different purposes: security incidents (panels showing failed logins by user and top destination ports), network operations (NetFlow data), compliance (configuration changes and other tracked activity), and others such as Microsoft Activity, Fortigate logs, and TippingPoint analytics.

5. **Role-Based Access Control**: Dashboards are accessible based on user role, ensuring users only see information relevant to their areas of interest.

6. **Dashboard Customization**: The demonstration covers customizing dashboard panels by changing formats (column, bar, pie) and adjusting how many top entries are displayed.

7. **Monitoring and Storage Usage**: The Monitor dashboard provides a summary view of activity within Logger, including network traffic in and out as well as storage usage.

Overall, Logger Super Indexing supports security analysis, incident response, and compliance monitoring through customizable, role-based dashboards.

The next scenario involves an aerospace company that discovers one of its users, who recently left the company, sent confidential company information to China. The Logger tool is used to investigate the incident step by step:

1. **Search Interface**: Access the search page by selecting "Analyze" and then "Search" from the main menu. The interface resembles Google's search engine: users type a search term (here, "dgchung") directly into a simple text box.

2. **Real-Time Search**: Set the time range to "Last Hour" for the most recent activity related to the user. The unstructured search finds occurrences of the string "dgchung" anywhere in the logs.

3. **Device Vendor Summary**: Clicking "deviceVendor" under Field Summary shows where activity involving "dgchung" occurs by vendor, including ORACLE, Microsoft, and Vontu, and reveals events with China (CN) as a destination country.

4. **Advanced Search Interface**: To refine the search, click "Advanced", which opens structured search options. Under "Name", select "sourceHostName"; using the Contains operator, type "finance" in the filter field to narrow the search to servers with "finance" in their hostname.

5. **Graphical Representation**: Use the Common Conditions Editor and Color Block View to visualize the query. All events related to "dgchung" on finance servers are now highlighted.

6. **Adding Search Conditions**: Add terms like "ftp", or modify the search string with logical operators and the pipe (|) command, to narrow the results further by criteria such as event type or destination country.
In this example, adding "ftp" does not yield any hits at first, but broadening the query with OR conditions reveals events with China as the destination country, pointing to suspicious data-transfer activity.

7. **Exporting Results**: Use Logger's export options to save local copies or generate reports in various formats (e.g., PDF, CSV) for further analysis or for sharing with management.

8. **Reporting and Dashboards**: Use the reporting and dashboard features to present findings effectively, including custom report groups such as a Favorites group containing all demonstration reports. Choosing "Reports" and then "Dashboard" visualizes the data on a security dashboard that can be shared with management or stakeholders.

This process shows how Logger is used in digital forensics to investigate potential data breaches: cross-referencing user activity against system logs, identifying anomalies, and narrowing search parameters with logical operators and specific keywords.

The regulatory compliance section explains how to access the demo reports, customize parameters for specific queries, and use the report icons to export or schedule reports in formats such as Adobe PDF, MS Word, MS Excel, and CSV. It also covers creating custom login banners within Logger, which is optional but can be used to display company policies related to compliance. For regulatory compliance use cases, it suggests reviewing the PCI Compliance Insight Package and its drill-down reports and dashboards, and highlights the role-based access control that ensures users only see data appropriate to their roles. It then discusses generating ad hoc or scheduled compliance reports, for example on default account usage, a common best practice. Logger reports have drill-down capability, and users can build their own reports with this feature; for real-time compliance alerts, preconfigured alerts can be adjusted to user needs or preferences.

The storage section outlines configuring storage groups in Logger to manage system logs across many devices, emphasizing how easily different retention periods can be set for different log types to satisfy PCI or other requirements. Alerts can be configured on specific conditions, and role-based access control ensures that only authorized personnel can modify these settings. A related use case has an IT Ops professional investigating a web server issue by searching the logs for relevant events, analyzing fields such as "Webserver" and adding terms such as firewall configurations to refine the search. Logger presents the categorized events in a user-friendly format, making them accessible even to team members who do not know every technical detail. Together, these sections cover setting up logging for compliance and using the same tools during incident response to diagnose and fix issues with web servers and other network components.
The next set of scenarios analyzes network configurations, server issues, and application logs with Logger: troubleshooting a webserver issue during business hours, monitoring real-time events from an application development log, and tracking sources communicating with Microsoft SQL Servers through NetFlow data.

For the webserver issue, check device configuration events in the report explorer by selecting 'Device Configuration Events' under Report Explorer in the main Logger menu. Adjust filters if necessary and run a custom report focusing on relevant details such as the destination host name (e.g., dmzfw1). To analyze issues in real time, use the Live Event Viewer to monitor web server events containing specific search terms such as 'web1.arcnet.com'.

For application development logs, Logger's ability to handle multi-line log files is highlighted; this is useful for parsing Ruby application logs and other complex log formats used in development environments. Specific events such as ERROR messages can be isolated by filtering on the deviceSeverity field within the search results.

For the NetFlow use case, Logger identifies sources talking to Microsoft SQL Servers on port 1433 with the simple search query 'netflow dpt=1433'. Search terms are automatically ANDed together, and the query can be expanded to include all fields in the NetFlow field set for detailed analysis. In these flow events the byte values are predominantly 62 bytes, which may be significant to network engineers. Appending an aggregation, as in `netflow dpt = 1433 | top sourceAddress`, charts the most popular sources of traffic to the SQL servers, and the chart refreshes automatically if auto-update is configured. Replacing "top" with "rare" flips the view to the least frequent (bottom) source addresses; that one-word substitution can significantly change the focus and depth of an investigation.
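
The two variants side by side, as quoted above (only the aggregation operator changes):

```bash
netflow dpt = 1433 | top sourceAddress
netflow dpt = 1433 | rare sourceAddress
```
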
**Chart Settings and Dynamic Charting:** Configure chart settings by clicking "Chart Settings", choosing an option such as Pie, and applying it; hovering over a pie slice shows detail including percentages and event counts. Logger's dynamic charting is flexible and refreshes automatically based on the configured settings.

**Exporting Results:** Logger offers multiple export options for reporting and charting, such as saving data locally or exporting directly to formats like PDF or CSV.

**Port Analysis with NetFlow Data:** Running a search like `netflow | chart _count by dpt` visualizes the most popular destination ports in an environment, which is useful for analyzing how network traffic is distributed across ports.

**Raw Events and Regex Use Case:** For issues related to network performance or latency greater than 1 ms (indicated by round-trip averages, RTA), raw event logs are analyzed directly. Logger's Regex Helper simplifies extracting and parsing specific fields from RAW event data, sharpening the accuracy and focus of the investigation.

**Additional Use Cases:** Further scenarios include exploring failed network connections, or events where a performance monitor (such as Nagios) alerts on packet loss, which can be isolated with specific keywords. Together these use cases show how Logger analyzes complex network data flows, performs real-time monitoring, and troubleshoots performance issues by extracting meaningful insight from raw event logs.

The next use case analyzes raw login events with Logger's "Discover Fields" capability to identify and highlight failed logins: entering keywords, searching through RAW events, using Discover Fields to uncover fields, and visualizing the results with a chart of top usernames. It also demonstrates finding and masking credit card numbers in raw event logs using the Regex Helper. The process involves:

1. Navigating to the "Analyze, Search" page in HPE Logger.
2. Entering keywords such as "login failed" into the search bar to filter events.
3. Enabling all fields and using the Discover Fields function to analyze RAW events for relevant data.
4. Expanding an event in the Events panel to view the unparsed log, hiding or expanding the Field Summary as needed.
5. Clicking a discovered field such as "Username" to see Logger's analysis of counts, percentages, and sorting options.
6. Using the real-time chart feature to visualize the top usernames associated with failed logins.
7. Masking credit card numbers by using the Regex Helper to isolate the data, renaming fields if necessary, and preparing reports that do not expose sensitive information.
8. Exporting charts or other findings in PDF format for distribution.

This use case also touches on more advanced features: higher-level event grouping (TRANSACTION), real-time file receivers, on-board parsers, and dashboard drill-downs for managing and analyzing machine transactions efficiently.

The following procedure manages Mail Logs File Receivers in HPE Logger, enabling and disabling receivers to control data ingestion and analysis:

1. **Managing Receivers**: Navigate to the "Receivers" section under configuration to find and manage Mail Logs File Receivers, which bring in log files for real-time event parsing.

2. **Enabling and Disabling Receivers**:

  • **Enable the Receiver**: Enabling the receiver starts data ingestion after about 10-20 seconds, depending on the size of the logs being processed.

  • **Disable the Receiver**: Once enabled, disabling it stops further log file consumption but does not erase previously ingested events, which can still be viewed in the dashboard for analysis.

3. **Receiver Details and Parsing**:

  • The receiver details include information about the source of the file (location), its name, actions to take with the file upon reading (e.g., rename, persist, delete), and timestamp formats.

  • Configuration settings like "Mail_Log_Files" parser are used to parse log files into structured data fields such as time, hostname, process name, subprocess name, PID, QueueID, etc.

4. **Summary Page and Transaction Operations**:

  • On the summary page, the fields displayed from the parsed logs can be changed by selecting "Mail" under Field Set, showing relevant information such as time and hostname; additional fields can be expanded with a click.

  • The transaction operation groups events with common attributes (like QueueID) into transactions, assigning each group a unique transaction ID and providing counts of events within each transaction. This feature allows for detailed analysis and grouping based on specific criteria.
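
A minimal sketch of this grouping, assuming a simple keyword filter selects the mail events (the `mail` term is illustrative; the demo content may filter differently):

```bash
mail | transaction QueueID
```

Events sharing a QueueID are grouped under a common transaction ID, with a count of events per transaction.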

5. **Exporting and Analyzing Data**:

  • Logger supports the export and import of various system content types like alerts, dashboards, filters, parsers, etc., facilitating easy sharing and collaboration among users.

  • The Mail Logs receiver can be analyzed through predefined charts (e.g., subprocesses, mail counts) and dynamically drilled down into event details for deeper investigation.

6. **Scheduled Updates**: HPE Logger can schedule updates for all types of content within the system, ensuring that logs and parsed data are continually refreshed and available for analysis as needed.

These steps cover setting up and using Mail Logs File Receivers in HPE Logger, from data ingestion through the analytical capabilities built on top of it.

The next section loads tables of data from external sources into Logger, focusing on keeping a table of Tor Exit Nodes up to date:

1. **Loading Data**: Tables of data from external sources are loaded into Logger, including static correlation tables that can be scheduled for regular refresh. The update process itself runs outside of Logger: scripts on the Logger Demo VM fetch and format new data, such as the list of Tor Exit Nodes.

2. **Updating Data**: For demonstration purposes, a script is provided that can be run on the Logger demo VM to refresh the Lookup1.tornodes.csv file with fresh data. The script also adds a specific IP address (58.99.128.47) for demonstration purposes only.

3. **Using the Script**: If the Logger demo VM has internet access, run the provided Python script from the root directory of the Logger Demo VM:

```bash
cd /root/
python tor.py.for.logger.demo.py
```

4. **Updating the Lookup Table**: After running the script, the Lookup1.tornodes.csv file is updated with new data, which can be viewed and managed in Logger:

  • Navigate to the specific lookup table named "torexitnodes".

  • Edit the lookup settings to schedule regular updates of the file from disk.

5. **Using the Lookup Function**: The Logger Lookup feature allows for comparison and contrast of tables of values from external sources with searches within Logger:

  • Use the search box to enter "Lookup" or "torexitnodes".

  • Perform lookups using the command `| lookup torexitnodes`.

  • Compare results, add new columns to show details, and examine if there are any matches.
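
A minimal sketch of the lookup join, assuming the match is on event source addresses (the base query is illustrative, and the exact join syntax depends on the column header in Lookup1.tornodes.csv):

```bash
destinationPort = 443 | lookup torexitnodes sourceAddress
```

Rows that match add the table's columns to the results, where they can be displayed next to the event fields.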

6. **Sample Search**: To save time, use the saved search named "Demo Tor Node Lookup" rather than retyping the lookup query.

7. **Using the Length Operation**: The `len()` operation can analyze URLs for indicators of malware or compromise, as in:

```bash
deviceVendor="ABC Software" | eval urllength=len(requestUrl) | sort -urllength
```

This command evaluates the length of the requestUrl field and sorts the results by that length:

  • Use indexed fields to investigate incoming URLs in Logger.

  • Create a new field at search time that is the length of the requestUrl field and name it urllength.

  • Sort the results by the calculated lengths if needed, to focus on unusually long URLs.

8. **Insubnet Operator**: This operator searches for events whose addresses fall within variable-length subnet mask (VLSM) networks. A representative expression, using the subnet formats described below (the field and subnet are illustrative), might look like:

```bash
sourceAddress insubnet "10.0.0.0/21"
```

That completes the walkthrough of loading, updating, and using data from external sources in Logger.

The guide then recaps the main search features with concrete examples:

1. **Querying IP Addresses**: IP addresses can be queried in different formats, such as CIDR notation ("10.0.0.0/18" or "10.0.0.0/21"), wildcard expressions ("10.0.0.*"), and specific address ranges ("10.0.0.0-10.10.0.0"). Select a time frame (Last 2 hours) and click specific IP addresses in the displayed chart to investigate further.

2. **Network Search**: After specifying an IP subnet, explore different network configurations by adjusting the subnet size using wildcard expressions or address ranges; this helps identify the networks relevant to the query.

3. **Multi-Select Functionality**: Select multiple terms (e.g., "IntruShield" from deviceProduct and "Compromise" from categorySignificance) to refine search results and focus on specific events of interest.

4. **URL Decoding**: Logger 6.1 and later can decode encoded URLs: enter a search query in the format `name="Accessed URL"` and apply the `urldecode` function to convert the encoded URLs into readable form.

5. **Dashboard for Top Values**: The "Top Values" dashboard displays 15 panels showing aggregated top values of different fields, visualizing multiple metrics at once without switching between screens or tables.

6. **DNS Malware Analytics Dashboard**: Logger 6.2 adds a dashboard for DNS Malware Analytics (DMA), displaying events related to HPE DMA. The content may differ from the initial demo but follows the same pattern of presenting relevant security alerts and analytics.

The HPE DNS Malware Analytics solution is a new method for detecting malware in enterprises by analyzing DNS traffic. Its dashboard shows events from the Malware Analytics section of HPE DMA and can identify malware that escaped detection by other solutions; because it focuses solely on DNS traffic, it often uncovers threats that traditional methods miss. The dashboard highlights the top offending source addresses and the event names identified by the solution, and clicking a non-blank source address reveals more detail about the type of malware and the potentially suspicious sources. HPE DMA feeds Logger through a SmartConnector whose agentType field is cloud_cef; the connector talks to a REST interface using an authorization token to retrieve Common Event Format (CEF) events, enabling continuous monitoring and analysis of DNS traffic for signs of malware.
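
A minimal sketch for isolating the DMA feed in a search, assuming agentType is exposed as a searchable field with the value noted above:

```bash
agentType = "cloud_cef"
```
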
Additionally, HPE has partnered with Fortinet to strengthen security through integration of their products. The partnership bundles HPE Logger with the FortiGate Enterprise Firewall, enabling the collection, storage, identification, analysis, and mitigation of complex threats. In the Logger demo, selecting Analyze -> Search from the main menu lets users filter events by deviceVendor="Fortinet"; further filtering by deviceAction="blocked" reveals details about blocked malicious files, while the FortiGate dashboard provides an overview of top firewall actions and supports drill-down into specific incidents.
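
A minimal sketch of that filter chain, using exactly the fields quoted above (terms are ANDed, so this returns only Fortinet events whose action was blocked):

```bash
deviceVendor = "Fortinet" deviceAction = "blocked"
```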

Disclaimer:
The content in this post is for informational and educational purposes only. It may reference technologies, configurations, or products that are outdated or no longer supported. If you have any comments or feedback, kindly leave a message and it will be responded to.
