
ArcSight Logger Demo Script 1

  • Writer: Pavan Raja
  • Apr 8, 2025
  • 39 min read

Summary:

The task described here uses Logger, a log management and analysis tool, to run a specific report, visualize the data effectively, and optionally work with SecureData-protected events. The steps are as follows:

**Step 1: Navigate to Reports and click Explorer.** Log in to your Logger system, locate the Reports section in the main menu, and select Explorer.

**Step 2: Expand Demo Reports and right-click "Demo - Device Type Counts."** In the Explorer window, expand the Demo Reports folder if it is not already expanded, then right-click the report named "Demo - Device Type Counts."

**Step 3: Choose "Quick Run with default options," make sure the Scan Limit is set to 0, and click Apply.** Quick Run launches the report without additional customization. In the settings dialog that appears, set the Scan Limit to 0 so the report scans all data related to the device types, then click Apply to run it.

**Step 4: Observe the results in the tabs labeled Chart and Chart1.** These tabs contain graphical representations of your data based on the parameters used for the run; check how each chart represents the different aspects of your device types given the settings and filters in the report.

**Step 5: Enable Edit Mode from the upper-right corner.** Click the Edit Mode control (usually a pencil icon) to gain more control over the report layout, including adding visualizations or modifying existing ones without running extra searches.

**Step 6: Add more visualizations with the plus sign.** Use the plus sign in the toolbar above the main chart to add charts such as line graphs, bar charts, or pie charts, and configure each new visualization with the fields and settings that best represent the trends or groupings you want to highlight.

**Step 7: (Optional) Enable Logger to decrypt SecureData events inline.** Go to the Configuration section within Logger and open "Talking Points." This is where Logger's interaction with external systems, such as Voltage's connectors, is configured for decryption. Provide the necessary credentials, specify which fields need to be decrypted, and make sure the settings follow your organization's policies. Verify the setup by searching on encrypted field values and confirming they are automatically decrypted in Logger.

**Step 8: Run the provided query in the search pane: `deviceProduct = Oracle`.** Open the search pane, enter the query (or a similar one relevant to your analysis, such as specific device models or vendor names), and confirm the results display correctly with the decryption settings applied in the previous step.

**Step 9: Configure Logger for inline decryption using your Voltage server credentials and the fields you wish to encrypt.** Set up encryption configurations that align with your data-handling policies (for example, Password-Based Encryption), assign the fields that need to be encrypted based on sensitivity or regulations such as GDPR or HIPAA, run a test search to confirm the selected fields are encrypted without loss of data integrity, and keep these policies up to date as technology and regulatory requirements change.

By following these steps, you can use Logger for log management and analysis tasks such as running reports on device types, visualizing data through charts and graphs, configuring encryption settings for secure data handling, and performing searches to extract meaningful insights from your logged data.
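Logger builds these charts inside the report editor itself; purely as an illustration of the kind of output the Chart tabs show, here is a minimal Python sketch using matplotlib with made-up device-type counts (the numbers and labels are not from the demo data):

```python
import matplotlib.pyplot as plt

# Made-up device-type counts, standing in for "Demo - Device Type Counts" output.
device_counts = {
    "Firewall": 5200,
    "VPN": 1800,
    "Web Server": 3400,
    "Database": 900,
    "IDS": 1200,
}

labels = list(device_counts.keys())
values = list(device_counts.values())

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Pie-style view, similar to the report's Chart tab.
ax1.pie(values, labels=labels, autopct="%1.0f%%")
ax1.set_title("Events by device type")

# Bar view, similar to adding a second visualization in Edit Mode.
ax2.bar(labels, values)
ax2.set_title("Event counts")
ax2.tick_params(axis="x", rotation=45)

plt.tight_layout()
plt.show()
```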

Details:

The ArcSight Logger 6.7 Demonstration Script, dated February 22, 2019, provides a comprehensive guide to leveraging the tool's functionality to enhance security operations. The script is organized into the following sections, each focusing on a different aspect of threat detection and analysis:

1. **Search for Malicious IP Addresses using Logger Lookups** — using lookup tables in Logger to identify malicious IP addresses during data searches.
2. **ArcSight Categorization** — how categorization simplifies the analysis of device-independent login failures across various devices.
3. **Ports and IPs: Analyzing HTTPS Traffic; Who Talks to My Web Servers?** — using port and IP analysis to understand HTTPS traffic and identify who communicates with web servers.
4. **Rare Events: Searching for the Needle in the Haystack FAST** — strategies for quickly finding rare events or anomalies within large datasets.
5. **User Investigation: Insider Threat, Data Leak Discovery** — investigating insider threats and data leakage by analyzing user activities.
6. **Compliance: Auditor Role, Read Only, Limited Access** — configuring Logger for compliance purposes, such as auditor roles with restricted access.
7. **IT Ops Use Case: Web Server Down, Unauthorized Changes** — IT operations use cases covering web server outages and unauthorized modifications.
8. **Application Development Use Case: Multi Line, App Dev** — using Logger for multi-line log analysis in application development.
9. **NetFlow Use Case: Who is Talking to My SQL Servers** — using NetFlow data to identify network traffic to SQL servers.
10. **Raw Events and Regex Use Case** — using raw events and regular expressions for advanced filtering and pattern matching.
11. **Finding failed logins from RAW events ("Discover Fields")** — using Discover Fields to identify failed login attempts in raw event data.
12. **Finding and masking credit card numbers** — detecting and masking credit card numbers with Logger.
13. **Analyzing Machine Transactions** — analyzing transactions processed by machines, such as financial or other transactional data.
14. **Scheduled Updates for Logger LOOKUP** — scheduling updates for lookup tables so their contents stay accurate and current.
15. **EVAL: URL Analysis – Length of a Field** — analyzing the length of URL fields.
16. **Smart Network Searches: Using Logger's "insubnet" Operator** — performing subnet-aware searches with the insubnet operator.
17. **EVAL: Decoding URLs** — decoding URLs to understand the structure of web addresses seen in events.
18. **Dashboard for 15 Tables** — a dashboard that displays data from 15 different tables for comprehensive monitoring and analysis.
19. **FortiGate – Logger partnership** — how FortiGate and Logger work together to enhance network security and management.
20. **Dynamic Analysis using Smart Reports, Sparklines** — dynamic analysis through smart reports and sparkline visualizations.
21. **Device Types via "Tree Map" and "Packed Circles" Visualizations** — visualizing device types with tree-map and packed-circles charts.
22. **Secure Data** — handling data securely within the Logger environment, with an emphasis on privacy and security protocols.

The script closes with a Revision History listing changes from previous versions, Micro Focus trademark information, and company details. Taken together, it is a practical guide to scenarios in which ArcSight Logger 6.7 can improve security posture, compliance, and operational efficiency in an IT environment.

The first use case demonstrates searching for malicious IP addresses with Logger Lookup Files. Lookup Files support static correlation and enrich events with contextual information — for example, recording why an IP address is on a list of known malicious IPs (hosting malware, spam, and so on) and assigning a score that indicates its activity level. The same mechanism supports geo-tagging and asset tagging, among other things. The demonstration uses a Lookup File containing known malicious IP addresses, and the goal is to find out whether any of those addresses have communicated with the organization's web servers, specifically those running on port 443. The "Take me to..." feature is used throughout for quick access to specific functionality within ArcSight Logger.

The walkthrough proceeds as follows:

1. **Accessing the Logger**: Navigate to any Logger instance, which will be used to search and analyze the network data logs.
2. **Selecting Lookup Files**: Use the navigation feature that finds functionality by name — start typing the name of the feature you want (here, Lookup Files).
3. **Importing Data into a Lookup Table**: The malicious IP addresses have already been imported from a file into a Lookup Table named Malicious_Addresses, which contains the IP address, a category type, and a score. Click Malicious_Addresses to open it.
4. **Previewing the Table**: Clicking the table name displays a preview of its fields and rows — in this case more than 100,000 entries.
5. **Performing a Search**: Rather than entering each address manually, use the lookup operator to narrow matching events to those involving malicious IP addresses; the demo starts from a destinationPort=443 search.
6. **Using the Lookup Operator**: When running the port 443 query, the lookup operator correlates source IP addresses in events with entries in the Malicious_Addresses Lookup Table, filtering thousands of matching events down to the few that contain malicious IPs.
7. **Applying Saved Filters**: If needed, saved filters can be loaded here for more specific searches or comparisons.
8. **Interpreting Search Results**: After the lookup runs, the results narrow from over 100,000 entries to just a few dozen; column coloring helps indicate the relevance or status of each entry visually.
9. **Logger Interface Enhancements**: Column headers carry visual markings (colors) that indicate various conditions, helping users interpret and manage data more effectively.

The header colors represent different levels of indexing — the darker the green, the higher the level — across super-indexed, indexed, indexable, and not-indexable fields. In this example, destinationPort is super-indexed because of its frequent use in searches, such as checking for a rarely found or non-existent port. Lookup Files then provide contextual data to enrich the events: fields such as deviceProduct and deviceVendor appear alongside summarized fields from the Lookup File, which helps drill down to the specific events of interest. Note that a regression bug in Logger (LOG-17215) currently prevents charting on IP Address fields that come from Lookup tables; as a workaround until the bug is fixed, remove the ip_Malici term from the search and run the query as specified, after which the functionality will return to normal.

To create a new Lookup File based on the search results and subset the active attackers, follow these steps:

1. **Export Search Results**: Click Export from your search results page, then click Download results and choose a location to save the export file. This downloads the data in CSV format.

2. **Create New Lookup File**:

  • Go to "Lookup Files".

  • Click "Add" next to the "Active_Attackers" Lookup File.

  • Browse to and select the saved export file using the "Choose File…" option, then click "Open".

  • Confirm the selection by clicking "Save", naming it as "Active_Attackers" if prompted, and finally click "Done".

3. **Use the New Lookup File**:

  • Navigate back to your search results page or dashboard.

  • Adjust the chart layout if needed (e.g., change it to a donut chart).

  • Drill down into details by scrolling over the pie chart slices, such as recognizing SQL injection attacks based on the categorization provided.

4. **Dashboard Navigation**:

  • Go to "Dashboards" and select "Top Malicious Activity".

  • Click on any slice in the Top_Malicious IP by Category pie chart to drill into detailed information about malicious activities.
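
Conceptually, the lookup operator used in this workflow behaves like a join between the event stream and the Malicious_Addresses table. The following minimal Python sketch illustrates that idea outside of Logger — the file and column names are hypothetical, and this is not how Logger itself implements lookups:

```python
import csv

# Load the lookup table (e.g., an export of Malicious_Addresses) into a dict
# keyed by IP address. Assumed columns: ip, category, score (hypothetical names).
with open("malicious_addresses.csv", newline="") as f:
    lookup = {row["ip"]: row for row in csv.DictReader(f)}

# Scan exported events (hypothetical columns: sourceAddress, destinationPort)
# and keep only HTTPS events whose source address appears in the lookup table,
# enriching each hit with the category and score from the table.
hits = []
with open("events.csv", newline="") as f:
    for event in csv.DictReader(f):
        match = lookup.get(event["sourceAddress"])
        if event["destinationPort"] == "443" and match is not None:
            hits.append({**event,
                         "ip_category": match["category"],
                         "ip_score": match["score"]})

print(f"{len(hits)} of the port-443 events involve known malicious IPs")
for h in hits[:10]:
    print(h["sourceAddress"], h["ip_category"], h["ip_score"])
```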

By following these steps, you can efficiently create a new Lookup File from search results and use it within your system for further analysis and visualization. The passage outlines how to use saved searches and filters in Logger for various tasks such as opening saved searches using autocomplete in the search box, categorizing failed logins across different devices independently of their operating systems (like Windows, UNIX, or Mainframe), and utilizing ArcSight Categorization for consistent event profiling. It also discusses enhancing events with contextual data through lookup files like Active_Attackers to provide a richer analysis experience. The passage concludes by emphasizing the value of device-independent categorization in organizations dealing with diverse device types, such as when acquiring companies with different vendor devices. The summary highlights the creation of a Logger report on failed logins from various devices including firewalls, VPNs, Unix, switches, Windows hosts, mainframes, etc. This report is designed to audit failed login events and is beneficial for Security and Operations teams. The dashboard in Logger provides regular updates on top alerts from IDS and bandwidth usage by source. The specific report, "Top Failed Logins by User," uses ArcSight categorization to filter events where the behavior falls under access, authentication, or authorization categories with an outcome of failure. This setup allows for easy expansion as new devices are added; Logger automatically categorizes these new events without requiring modifications to existing reports. The report provides a chart of top failed logins and a table summarizing counts across different device types using default options in the Logger platform. This document describes how to use a tool called Logger for investigating suspicious activities by searching through large volumes of data, identifying patterns, and detecting malicious behavior. To start, navigate to the search page within Logger and run a simple search using the condition `destinationPort=443`. This will reveal all servers running HTTPS (port 443). You can adjust the time frame for the search from now to the last 24 hours, or use shorthand notation like `dpt=443` for less typing. The results of this search are displayed in a table that includes numerous columns such as Security fields and device details including source and destination addresses. To make sense of all these details quickly, Logger provides a Field Summary which gives an overview of the type of events being detected by clicking on 'Security' in the Fields dropdown menu. This summary helps users understand what types of events are being monitored, who is involved (via source and destination addresses), and from which devices or products the data is coming. To further analyze the search results, you can use the "top" command to summarize and group events based on relevant fields like source and destination address, device name, etc. This helps in identifying common patterns of suspicious activity by aggregating the most frequent events and grouping them according to specific criteria such as product type, addresses, or category outcome. To apply these steps, follow the commands: Click on Analyze, Search; enter `destinationPort=443`; adjust search time if necessary; click Go! 
(use shorthand notation if desired); view results in a table with Security fields and device details; use Field Summary for an overview; and finally, to group events, use the "top" command by loading a saved filter and selecting relevant fields. The article discusses various methods for analyzing network traffic using tools like Logger, which allows users to drill down into event details through charts and graphs. It emphasizes the importance of monitoring least common events as they can indicate unusual activities that might not be expected. To detect these low-frequency events, one can use commands such as "top," "tail," or "rare" after specifying relevant conditions like destination port (443) in this case. These commands help in displaying the most frequent or specific types of network traffic occurrences, allowing for easier detection of suspicious activities that might not be part of regular operations. This document outlines the use of Logger within ArcSight for analyzing network traffic and detecting unusual activity, specifically focusing on port 443. By utilizing Logger's capabilities for normalization and categorization, it is possible to sum bytesIn and bytesOut across various devices or product lines. The query involves source (src) and destination (dst) addresses, with the goal of identifying abnormal network traffic patterns specific to each device. The method includes running this query over different time intervals to assess whether such behavior is normal over time. ArcSight Logger's ability to normalize and categorize traffic against port 443 is highlighted as a significant advantage in event analysis. The process involves grouping and categorizing the network traffic data from multiple products, focusing on bytes transferred through port 443, allowing for quick analysis of generated network traffic within a short period. Furthermore, this document discusses how Logger can be used to identify rare events in extensive datasets, utilizing its Super Indexing feature. This process involves using the Logger interface to search for specific data points that are uncommon or difficult to find within large amounts of information. By leveraging super-indexed fields, performance is significantly improved when searching for such rare occurrences, making it easier and quicker to determine whether a sought value exists in vast datasets. The text discusses the functionality of Super Indexing in a logging system (Logger) which automatically maintains indexes for source IP Address, Hostname, and Username fields. It explains how to perform a search within this system by selecting Analyze from the main Logger menu, specifying a particular source IP address and searching across all events stored in a seven-year storage group. The process involves loading a saved filter called "Demo rare event" and clicking Go! to initiate the search, which can return millions of events per second depending on the setup. The effectiveness of Super Indexing is demonstrated through examples where it quickly finds an existing event (e.g., a login) from seven years of data or returns "NO EVENTS" for a non-existent IP address within a short period. This feature allows for efficient searching and analysis even in large datasets, making it useful for incident response, security, and forensics tasks like investigating insider threats or data leaks. The text discusses how people using a product called "Logger" desire custom company banners to display their policy information. These banners are optional for users. 
The demonstration is being customized to show this feature, which allows users to provide a visual description of their policies while still maintaining user acknowledgment as an option. The demonstration then moves on to explain how to interact with the dashboard within Logger. It starts by explaining that clicking on "Dashboards" will take you to a Security dashboard, which includes several panels displaying information of interest such as failed logins and top destination ports. The text provides instructions for refreshing any non-displaying panel and details interactions with specific elements like hovering over donut chart slices in the NetFlow Top Destination Ports panel. It also explains how to access detailed event information by clicking "View on Search Page" when interested in more specifics about events shown, and gives options for customizing the display format of each panel from column, bar, pie, area, line, stacked column, or stacked bar charts. Finally, it discusses changing the number of displayed top entries per panel as well as navigating through different pre-existing dashboards available in Logger based on user interest. The provided text describes a system used for monitoring network activities, including logging of events such as storage usage, intrusions, and configuration changes. It mentions various dashboards available, like Compliance, Network Operations, Event Count (System), Intrusion and Configuration Events, among others. The dashboard called Monitor provides insights into the activity within a Logger system, showing details about data flow in and out, as well as storage usage. The Logger has a simple interface for searching events using keywords such as "dgchung" to track specific user activities across the network logs. This setup is applicable to organizations where real-time monitoring of employee activities might be necessary for compliance and security purposes. The provided text provides a step-by-step guide on how to utilize a system or software tool that allows users to search and analyze data related to specific strings, such as "dgchung," across various vendors like ORACLE, Microsoft, and Vontu. Here's a summarized version of the key points: 1. **Initial Setup**: The user is presented with an overview of where in the system the string "dgchung" occurs by vendor. If this feature isn't immediately visible, they are instructed to click on "Update Now". 2. **Field Summary Overview**: Users can view a Field Summary automatically built for them. They have the option to customize and hide it if needed or adjust the settings to show only relevant fields. 3. **Customizing View**: The user has the ability to change the field set by clicking on the "Security" field set from the drop-down menu, allowing for a tailored view that focuses on specific security aspects. 4. **Advanced Search Functionality**: For more detailed searches, users can navigate to the Advanced section and perform structured searches using various operators like Contains, Starts With, Ends With, =, or != under the Name category with terms such as sourceHostName. This is particularly useful for focusing on servers containing specific keywords in their hostname, such as "finance". 5. **Visual Aids**: To enhance understanding, users can choose visual representations like Color Block View provided by the Common Conditions Editor, which helps to visually interpret complex queries. 6. 
**Search Modifications**: Users can add additional search conditions easily (e.g., adding "ftp" to refine their search results), but it's noted that in some cases, no results may be expected or desired based on typical investigative practices where the exact content of a search isn't known beforehand. This guide is designed to assist users in efficiently searching and analyzing data related to specific strings within large datasets across multiple vendors, adjusting views according to their needs, and using advanced search features for targeted analysis. The provided text describes the process of using a search tool called "Logger" for investigating an individual's activities based on their username, possibly related to FTP events and destination country (China). Here’s a summary of the steps outlined in the text: 1. **Initial Query Setup**:

  • Combine the username variants dgchung and dchung using the OR operator (for example, dgchung OR dchung).

  • Click "Go!" to execute the search.

2. **Reviewing Search Results**:

  • Scroll through the search results to find relevant entries, especially those related to FTP events and China as a destination country.

  • Focus on event names that might indicate activities like creating users, granting roles, clearing audit logs, or sending suspicious articles.

3. **Using Logger Search Helper**:

  • Utilize the "Logger search helper" feature by entering a pipe command (|) to refine the search.

  • Enter a summarizing operator in the prompt the search helper provides, such as `top name`, to list the top event names and see what he has been doing, followed by any other keywords relevant to the investigation.

4. **Graphical Representation**:

  • Use Logger's built-in graphical charting options (e.g., switching from a table view to a pie chart) to summarize his activities visually.

5. **Exporting and Customizing Results**:

  • Use export features in Logger for different formats, including local save, Logger itself, PDF, CSV, various chart types, and adjusting entry numbers displayed on charts.

6. **Accessing Reports and Dashboards**:

  • Navigate to "Reports" then select "Dashboard" from the main menu to view customizable reports that can include URLs.

  • Use the "Dashboard Viewer" feature to access a specific dashboard named "Security Dashboard."

7. **Finalizing with Chart Settings and Exporting**:

  • Adjust chart settings such as result chart type (pie, column, bar) or number of entries displayed based on findings.

  • Export final search results for further analysis or documentation purposes.
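
To make the "filter by user, then summarize" workflow above concrete, here is a small Python sketch that emulates it against a hypothetical CSV export of events (the file name and the sourceUserName and name columns are assumptions, not part of the demo data):

```python
import csv
from collections import Counter

USER_VARIANTS = {"dgchung", "dchung"}  # the OR condition from the search

name_counts = Counter()
# Hypothetical export with sourceUserName and name (event name) columns.
with open("events.csv", newline="") as f:
    for event in csv.DictReader(f):
        if event.get("sourceUserName", "").lower() in USER_VARIANTS:
            name_counts[event.get("name", "(no name)")] += 1

# Rough equivalent of piping to "top name": the most frequent event names.
for event_name, count in name_counts.most_common(10):
    print(f"{count:6d}  {event_name}")

# Export the summary, similar to Logger's CSV export of chart data.
with open("dgchung_top_event_names.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["event_name", "count"])
    writer.writerows(name_counts.most_common())
```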

The text provides a detailed guide on how to use Logger effectively for investigative purposes, including refining search terms, visualizing data through charts, and exporting detailed reports for comprehensive follow-up investigations. To run a report on the user "chung" in Logger, follow these steps: 1. **Navigate to Explorer**: Click on "Explorer" under Navigation on the left side of the Logger interface. This is where you can create and access your own reports for easier management and understanding. 2. **Access Demo Reports**: Under your created groups or directly from the main view, double-click on "Demo Reports." Scroll down to find "User Investigation," then right-click and select "Quick Run with default options." 3. **Modify Report Parameters**: Replace "admin" in the User Name report parameter field with "chung". In the Storage Groups section, choose "Default Storage Group" and click "Run Now." This will generate a report that includes all events for "chung," such as FTP activities, displaying user names along with destination or source activity. 4. **Review Report Formatting**: Look at the icons at the top of the report; Logger allows you to export it in various formats like Adobe PDF, MS Word, MS Excel, and CSV. These reports can be generated on demand or scheduled for automatic delivery to your inbox. 5. **Logging Out**: After completing the necessary review, click "Logout" to log out of Logger before proceeding with compliance use cases. Additionally, you should:

  • Review the PCI Compliance Insight Package and related drill-down reports and dashboards provided by Logger.

  • Note that Logger offers a customizable Login Banner where users can accept the company's policy. This feature is optional and tailored for demonstration purposes to provide a user acknowledgment of policies in place.

The article discusses Logger, a solution designed to address three primary challenges associated with demonstrating regulatory compliance: managing different log retention requirements, automated log review, and real-time alerting for compliance issues. Each user of Logger has access to personalized dashboards where they can view relevant events based on their role (e.g., Auditor). This includes showing PCI compliance events tailored specifically for auditors. Logger features role-based access control, ensuring that users only have visibility into the data and events pertinent to their roles. Navigating through the Logger interface, one can find the Reports section where reports such as Demo Reports are accessible. These reports include PCI Req 2 - PCI and other compliance reports organized by Default Acct Usage Drilldown. The report on Default Account Usage is particularly useful for highlighting instances of default accounts in use within an environment, emphasizing the importance of not using default accounts provided by any vendor's product. The report provides detailed information about which users are utilizing these default accounts, as indicated by a hyperlink to launch a drill-down report displaying results starting with 'sa'. Users can review and interact with the reports through various interfaces, including viewing different formats and scheduling automatic email delivery of reports. Additionally, the article mentions ArcSight's Compliance Insight Packages, specifically tailored for PCI compliance. This package includes top reports recommended by customers to aid in their compliance efforts as per the PCI specification. These reports not only reference relevant specifications but also offer actionable insights that can enhance regulatory compliance practices. The report provided assists in compliance efforts by offering preconfigured alerts for various compliance controls, such as PCI (Payment Card Industry) requirements. This saves time and effort because you don't need to manually create reports from scratch; the report has already done the heavy lifting for PCI compliance. To use this report, follow these steps: 1. Close any new browser windows that opened for the drill-down report by clicking the X on your browser. 2. Go to "Analyze" > "Alerts" from the main Logger menu in your system. Instead of waiting until after running a full scan, you can now set up real-time alerts so you catch behavior as it happens. 3. In the Alerts section, select "Event Fields" and then click "Go!". If needed, adjust the time window to "Last Hour". Logger offers many preconfigured alerts for different compliance controls like PCI. You can modify existing alerts or create new ones according to your needs. The alert here is based on a default account usage scenario. 4. Alerts can be sent to various destinations such as Email, SNMP, Syslog, and the correlation engine Express/ESM. 5. For more configuration options (optional), log in as an admin, go to "Configuration" > "Real Time Alerts", select the name of the active alert, and configure it according to specific conditions. 6. To manage storage groups for different types of logs with varying retention periods, navigate to "Configuration" > "Storage Groups". For PCI compliance, you might need to store logs from PCI devices for at least one year. This is configured under storage groups, making the process straightforward and automated. 
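As an illustration of the kind of condition a "default account usage" alert evaluates, here is a purely hypothetical Python sketch — the account list, field names, and sample events are made up, and real alerts are configured in Logger as described above:

```python
# Default vendor accounts that should never appear in normal use
# (illustrative list only).
DEFAULT_ACCOUNTS = {"sa", "admin", "administrator", "root", "guest", "oracle"}

def check_default_account_usage(event):
    """Return an alert message if the event uses a default account, else None."""
    user = (event.get("sourceUserName") or "").lower()
    if user in DEFAULT_ACCOUNTS:
        return (f"ALERT: default account '{user}' used on "
                f"{event.get('destinationHostName', 'unknown host')}")
    return None

# Example events (invented for illustration).
events = [
    {"sourceUserName": "sa", "destinationHostName": "db01.example.com"},
    {"sourceUserName": "jsmith", "destinationHostName": "web1.example.com"},
]

for e in events:
    alert = check_default_account_usage(e)
    if alert:
        print(alert)  # a real alert would go to email, SNMP, syslog, or ESM
```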
This report simplifies the compliance process by providing pre-built alerts and configurations that are tailored for specific controls like PCI. It ensures that your system's behavior can be monitored in real time, allowing you to respond promptly to any issues or changes in compliance requirements. The text provided outlines a scenario involving Logger, a system used in auditing and compliance settings such as PCI (Payment Card Industry) and SOX (Sarbanes-Oxley). It discusses how the system allows users to assign incoming log streams to specific storage groups based on predefined storage rules. This feature is designed for efficiency and convenience by automatically categorizing logs without manual intervention. Additionally, it highlights security measures in place where an auditor account has restricted permissions only for reading configurations, ensuring that auditors can verify settings without the ability to modify them. This role-based access control enhances compliance with regulatory standards like PCI and SOX. The text then pivots to discuss another use case within IT Operations (IT Ops), focusing on a scenario where a web server is down, potentially due to unauthorized changes or attacks such as Denial of Service (DoS). The scenario outlines how an administrator might log into Logger, search through logs to identify the issue, and investigate specific events like configuration modifications. The process involves: 1. Logging in as an admin. 2. Navigating to the Analyze, Search page. 3. Entering a domain name (web1.arcnet.com) to filter logs. 4. Adjusting the search time window if necessary. 5. Selecting the Webserver field and focusing on GET events. 6. Noticing requests from North Korea and investigating configuration modification events in the logs. This scenario emphasizes Logger's capabilities in quickly diagnosing issues through log analysis, which is crucial for maintaining IT operations and ensuring compliance with regulatory requirements. The text discusses a system for categorizing events in an environment where different team members might not have expertise in all aspects of event analysis, such as networking or server management. The Connector is designed to simplify this by using human-readable formats that are easy to understand regardless of the user's specific area of expertise. Categorization allows users to build filters and rules based on these categories, which can be used for reports and dashboards. This system "future proofs" investments in a solution because it is not tied to the specifics of individual vendor products. It enables easier adaptation if there are changes in firewall vendors or other related technologies. The example given involves modifying a search term from 'web1.arcnet.com' to include both this specific URL and a DMZ firewall (dmzfw1). This allows for broader searching that encompasses multiple devices and can help identify significant events such as modifications made by the user Mike, which may be contributing to system issues like server overload. The text also briefly mentions accessing reports through the Logger menu, indicating that there are standard reports available for various situations, simplifying the process of analyzing large amounts of data from different sources. The provided text describes a process for configuring and running reports on a device configuration using an explorer tool. Here's a summarized version of the steps outlined in the text: 1. 
**Accessing Device Configuration Reports**: Navigate to the "Device Configuration Events" report within the Explorer interface. This can be found under the Demo Reports group by clicking on the ellipsis (...). 2. **Customizing the Report**: If necessary, customize the report by right-clicking and selecting 'Quick Run' with default options. Users can choose a different storage group or scope to run the report. 3. **Running the Report**: Click 'Run Now' to execute the report tailored to specific criteria. The user should ensure that "Device Configuration Reports" are selected, then specify custom filters like Destination Host Name and add conditions such as 'Contains'. This allows for a focused report on desired devices or configurations. 4. **Emailing the Report**: After running the customized report, it can be emailed to relevant parties if required. The user has options to adjust the configuration settings through the Customize Report icon in the interface. 5. **Analyzing Live Events**: To monitor real-time events related to a specific device (e.g., web server), use the "Live Event Viewer" from the main Logger menu. Enter the relevant search terms and start monitoring for events concerning the webserver. This feature allows users to view updates in real-time, which is particularly useful for troubleshooting during business hours or when changes are being made. 6. **Closing the Live Event Viewer**: Once the necessary data has been analyzed, close the Live Event Viewer window. The text suggests that this tool can be used as an ideal complement to ongoing ITOPS activities and real-time troubleshooting, ensuring up-to-date information is always available during operations. This summary outlines the use case for Logger in analyzing multiline logs from various applications such as Ruby Application Archive (RAA) logs and NetFlow data related to Microsoft SQL Servers. For RAA logs, Logger is used to parse multi-line log files that may include errors or other critical messages. The process involves: 1. Logging into the system as an admin. 2. Navigating to the Analyze module and then to the Search page. 3. Entering specific keywords (e.g., "raa") in the search field followed by clicking "Go!". 4. Adjusting the time window if necessary, choosing the MultiLine AppDev field set, and widening the message column to view multiline messages as a single event. 5. Filtering events by severity to focus on ERRORs. 6. Loading a saved filter and exporting the results for analysis. For NetFlow data related to Microsoft SQL Servers, Logger helps identify sources communicating with these servers by: 1. Logging into the system as an admin. 2. Navigating to the Analyze module and then to the Search page. 3. Entering "netflow dpt=1433" in the search field and clicking "Go!". 4. Adjusting the time window if needed, which defaults to the last hour. 5. Viewing network activity related to SQL Server communications. Both use cases demonstrate Logger's ability to handle multiline log files and provide actionable insights through filtering, exporting, and charting capabilities. To summarize, we started by searching for NetFlow events destined for a SQL server on port 1433. By typing "netflow dpt=1433" into the search bar, all relevant NetFlow data was retrieved automatically. We then changed the fields to NetFlow and scrolled through to view all columns in the NetFlow field set, specifically focusing on flows towards a SQL server on port 1433. 
In Logger, we observed that most NetFlow events were one-sided, measuring single direction flows. To analyze byte values more closely, we clicked on "bytesIn" under Field Summary and updated if necessary. The statistical analysis showed that a large portion of the byte values was around 62 bytes, which might be significant for network engineers. Next, we focused on identifying the most popular source addresses by replacing "top" with "rare" in our search terms, changing the perspective to examine less frequent sources instead. This shift revealed different insights into the data. We then used Chart Settings in Logger to visualize this information dynamically using a pie chart, which showed percentages and number of events per slice. The pie chart could be auto-updated based on selected time frames, providing an interactive and dynamic view of the data. The text provides an overview of using a tool called Logger for analyzing network data, specifically NetFlow data which helps in monitoring and troubleshooting network performance. The process involves several steps: 1. **Exporting Reporting and Charting**: Logger offers multiple options for exporting data in formats like PDF, CSV, or saving locally. It also allows users to create various chart types based on the exported data. 2. **NetFlow by Port Analysis**: A quick search within NetFlow data can be performed using specific commands such as "_count by dpt | sort - _count". This example shows a search for the most popular destination ports, revealing information about network traffic and specific protocols like Network Time Protocol (NTP) on port 123. 3. **RAW Events Analysis**: For identifying network performance issues from raw event logs:

  • Perform a search by entering "not cef" in the Logger interface.

  • Expand an event to view its details, including fields that may not be parsed (marked as RAW).

  • Use filters like "loss nagios ALERT" to focus on specific types of network events that might indicate performance issues.

4. **Logger Regex Helper**: This tool is used to extract useful information from raw logs by providing a helper for regular expression extraction, which can be applied to the identified events. Overall, Logger simplifies complex network data analysis through its user-friendly interface and powerful search capabilities, making it an essential tool for IT professionals involved in network monitoring and performance tuning.

The process involves parsing raw event data, where specific fields and values are identified and given meaningful names. In this case, a field labeled "RTA" is recognized with an associated numeric value representing a Round Trip Average (ms) measurement. To analyze this further, click on the field name to select it, then double-click to rename it to RTA if needed. After renaming or confirming the field as "RTA", you can filter the data by adding a condition where RTA is greater than 1 ms, which helps focus on relevant entries. Once the desired results are filtered, the findings can be exported for further analysis or reporting using Logger's exporting options, such as saving locally, saving to Logger, or generating a PDF or CSV.

In another scenario, the goal is to find failed login attempts in raw events. This is done by entering "not cef login failed attempt" in the search bar and enabling the Discover Fields checkbox from the GEAR icon dropdown menu. The system then highlights any instances where these keywords appear, indicating potential failed login attempts in the RAW data. Analyzing these unparsed event logs in the browser involves several steps:

1. **Viewing Unparsed Events**: The Logger interface displays the raw events, which can be expanded by clicking on them. To view the field summaries Logger has identified, unhide them by clicking the Expand icon on the left side of the screen. If necessary, click "Update now" to keep the field summaries in sync with the events table.

2. **Analyzing RAW Events**: Individual fields within the raw events can be examined. For example, clicking on the "Username" field shows that Logger has identified usernames from the event data and provides counts, percentages, and a sorted list of top usernames.

3. **Visualizing Data**: To quickly visualize important data, such as the top usernames related to the "login" and "failed" search, click the blue chart button at the top to display a real-time chart of those usernames. Adjusting the time range (e.g., Last hour) may be necessary depending on the data volume.

4. **Using REGEX for Analysis**: The search can be refined with regular expressions (REGEX). For example, the following query extracts usernames from the failed login events and lists the most common ones:

`not cef login failed attempt | rex "Username=(?<Username>[^\s,;]+)" | where Username is not null | top Username`

Adjusting the extraction to include a trailing colon (`:`), it becomes:

`not cef login failed attempt | rex "Username=(?<Username>[^\s,;]+):" | where Username is not null | top Username`

5. **Secure Data Handling**: While Logger can be used to analyze raw data in this way, securely encrypting or masking fields (relevant for handling sensitive information like credit card numbers) should be handled according to the guidelines in other parts of the document.

6. **Practical Example**: In a practical scenario, users might search for recent credit card transactions by entering "sha1hex" and clicking Go!. Adjusting the search time window if necessary (e.g., Last Hour), they would select all fields from the Events panel to analyze these RAW events using Logger's built-in REGEX helper and other features.

The next use case extracts credit card numbers from raw event data using the Logger Regex Helper. Here's a breakdown of what happens step-by-step: 1. **Extraction Process**:

  • The user selects an event and clicks on the RAW icon, which opens a pop-up window containing the raw event data.

  • In this pop-up, the Logger Regex Helper is used to parse the raw data and isolate credit card numbers using specific regular expressions.

  • Initially, the field where the credit card number might be located is unnamed; it's referred to as Number_3 in the pop-up window.

2. **Regex Helper Usage**:

  • The Regex Helper automatically processes the raw data, applying regex patterns to identify and extract fields that match the format of a credit card number.

  • After identifying the field containing the credit card numbers, the user renames this field to "ccnum" for easier identification and understanding in subsequent steps.

3. **Renaming and Filtering**:

  • The user proceeds by clicking OK and Go!, which activates the renamed field (now called ccnum) within the Logger interface.

  • Additional columns like firstnum are added to further refine the search criteria, allowing for more specific filtering based on the credit card type as identified by its initial digit.

4. **Filtering by Credit Card Type**:

  • A saved filter named "Demo Credit Card" is loaded and applied, focusing the search on events where the credit card number's first digit indicates a particular type of card (e.g., 3 for American Express, 4 for Visa).

  • The user can adjust the search time window if necessary; in this case, it was set to the last hour.

5. **Data Masking and Reporting**:

  • Once the filter is applied, a new column named ccnum appears automatically within the Logger grid, displaying all extracted credit card numbers masked appropriately (typically just the first digits are visible).

  • The final column contains additional data derived from these credit card numbers, such as specific identifying information or transaction details.

6. **Exporting the Report**:

  • Users can export the resulting chart in PDF format for distribution to parties who do not have access to the full unmasked data.

  • Additional columns like firstnum help in categorizing and summarizing the credit card types as per the instructions provided, ensuring a tailored report suitable for specific stakeholders.
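
To make the extraction and masking flow above concrete, here is a minimal Python sketch of rex-style named-group extraction, first-digit classification, and masking — the sample events and regular expression are invented and are not the exact pattern the Regex Helper generates:

```python
import re

# Made-up raw events containing card numbers.
raw_events = [
    "2019-02-22 10:01:07 txn approved ccnum=4111111111111111 amount=42.10",
    "2019-02-22 10:01:09 txn approved ccnum=378282246310005 amount=7.99",
]

# rex-style extraction with a named capture group, similar to renaming
# the Number_3 field to "ccnum" in the Regex Helper.
CC_PATTERN = re.compile(r"ccnum=(?P<ccnum>\d{13,16})")

CARD_TYPES = {"3": "American Express", "4": "Visa", "5": "MasterCard"}

for raw in raw_events:
    match = CC_PATTERN.search(raw)
    if not match:
        continue
    ccnum = match.group("ccnum")
    firstnum = ccnum[0]                          # the "firstnum" column
    masked = firstnum + "*" * (len(ccnum) - 1)   # keep only the first digit visible
    print(masked, CARD_TYPES.get(firstnum, "other"))
```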

This method allows users to securely extract relevant information from raw data while maintaining confidentiality through proper masking and filtering processes, facilitating both detailed analysis and compliance with privacy policies. The article discusses a demonstration scenario involving the use of a Logger VM (Virtual Machine) to analyze machine transactions, specifically focusing on log files from a POSTFIX mail transfer agent (MTA). Key points include: 1. **Event Grouping**: The Logger VM can group events into higher-level transactions without requiring a SmartConnector. This involves using File Receivers to read log files directly, On-Board Parsers to parse events automatically, and grouping these events under a common value called TRANSACTION. 2. **POSTFIX Integration**: The demonstration uses POSTFIX logs, which include a critical field QueueID. POSTFIX is an open-source MTA used for routing and delivering email. The article provides a link to more information about POSTFIX. 3. **Logger Components**: The demo involves the Logger TRANSACTION operator, the Logger FILE RECEIVER, and the Logger on-board PARSER. These components allow users to:

  • Read log files directly using File Receivers.

  • Automatically parse events without needing a SmartConnector.

  • Group events into higher-level transactions based on common values.

4. **Mail Logs File Receiver**: The Mail Logs File Receiver can be enabled and disabled as needed to control its inclusion in the summary page. Enabling it initially increases the EPS (Events Per Second) rate, which should return to a lower range after disabling it again once events have been read into the Logger. 5. **Demo Instructions**: Users are guided on how to enable and disable the Mail Logs File Receiver through specific steps within the demo scenario, providing practical guidance on interacting with the Logger VM for transaction analysis. 6. **File Receivers**: The article also explains that Logger FILE RECEIVERS can bring in real-time events as files are written, which is crucial for dynamic log analysis. In summary, this demonstration showcases how to use a Logger VM to efficiently analyze and group transactional data from various sources like POSTFIX logs using File Receivers and Parsers, providing insights into system performance and event handling capabilities. The file mentioned, "Mail_Log_Files," is used in a system where it is read by software that includes a parser named "Mail_Log_Files." This parser is designed to interpret log entries from mail servers (specifically, postfix logs) and extract relevant information into specific fields. After the file has been parsed, events are processed through a Logger interface where they can be summarized and analyzed. In this system: 1. **File Handling**: The file named "Mail_Log_Files" is read by software configured with a parser called "Mail_Log_Files." This parser processes postfix log entries to extract useful data into defined fields. 2. **Parser Configuration**: Under the configuration settings, there's a section where you select a source type (postfix logs) and link it to a specific parser ("Mail_Log_Files"). This setup allows for parsing of log files into various relevant fields. 3. **Event Processing**: Parsed events are processed through a Logger interface, which provides a summary view under the "Event Summary by Receiver" panel. Here, each event is displayed with selected fields (like time, hostname, process name, subprocess name, PID, and QueueID) expanded for detailed analysis. 4. **Transaction Operation**: The system allows grouping of events based on common values such as QueueID into transactions or groups. Each transaction is assigned a unique transactionid and includes a count of the related events. This feature supports advanced analysis where you can define time thresholds to group longer-lasting sets of events, which could be useful for business or web transaction tracking across different servers. 5. **Help Documentation**: For further understanding of the transaction operation, contextual help is available through online resources by searching for "transaction" in the system's help section. This provides detailed information on how to use this feature effectively for managing and analyzing log events. The Export Content Logger feature now supports the export and import of various system content, including Alerts, Dashboards, Filters, Parsers, Saved Searches, and Source Types. This includes Reports, Queries, and Templates within the Reporting section, making it simpler for users to develop and share content with others. 
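As a rough picture of what the transaction operator does with the QueueID field, the sketch below groups parsed postfix-style events by QueueID and reports a transaction ID and event count per group; the sample log lines and extraction regex are invented for illustration and do not reproduce Logger's on-board parser:

```python
import re
from collections import defaultdict

# Simplified postfix-style log lines (invented samples).
log_lines = [
    "Feb 22 10:00:01 mail1 postfix/smtpd[2101]: 7A2B1C: client=host.example.com",
    "Feb 22 10:00:02 mail1 postfix/cleanup[2102]: 7A2B1C: message-id=<1@example.com>",
    "Feb 22 10:00:03 mail1 postfix/qmgr[2103]: 7A2B1C: from=<a@example.com>, size=512",
    "Feb 22 10:00:04 mail1 postfix/smtpd[2101]: 9C4D2E: client=other.example.com",
]

# Parser-style extraction of a few fields, including the QueueID.
LINE_RE = re.compile(
    r"^(?P<time>\w+ \d+ [\d:]+) (?P<host>\S+) "
    r"postfix/(?P<subprocess>\w+)\[(?P<pid>\d+)\]: (?P<queue_id>[0-9A-F]+):"
)

# Group events sharing a QueueID into one transaction.
transactions = defaultdict(list)
for line in log_lines:
    m = LINE_RE.match(line)
    if m:
        transactions[m.group("queue_id")].append(m.groupdict())

for txn_id, (queue_id, events) in enumerate(transactions.items(), start=1):
    print(f"transactionid={txn_id} QueueID={queue_id} event_count={len(events)}")
```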
The article specifically highlights how this is useful for analyzing POSTFIX Mail events by providing a drillable dashboard with charts such as Mail Subprocesses (broken down into error, smtp, or queue manager events), Mail Counts (the number of events grouped by transaction ID), and Mail Transaction Lengths (the number of events per transaction, sorted in ascending order). Even those who are not domain experts on POSTFIX logs can benefit from using ArcSight Logger for this parsing and analysis, thanks to its flexibility and ability to drill down into detailed event information. The demo concludes with scheduled updates for Logger, which let it load data from external sources for static correlation and refresh that data on a schedule. The following is a guide to updating and using a lookup file of Tor Exit Nodes within Logger:

1. **Updating the Lookup File**: The update process is automated but takes place outside of Logger's control. A sample script ("tor.py.for.logger.demo.py") can be used to refresh the list of Tor Exit Nodes on the demo VM if it has internet access.

2. **Running the Script**: Run the script from the `/root` directory of the Logger Demo VM using Python:

```bash
cd /root/
python tor.py.for.logger.demo.py
```

3. **Demo Note**: This demo script adds a specific magic IP address (58.99.128.47) to the list of Tor nodes so that results are predictable. It is not suitable for production use.

4. **Using Logger Lookup**:

  • Go to "LOOKUP" in Logger.

  • Click on "torexitnodes", which shows a table already loaded into Logger.

  • To update or manage the lookup file, click on the "pencil icon" to edit the table settings: here you can find the name of the table, its file location, and the schedule for reloading the file from disk.

  • Use the search feature in the "Take me to …" box by entering "LOOKUP Logger Lookup". Clicking on "torexitnodes" will display the current lookup table.

  • To perform a specific lookup operation (e.g., comparing IP addresses), use the command: `| lookup torexitnodes IPAddress as destinationAddress output *`. This compares the values of the IPAddress column from the lookup table with destinationAddresses in Logger and outputs all matching results.
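Put together with a base query, the full search might look like the sketch below. The filter on the demo's magic Tor IP is only an illustrative assumption; the lookup pipeline itself is the one quoted in the previous bullet.

```
destinationAddress = "58.99.128.47" | lookup torexitnodes IPAddress as destinationAddress output *
```

Matching rows gain the columns from the torexitnodes table alongside the original event fields.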

5. **Saved Searches**: The search can be saved for future reference under "Saved Searches" with a name like "Demo Lookup Tor Nodes".

This guide provides step-by-step instructions on how to manage and use external data within Logger, specifically Tor Exit Node information. The "Demo Tor Node Lookup" feature adds new columns to the search results that combine details from the Logger event with the matching entry in the Tor Exit Node table, including country codes. This makes it easy to determine whether any computers within the organization are communicating with known Tor exit nodes.

Another feature covered is URL analysis using the "len" operation in Logger searches. Unusually long URLs can be a sign of malware or compromise and may warrant further investigation. The process is:

1. Enter "Search" in the "Take me to..." box to open the search page.
2. Specify the device vendor (e.g., ABC Software) and note that the URLs are stored in the field named "requestUrl."
3. Use Logger to create a new field at search time that holds the length of the requestUrl field. The field can be given any name, such as "(int)urllength" or "urllength."
4. Set the time range for the search to the last 30 minutes.
5. Add the new field to the search results and sort them by URL length (urllength).

This quickly surfaces long URLs that may need deeper investigation for potential malware or compromise.

The article then shows how to use Logger's "insubnet" operator, introduced in Logger 6.1, to search within variable-length subnet mask (VLSM) networks. The operator accepts subnets in CIDR notation, wildcard expressions, or IP address ranges. To use it:

1. Navigate to the Search/Analyze menu and type a query that includes the "insubnet" operator, e.g., sourceAddress insubnet "10.0.0.0/18".
2. Adjust the time frame as needed, for example Last 2 hours with a subnet prefix length of 18.
3. Change the query by adjusting the number of network bits or by using wildcard expressions if necessary:

  • sourceAddress insubnet "10.0.0.0/21" (adjusting the number of bits)

  • sourceAddress insubnet "10.0.*.*" (using a wildcard expression for variable right-hand side addresses)

  • sourceAddress insubnet "10.0.0.0-10.10.0.0" (specifying address ranges between IP addresses).

4. Execute the search and review the results, which show the events within the specified subnet or range.
5. To investigate further, click on specific charts to get more detail on those networks.
6. Optionally, remove added filters (such as sourceAddress = "10.0.27.221" and the associated | where clause) to run a new search without those constraints.

The guide then covers two more search features that support network security monitoring (the examples that follow use IntruShield and Cisco firewall events):

1. **Using Multi-Select Functionality in Search:**

  • Navigate to the lowermost point of search results.

  • Find and check the checkbox labeled "Enable Multi-Select of field values."

  • This feature ANDs the selected field values onto the existing query when multiple terms are added.

  • To use this, select the option and click on specific terms like "IntruShield" from deviceProduct and "Compromise" from categorySignificance.

  • The new query will focus on compromised events related to IntruShield based on sourceAddress in the subnet "10.0.0.0-10.10.0.0".
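The combined query produced by those selections would look roughly like the sketch below. The exact categorySignificance value (shown here as "/Compromise") and the use of explicit AND are assumptions about how Logger renders the multi-selected terms, not text copied from the demo.

```
sourceAddress insubnet "10.0.0.0-10.10.0.0" AND deviceProduct = "IntruShield" AND categorySignificance = "/Compromise"
```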

2. **Decoding URLs:**

  • Use the eval operation to decode URLs, which involves converting encoded symbols back into understandable form.

  • In the search interface, enter relevant information in the "Take me to …" field and select "Search."

  • Use indexed fields such as 'name' (here identifying events from a Cisco firewall) to narrow the results, then examine the 'requestUrl' field.

  • The 'requestUrl' column will show encoded URLs with percent signs and hex values, which are part of URL encoding or Percent-encoding.

  • This process helps in understanding and analyzing the data more clearly by decoding these encodings.
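Tying this together with the urllength technique described earlier, a single search could both measure and decode the URLs. In the sketch below, the base filter name = "Cisco Firewall" is a hypothetical stand-in for the actual indexed name value, and the descending-sort syntax is assumed; len() is the length operation mentioned earlier, and urldecode(), described in the next paragraph, performs the decoding.

```
name = "Cisco Firewall" | eval urllength = len(requestUrl) | eval decodedurl = urldecode(requestUrl) | sort - urllength
```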

These functionalities help users efficiently search for specific network events and decode potentially obfuscated information such as URLs, improving the accuracy and effectiveness of cybersecurity monitoring and analysis.

The guide then shows how to make URLs more readable in Logger by using the urldecode() function on requestUrl to create a new column named decodedurl. Both the original encoded URL (requestUrl) and the decoded version are displayed, which improves readability and saves time by consolidating the related information in one place. It also introduces a "Top Values" dashboard on the Logger Demo VM that shows 15 panels aggregating the top values from different fields.

The document also mentions the expanded partnership between Micro Focus and Fortinet, which integrates Logger and FortiGate to enhance security by collecting, storing, identifying, analyzing, and mitigating complex threats. The demo searches for events originating from the Fortinet firewall using specific queries and customizes the displayed fields accordingly.

Next is a method for analyzing authentication attempts. Starting from a dashboard of FortiGate events, the demo drills into specific actions, uses "Analyze/Search" to perform a targeted search, categorizes by behavior or outcome (users and addresses), sets the time frame, runs the search, and saves it directly as a report. The saved report can be accessed later via the "Reports" section, where all user-saved searches are stored as Smart Reports for future reference and analysis.

The "Demo User Logins" report is then run as a Smart Report, which allows customization: right-click and select 'Run in Smart Format', set the time period and scan limit as instructed (Start = $Now – 1d, Scan Limit = 0), and apply the changes. From there, review the bar chart showing counts of specific tuples over a day, filter the data by typing 'root' in the destinationUserName column, sort columns, adjust the display settings for different columns, and use the search boxes to filter numeric ranges.

An enhanced version of this report adds value-based coloring, Sparklines, and multiple grouping levels. Select "Demo User Logins Enhanced" from the list under Logger Search Reports, right-click to run it in Smart Format with the same settings, and then examine the expanded view for sorting, grouping, totals, color grids based on the data, and trends shown through Sparklines.

The guide also shows how to visualize device types using "Tree Map" and "Packed Circles" by selecting fields such as deviceVendor and deviceProduct, giving graphical representations of event volume in different ways from a single report setup. To summarize this process, follow these steps from the Logger Main menu:

1. Navigate to 'Reports' and click on "Explorer."
2. Expand "Demo Reports" and right-click, selecting 'Demo - Device Type Counts.'
3. Choose 'Quick Run with default options,' then ensure the 'Scan Limit' is set to 0. Click 'Apply.'
4. Observe the results in the tabs labeled 'Chart' and 'Chart1.'
5. Enable 'Edit Mode' by clicking on it in the upper right corner.
6. Add more visualizations by clicking the plus sign.
Logger Smart Reporting allows this additional charting to be done without running extra searches.

The demo then turns to Logger's integration with Voltage SecureData. ArcSight SmartConnectors can transparently encrypt configured event fields and send them to any destination, including Logger.

7. To configure Logger to allow inline decryption of events from SecureData, go to Configuration and click on "Talking Points." Explain that Logger can natively communicate with SecureData to decrypt encrypted event fields.
8. Provide the Voltage server credentials and specify the fields you wish to encrypt as part of the configuration process. Note that some destinationUserName field values have been pre-encrypted for demonstration purposes.
9. Perform a search on Oracle using the query `deviceProduct = Oracle` in the search pane. The results are decrypted automatically when received by Logger, because Logger can natively talk to the Voltage servers for decryption, subject to Logger RBAC controls.

The user names on the destination chart are protected with Voltage's "format-preserving encryption"; Logger communicates with the Voltage server to decrypt the stored values back to plain text. The decryption is demonstrated over the last 30 minutes of data and is configured through specific settings in Logger. Key aspects of this setup include:

1. Pre-saved filters and Voltage's encryption format for user names.
2. Seamless integration with Voltage Connectors and Voltage SecureData to ensure compliance and protection of sensitive information while maintaining data integrity.
3. Pre-encrypted field values, since no real-time connector is demonstrated in the setup.
4. The document has undergone several revisions across different versions of Logger (from 6.5 to the latest version), each update focusing on enhancing the user experience through better encryption methods and integration with Voltage products.
5. Updates have included changes in layout, removal of irrelevant sections, and incorporation of new functionality such as Smart Reports for visualizing data more effectively.

The document itself has been revised several times since its initial release in September 2015:

  • Version 2.3 (February 2017) made a minor change to the security use case, updating the report title from "DEMO – User Investigation" to "User Investigation," reflecting a more accurate and specific term related to the LIKE operator.
  • Version 2.2 (October 2016) was revised for compatibility with Logger 6.3. No new scenarios were added, but the Tor URL was corrected.
  • Version 2.1 (April 2016) was updated to align with Logger 6.2 and introduced DMA events and content as well as Fortinet events and content. It also added enhancements such as Scheduled Logger Lookup, URL Analysis using the 'eval' operation, insubnet, Decoding URLs, and a 15-panel Dashboard, while the Top Malicious Attackers and Top Malicious IPs panels were removed from the "Top Malicious Activity Dashboard."
  • Version 2.0 (September 2015) was tailored specifically for Logger 6.1. It introduced Scheduled Logger Lookup, URL Analysis using the 'eval' operation, insubnet, Decoding URLs, and a 15-panel Dashboard, and flagged (in a red note) a regression bug affecting aggregation for IPAddress driver fields from Lookup tables.
Two panels were removed from the "Top Malicious Activity Dashboard" as part of this update. The document's final section provides contact information and trademark details for Micro Focus International plc.

Disclaimer:
The content in this post is for informational and educational purposes only. It may reference technologies, configurations, or products that are outdated or no longer supported. If you have any comments or feedback, kindly leave a message and it will be responded to.
