
ArcMC281 Demo Script 1

  • Writer: Pavan Raja
  • Apr 8, 2025
  • 21 min read

Summary:

This document summarizes tasks related to deploying and managing ArcSight components such as Collectors, Event Brokers (EB), and custom topics for event routing. The main points:

**1. Deploying Connectors and Managing Remote Management**

  • **Deployment Template**: A specific deployment template ('collector_to_eb37') is used when setting up a Collector to connect to an Event Broker (EB). The template contains preconfigured settings that facilitate the connection between the two systems.

  • **Remote Management Password**: Change the default remote management password ('changeme'). Follow security best practice by updating passwords regularly and using strong authentication methods.

  • **Checking Connectivity**: After installation or configuration, check the ArcSight Dashboard -> Topology View to verify that the Collector is connected to the Event Broker. This view shows event flows in Syslog format, indicating successful integration.

**2. Event Broker Topic and Route Management**

  • **Managing Routes and Topics**: With ArcMC (ArcSight Management Center), routes and topics can be configured efficiently without direct access to the Cluster Manager. This includes adding a new custom topic for long-term retention of authentication failures, which matters when such events must be kept for over 10 years in systems such as Hadoop clusters.

  • **Custom Topic Configuration**: The process involves naming the topic appropriately (e.g., 'all-auth-failures'), setting the number of partitions and the replication factor based on scalability needs, and routing all authentication failure events from a source topic ('eb-cef') to this custom topic.

  • **CEF Format**: Authentication failures are captured in CEF format, which standardizes event data across different devices and applications for easier management and analysis.

**3. Processes and Considerations**

The document describes a centralized platform for handling diverse logs and events through ArcSight's tools, focusing on scalability and efficiency through custom topic configuration. It also emphasizes the role of ArcMC in managing components remotely, which is particularly useful for large-scale deployments where direct access to hardware may be limited.

**4. Certificate Information**

The certificates included relate to secure communication within the ArcSight platform and are provided for demonstrating the software. The details are typical of server certificates used in SSL/TLS connections: issuer, serial number, validity period, and public key details. Such a certificate establishes trust between a client (for example, a web browser) and a server (in this case, ArcMC) during secure communication.

**5. Conclusion**

The document is a guide to setting up and managing an ArcSight environment, with particular focus on streamlining event handling through custom topics in the Event Broker. It underscores the importance of ArcMC for remote management and configuration changes without compromising security practices such as password changes and proper certificate usage.

Details:

The document "ArcSight Management Center 2.8.1 Demonstration Script V3.1", dated October 1, 2018, is a comprehensive guide to setting up, configuring, and managing ArcSight Management Center (ArcMC) version 2.8.1. It provides detailed instructions on setup, configuration management, version management, monitoring, ADP licensing, deployment templates, instant connector deployment, integration with Event Broker, Collector deployment, and event routing management, and closes with a wrap-up of the demonstration. The following is a summarized version of the content, covering ArcMC and its associated components, including the Event Broker:

  • **Appendix A** includes application certificates for specific software versions:

  • **Event Broker 2.2 cluster CA Certificate**: Certificate details for version 2.2.1.

  • **ArcMC 2.8.1 Server Certificate**: Certificate details for version 2.8.1 of ArcMC, which has a known issue.

  • A **Revision History** is provided to track changes and updates.

  • **Micro Focus Trademark Information** and **Company Details** are included at the end of the document.

  • The main part of the text discusses a demonstration script for ArcSight Management Center (ArcMC), which requires three specific VMs:

  • **ArcMC 2.8.1 Demonstration VM**: A mandatory virtual machine for the demonstration.

  • **(Optional) Logger 6.6.1 Demonstration VM**: An optional virtual machine for logging purposes.

  • **(Optional) Connectorhost Demonstration VM**: Another optional virtual machine, also related to connectors.

  • The download location for these machines is: https://irock.jiveon.com/docs/DOC-11768

  • The demonstration does not depend on Event Broker but includes optional integrations via:

  • **Event Broker 2.2.1**: A tested and working version that ships with ADP 2.3.1.

  • Due to hardware requirements and complexities, Event Broker is no longer a core component of the demonstration suite but steps are provided for integration.

This summary focuses on the technical aspects and setup required for using ArcMC in a demonstration environment, including optional integrations as needed. The text outlines a setup process for using Collector software with specific versions of ArcMC and Logger on a demonstration platform. The steps involved:

1. **Introduction to ConnectorHost VM**: The "ConnectorHost" VM is a fresh CentOS installation without any installed connectors or CEB integrations and is used exclusively for demonstrating the Collector setup. This VM will be decommissioned once the issue in ArcMC 2.8.1 is fixed.

2. **Setup and Configuration of Virtual Machines**:

  • Download Logger version 6.6.1 and ArcMC version 2.8.1, along with Event Broker/Investigate optionally.

  • Ensure a NAT network configuration in VMWare Workstation set to 172.16.100.0/24. Adjust the virtual network settings if necessary for proper IP addressing.

  • Update the local hosts file on the Windows host machine to include the DNS names (logger, arcmc, investigate, and connectorhost) mapped to their respective IPs (172.16.100.100, 172.16.100.117, etc.). This lets you navigate in the browser by DNS name instead of IP address.

  • The hosts file is located at C:\Windows\System32\drivers\etc\hosts; sample entries are shown below.
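For reference, a minimal sketch of the resulting hosts entries, assuming the IP-to-hostname mappings listed later in this guide (edit the file as Administrator):

```
# C:\Windows\System32\drivers\etc\hosts
172.16.100.100   logger          logger.arcsight.example.com
172.16.100.117   arcmc           arcmc.arcsight.example.com
172.16.100.201   investigate     investigate.arcsight.example.com
172.16.100.118   connectorhost   connectorhost.arcsight.example.com
```

After saving, `ping arcmc` from a command prompt should resolve to 172.16.100.117.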

3. **Starting Virtual Machines**: Upon startup, the ArcMC VM will initialize first, followed by the Logger VM. It may take some time for the Logger application to fully load and become accessible.

4. **Logging In**: After both VMs are up, open three browser tabs on your host machine to access ArcMC and Logger directly without minimizing VMWare Workstation. The IPs used in this setup map to specific DNS names: logger.arcsight.example.com (172.16.100.100), arcmc.arcsight.example.com (172.16.100.117), investigate.arcsight.example.com (172.16.100.201), and connectorhost.arcsight.example.com (172.16.100.118). This setup is designed to give a clear, step-by-step demonstration of the Collector integration with ArcMC for troubleshooting or educational purposes; the underlying issue is described in the ArcMC 2.8.1 release notes and is fixed in later versions.

To summarize, follow these steps for the demonstration:

1. **Setup Virtual Machines**: Ensure that the ArcMC, Event Broker, and Logger virtual machines are set up correctly.

2. **Return VMs to Snapshots**: Use VMware Snapshot Manager to return each VM to its previously created snapshot. This prepares them for future demonstrations.

3. **Optional Licensing Steps**:

  • If AutoPass licensing is used: you can run Logger with an instant-on license or demonstrate ADP (ArcSight Data Platform) licensing. For the latter, follow the specific steps to make ArcMC an ADP License Server.

4. **Logging In**: Access ArcMC and Logger via their respective URLs using the default 'admin' and 'password' credentials.

5. **Initial Setup of Logger in ArcMC**:

  • Navigate to Node Management, click "View All Nodes," select "ArcNet," and add the Logger host.

6. **Add Host Details**: Fill in the required details for Logger as prompted during setup.

7. **Trust Certificate**: Import the Logger certificate by clicking "Import" when requested.

8. **Install Agent on Logger**: Agree to install the ArcMC Agent on Logger, which may take a couple of minutes. Once completed, Logger will appear in the dialogue box.

These steps ensure that all virtual machines are ready for demonstration purposes and are properly configured for licensing if required.

The next use case focuses on centralized configuration management, using ArcMC to manage configurations for Loggers, Connector Appliances, and SmartConnectors. The scenario involves distributing a new Logger search filter across multiple systems. Step by step:

1. Access the ArcMC interface by navigating through "Dashboard → Monitoring Summary" in the "ArcNet" location. You should see one green and one red Logger on the screen.

2. In the Logger interface, go to Configuration and click Filters to access the custom search filters currently managed by ArcMC. These existing filters will be retained while a new filter is added.

3. In the ArcMC interface, navigate to Configuration Management. Click Import to import an existing configuration. Depending on the product type selected (e.g., Logger), ArcMC provides specific import options.

4. Select "Logger Filter" from the dropdown menu and give it a name such as "Logger Filter." Confirm the import by clicking OK.

5. The new search filter is now distributed to all Loggers in the environment, ensuring uniform filtering capabilities across all systems.

The process covers importing existing Logger filter configurations, adding new search filters, configuring subscribers to receive the settings, and verifying the changes in both the local Logger interface and the remote Logger system via ArcMC (ArcSight Management Center).

The next use case sets up SmartMessage receivers in a Logger environment using ArcMC. Step by step:

1. **Navigate to Configuration Management**: In the ArcMC interface, go to "Configuration Management" and then click "All Subscriber Configurations".

2. **Create a New Configuration**: Click "New" to create a new configuration. Set the "Configuration Type" to "Logger" and specify "SmartMessage Receiver" as the type. Name your configuration appropriately.

3. **Configure the Properties**: In the properties section, set "Enabled" to "Yes" and choose "UTF_8" for encoding. Click "Save".

4. **Add Subscribers**: Go back to the Subscribers tab within your new configuration and click "Add Subscribers". Add the specific Logger you want to configure by entering its path (e.g., `//ArcNet/logger.arcsight.example.com/Software Logger`). Click "Add" and then "OK".

5. **Push the Configuration**: Click "Push" to send the configuration to the selected subscribers. Confirm the push request by clicking "Yes" followed by "OK".

6. **Verify in the Logger Interface**: Return to your remote Logger's interface, go to "Configuration", and then "Receivers". You should now see the new SmartMessage receiver configured for that Logger.

This process allows you to centrally manage SmartMessage receivers across multiple Loggers using ArcMC, ensuring consistency and compliance with naming conventions and other configurations.

The next use case outlines how to manage versions of Logger, Connector Appliance, and SmartConnectors centrally using ArcMC. It involves remotely upgrading a SmartConnector parser version through the ArcSight Marketplace as part of centralized version management. The key benefit is that administrators can upgrade versions without manual steps such as downloading from Micro Focus, uploading to ArcMC, and distributing to the fleet. To ensure successful implementation, make sure your ArcMC VM has internet access. A common issue is an incorrect default gateway setting, which should be verified in the VMWare Workstation Virtual Network Editor (it should be 172.16.100.2); check the routing table with netstat -rn and correct it with route commands if necessary. The key points on network configuration and software upgrading with ArcMC:

1. **Network Configuration**:

  • The IP address `172.16.100.1` is mentioned, referring to the network interface configuration on the host side of the NAT network.

  • The command `route add -net 0.0.0.0 gw 172.16.100.2` adds a default route: `-net 0.0.0.0` matches all destinations and `gw 172.16.100.2` sets the NAT gateway as the next hop.

  • Internet access is then verified by pinging an external FQDN such as `google.com` (see the sketch below).
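A minimal sketch of these checks on the ArcMC VM, assuming the 172.16.100.2 NAT gateway described above:

```bash
# Show the current routing table; the default route should point at 172.16.100.2
netstat -rn

# Add the default route if it is missing or wrong
# (equivalent to the 'route add -net 0.0.0.0 gw 172.16.100.2' form above)
route add default gw 172.16.100.2

# Verify DNS resolution and internet access
ping -c 3 google.com
```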

2. **Managing Software Versions with ArcMC**:

  • ArcMC allows central management of various versions of ArcSight products, including upgrades and configurations.

  • In the context of upgrading a SmartConnector using ArcMC:

  • Navigate to Node Management in the ArcMC interface.

  • View all nodes and expand the ArcNet location on the left-hand pane.

  • Select the software SmartConnector you wish to upgrade, such as a syslog SmartConnector.

  • Manage properties, certificates, and credentials centrally from the right navigation pane.

  • Utilize the Containers feature within the Smart Connector framework 7.3+ to separate and manage Connectors and Parsers for more efficient use of new Parsers.

  • Upgrade outdated parser frameworks by clicking "Upgrade" next to the Container name, acknowledging an outdated version as indicated by a yellow icon.

3. **Upgrading Containers**:

  • When upgrading containers in ArcMC, there's a page named "Upgrade Containers."

  • This page allows caching of Marketplace credentials when running the VM for the first time to facilitate the upgrade process.

Overall, these points outline a structured approach to network configuration and software management using tools like ArcMC, emphasizing centralization and automation in managing versions of software components within an IT environment.

The next section covers upgrading both the framework and parsers in an ArcSight system using the Marketplace or local files. The user should first create a Marketplace account if they don't have one, then select "Framework Upgrade" to move from version 7.3 to 7.8. The upgrade involves downloading a large number of new files from the framework repository for both the framework and parsers. After confirming their credentials, users can choose between upgrading solely through the Marketplace or combining locally cached upgrades with Marketplace updates. The interface is navigated by selecting "Framework Upgrade" and then "Parser Upgrade." Users are advised to double-check passwords, as they may expire during this process. Once logged in, the shopping cart indicates an upgrade sourced from the Marketplace, in this case Parser version 7.8.1.8077.0. The overall process updates both the framework and parsers together, without locally cached upgrades interfering with the latest versions available on the Marketplace.

After the parser upgrade, a red icon appears next to containers whose parser is not yet on the latest available version. To deploy it, navigate to Node List from the left-hand pane under Bulk, click Containers to display the container list, then select a container and deploy the updated parser to it. The process can also use a Connector framework file uploaded to ArcMC itself, without requiring internet access or the Marketplace.

The demonstration then moves to centralized monitoring using ArcMC, focusing on SmartConnectors and Loggers. It covers how to monitor metrics such as inbound EPS (events per second) from SmartConnectors and outbound EPS from Loggers. The ArcMC user interface shows the total number of nodes for each type of managed device, allowing users to view all hosts managed by ArcMC and set up notifications when specific thresholds are crossed.

Chart #1 - SmartConnector Health: This pie chart in the ArcMC interface shows the overall status of connectors, updating based on run time. Clicking on the Connectors bar allows drilling down to view detailed health and the current state of each connector. If a breach rule is triggered (e.g., AVG EPS_IN < 50 over the last two minutes), it is displayed in the Breach/Health Summary section, alerting about potential issues with incoming EPS. Detailed performance metrics can be viewed by clicking the Details eye icon, supporting troubleshooting and performance analysis. These graphs are more informative when the VMs have been allowed to run for several hours. The page allows information to be viewed over different time periods (4 hours, 1 day, and 1 week).

Chart #2 - Logger Health: This chart follows a similar format to Chart #1, focusing on the health of Loggers within the ArcMC interface. The details include breach rules triggered by specific conditions (e.g., critical errors exceeding a set threshold) and detailed performance metrics for each Logger to aid troubleshooting and optimization.

Monitoring and visualizing system health with ArcMC, key points:

1. **Pie Chart Overview**: This chart provides an overview of Logger statuses across different states depending on performance metrics such as EPS and runtime, and updates dynamically to reflect the overall health of the Loggers.

2. **Logger Detail View**: Users can click on specific Loggers in the main display to drill down for more detail, including breach details and performance metrics such as JVM memory usage, EPS (events per second), and storage group availability.

3. **Detailed Health Information**: Health parameters can be viewed at different levels of granularity, down to low-level metrics such as individual Storage Group availability. Available time frames are 4 hours, 1 day, and 1 week.

4. **Device Health Donut Chart**: In the ArcMC interface, devices are grouped by type, and their overall health is represented as wedges in a donut chart. Additional detail about each device is available in a grid on the right side of the screen.

5. **Configuration Options**: Configurable settings include a "time out" interval for devices (default 20 minutes), after which a device with no events automatically switches to a warning state. An age-out feature can also be configured so that inactive devices are removed from the display after a predefined time, typically 14 days.

6. **Customization and Management**: Settings control how long systems remain in the monitoring list before being marked inactive or decommissioned, helping administrators keep monitoring lists free of obsolete entries.

7. **User Interface and Accessibility**: Users can navigate the health metrics and configurations via clear visual cues (such as clicking dashboard elements) to reach detailed information about Logger and device performance.

Rules can be created to monitor specific metrics, such as EPS (events per second), for critical Windows servers:

1. **Identify Critical Devices**: Select the critical servers by holding down the CTRL key while selecting them from the device list. These are typically named like "SPAP1.arcnet.com".

2. **Create a New Rule**: Click 'Add New Rule' and name it, for example, "bytes out", or use EPS as the metric type. Set the severity to critical and specify the measurement value (in this case, 20). Do not require notifications, but set the status to enabled.

3. **Fill in Details**: Provide a meaningful description for the rule, such as explaining that it monitors whether EPS drops below a threshold.

4. **Save the Rule**: Click 'Save' to create the rule. It can later be managed or disabled from the "Manage Rules" section.
5. **Monitor the Rule**: After creating the rule, go to the '# of devices' section to see which devices are attached to it. The rule should fire after about a minute, as expected events may be low initially.

6. **Visual Confirmation**: Upon activation, an orange section appears in the 'Monitoring Summary' donut wedge on the main page, indicating that critical servers with insufficient EPS are being flagged.

7. **Topology View**: Use the 'Topology View' for a comprehensive view of all reporting devices and connectors, which is crucial for strategic monitoring and management across potentially thousands of devices.

This method allows administrators to focus efficiently on critical devices that are not meeting performance expectations, improving overall reliability and performance tracking.

ArcMC's Topology View tracks device events across different locations and connectors. Key functions include:

1. **Navigation through Layouts**: On the left side you find the "Location", then move through the "Topology View" to "Reporting Devices", followed by "Smart Connectors". The bottom right displays destinations for events, and ArcMC also indicates the type of destination (e.g., CEF, File, KAFKA).

2. **Detailed Event Flow**: Hovering over a device or connector shows detailed information about its health and the flow in and out of the system, including a breakdown of device health in the bottom right corner of the screen.

3. **Drill-Down Functionality**: Clicking on specific devices or connectors (such as the "Unix (20)" group) provides more detailed data, which can be used to set up monitoring rules for better device management.

4. **Connector and Replay Details**: Hovering over a connector reveals performance information, while clicking on it gives access to comprehensive health and performance details for that connector. The "Replay" feature is useful for reviewing past events or troubleshooting issues.

5. **Graph Metrics**: Depending on system uptime, graphs can display various metrics for the connector over the last 4 hours, 1 day, or 1 week.

This simplifies administration by providing quick access to detailed information and facilitating rule configuration and performance analysis across devices and connectors.

The flexibility of multi-node deployments gives a unified view of all ADP license entitlements across machines. This eliminates the need to license individual Loggers with different capacities and enables global reporting from a single console, facilitating monitoring and capacity management for the entire fleet. To set up ArcMC as an ADP License Server:

1. Access the "ADP License Server" option in your web browser window.

2. Confirm the prompt to proceed.

3. On the Monitoring Summary page, scroll down to view a new section called "License Usage for 30 days." This graph shows compliance on a daily basis over a 30-day period.

4. To export a detailed report of all consumption by Loggers, go to the Administration section and navigate to the Consumption Report. Add Loggers and run the report to view ingestion and violations, and export it as a PDF.

5. It is recommended to let the VMs run non-stop for at least 24 hours to make the report look more substantial.

This setup provides an easy way to manage all Logger licensing from one central location.

ArcMC optimizes ADP license usage for customers with multiple Loggers by spreading licenses across the Logger fleet, which gives better value for money.

The next demonstration showcases Deployment Templates for managing and deploying Connector profiles efficiently, ensuring consistency and compliance, and scaling performance as needed. The templates created during this demonstration are a Syslog daemon Connector template and a CEF file Destination template; both are pre-configured within the ArcMC GUI but are created from scratch for the demonstration. Before starting, ensure that a 64-bit Linux version of the Connector framework has been uploaded, which was covered in demonstration #4 "Version Management & ArcSight Marketplace." The following steps create templates for Connectors and Destinations to facilitate the deployment of an Instant Connector, specifically a Syslog Daemon connector with a CEF (Common Event Format) File destination:

1. **Creating a Syslog Connector Template:**

  • Navigate to the "Connectors" section.

  • Click on "New" from the top right corner of your browser.

  • Fill in the details including the template name, network port (15141), and additional fields such as Name ("Syslog_15141_UDP"), Location ("ArcNet"), Device Location ("HQ"), Service Internal Name ("syslog15141udp"), and Service Display Name ("ArcSight Syslog Daemon Connector").

  • Click "Save" in the bottom right corner of your browser window.

2. **Creating a Destination Template:**

  • Use the arrow next to "Destinations."

  • Choose "CEF File" as the destination type.

  • Click "New" from the top right of your browser.

  • Name the template, for example, "cef_destination_linux".

  • Select the version "7.9" or the appropriate 64-bit Linux Connector framework uploaded previously.

  • Set the CEF path to "/opt/cef/" as it is intended for a Linux host.
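To illustrate what the CEF File destination writes under /opt/cef/, here is a hypothetical event line (the field values are invented for illustration; the CEF header layout Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension is the standard one):

```
CEF:0|Unix|Unix|5.0|auth_failure|Authentication failure|7|src=172.16.100.50 suser=root dhost=connectorhost.arcsight.example.com msg=Failed password for root
```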

By following these steps, you can create templates that simplify and expedite the deployment of Connectors with multiple destination types, improving flexibility and efficiency when managing complex environments.

The next scenario deploys a SmartConnector on any host where you have admin rights, using the Logger VM for simplicity. To deploy the connector:

1. Navigate to the "Connectors/Collectors" section in the Dashboard or Topology View.

2. Click the "+" symbol to add a new connector; choose Linux 64-bit OS and specify the installation location as "/opt/connector1".

3. Select the Syslog template from the dropdown menu, specifically Syslog Daemon -> Syslog_Connector_Template.

4. Provide the necessary details: username root, password arcsight, job name syslog_connector_deployment.

5. Customize the connector template if needed and choose the destination template created in the previous use case. Ensure all options for the destination are correct.

6. Explain that multiple destinations can be added using the "+Add" button, for example an eb-cef and an eb-esm destination for Event Broker, making large-scale deployments efficient.

The job is then tracked in ArcMC's Job Manager:

1. **Click on the Job Manager icon**: It should now have a '1' beside it, indicating an active job. You will see your job running; it should be in progress for a few minutes.

2. **Click on the small single-line arrow**: This reveals more detailed information about the task's status. The down arrow on the right side of the task provides very detailed information about its current stage, including preflight checks, installation of the binary itself, configuration, and runtime details such as "Check that OS...".

3. **View even more detailed information**: Hover over specific stages for additional detail from ArcMC. If a task fails, the job details show why, and you can click the "Retry" button to retry the task after making the necessary updates.

4. **Job completion**: Once the job is complete, go back to the Topology View to show that the node is now managed and all remote management features are enabled. The Topology View will include the new Connector if it was not already added during creation. To add more Connectors later, return to the Deployment View and click "+".

5. **Optional: ArcMC and Event Broker Integration**: This platform allows integration of the Logger, ArcMC, and Event Broker/Investigate virtual machines for a comprehensive demonstration of the ADP platform. Ensure your host machine has sufficient resources (SSD drives and at least 32GB RAM) to support this setup.

6. **Use Cases**: The configuration enables you to manage the Event Broker, monitor it, and handle topics effectively, as outlined in the following use cases:

  • Managing and monitoring the Event Broker

  • Administering Event Broker topics

This process provides a comprehensive guide to overseeing and troubleshooting tasks within the environment using the Job Manager.

The next section outlines the steps for configuring an ArcSight Event Broker (EB) deployment, including setting up connectors and deploying them to a destination. Several of these tasks can be completed before a demonstration:

1. **Deployment of ArcSight Event Broker**: In your web browser, navigate to https://investigate.arcsight.example.com:5443. Log in using the provided credentials (admin / Arcsight!23$). Click Deployment and, next to Event Broker, click Deploy. Choose the version to install (2.2.1), select "Current" when prompted for the configuration to use, and then click Deploy.

2. **Configuration of ArcMC in the Event Broker**: Once installed, click Configuration from the left-hand pane, then click ArcSight Event Broker. Copy and paste the ArcMC certificate from Appendix A into the ArcMC Certificate dialogue box. Do not change the login details; click Save. The EB Web Service pod within Kubernetes/Docker will terminate and restart with the new certificate details, which can take a couple of minutes.

3. **Configuration Steps in ArcMC**: In the ArcMC console, navigate to Node Management → View All Nodes → ArcNet → Add Host. Fill in the details as shown: Host - investigate.arcsight.example.com, Type - Event Broker, Port - 38080, Cluster Port - 5443, Cluster Username - admin, and Cluster Password - Arcsight!23$. Paste the Event Broker certificate from Appendix A into the provided space, then click Add and Continue to accept the import of the Event Broker certificate. After a few seconds the host should be added, and Event Broker will appear as a managed node in the ArcMC dashboard.

This sets up the Event Broker as a managed node within the ArcMC console, allowing further management and monitoring through the ArcSight platform. The next section covers management and monitoring of the Event Broker (EB) within the ADP (ArcSight Data Platform), with emphasis on deploying and configuring Connectors in Event Broker (CEBs) to scale event processing. Key steps include:

1. **Visualizing EB in Topology View**: On the Dashboard, EB now appears as a managed component, and detailed information about its topics is available by clicking on the EB entity.

2. **Deploying a CEB**:

  • First, at least one CEB needs to be deployed within the EB.

  • A Collector can then be deployed on another host to start aggregating Syslog events and send them to the CEB.

  • The CEB is configured to receive events via the Kafka protocol, where it processes these events before sending them to specified destination topics within the EB.

3. **Configuration Details**:

  • A CEB can be named (e.g., ceb-1).

  • Source topic for Collectors and Syslog is set statically as eb-con-syslog, which serves as a catch all for any near-raw syslog data from Collectors. Multiple CEBs can consume from this topic to scale processing rapidly.

  • The destination topic is named eb-cef, potentially consumed by Logger or Elastic.
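Where you have shell access to a Kafka broker node in the Event Broker cluster, a hedged sketch for spot-checking these topics with the stock Kafka CLI (the broker address and plain-text port 9092 are assumptions; Event Broker normally secures Kafka, and older Kafka builds may require --zookeeper for kafka-topics.sh):

```bash
# List the topics visible on the cluster (eb-con-syslog and eb-cef should appear)
kafka-topics.sh --list --bootstrap-server investigate.arcsight.example.com:9092

# Tail a few CEF events from the destination topic the CEB writes to
kafka-console-consumer.sh --bootstrap-server investigate.arcsight.example.com:9092 \
  --topic eb-cef --max-messages 5
```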

4. **Deployment and Monitoring**:

  • Deploying a CEB involves sending the job to the job manager similar to Connector deployment tasks. This process takes a minute or two to complete.

  • Users can monitor the progress of this job like any other in the Job Manager, although they might encounter connection errors which are normal but require investigation if persistent.

5. **Scaling and Configuration**: By configuring multiple CEBs to consume from the same topic (eb-con-syslog), event processing can be scaled rapidly without significant configuration changes.

This provides a structured approach for deploying and managing CEBs within an EB, emphasizing scalability through simple configuration adjustments. The next section covers deploying and configuring a Collector in an ArcSight environment, followed by management of Event Broker routes and topics using ArcMC. Key points:

1. **Collector Deployment**:

  • After setting up a Connector in Event Broker (CEB), deploy a Collector for easy deployment and scalability.

  • Tips: shut down other applications if you have less than 32GB of RAM, and pause the Logger VM in VMWare Workstation if it is not needed.

  • Steps to add a new Collector:

  • Navigate to the Dashboard -> Deployment View.

  • Click "+" next to Connectors/Collectors and select "Add Collector".

  • Fill in details including host (connectorhost.arcsight.example.com), username (root), password (arcsight), job name (myCollector), network port (514), and remote management password (changeme).

  • Use a Deployment Template like collector_to_eb37 for the Destination Template.

  • Once installed, check the Dashboard -> Topology View to see the Collector connected to the Event Broker (EB) entity; ArcMC will display the event flow in Syslog format. A quick test event can be sent as sketched below.
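To confirm the Collector is listening once the job completes, a test syslog message can be sent from any Linux host with the util-linux logger utility (hostname and port taken from the settings above; UDP is assumed):

```bash
# Send a single UDP syslog message to the Collector on port 514
logger -n connectorhost.arcsight.example.com -P 514 -d "ArcMC demo: test event for Collector verification"
```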

2. **Event Broker Topic and Route Management**:

  • Using ArcMC, manage Event Broker routes and topics for customization without direct access to the Cluster Manager.

  • Add a new custom topic for specific data retention requirements, such as 10+ year retention of authentication failures destined for a Hadoop cluster (a sketch of the topic creation follows below).
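For illustration, here is roughly what the custom-topic step amounts to, expressed with the stock Kafka CLI rather than the ArcMC Topic Management screen used in the demo (the partition and replication values are placeholders, the broker address is an assumption, and older Kafka builds may require --zookeeper instead of --bootstrap-server):

```bash
# Create the long-retention topic for authentication failures
kafka-topics.sh --create \
  --bootstrap-server investigate.arcsight.example.com:9092 \
  --topic all-auth-failures \
  --partitions 6 \
  --replication-factor 1
```

In the demonstration the same result is achieved from ArcMC, which then also defines the route that copies authentication-failure events from eb-cef into this topic.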

This summary captures the main steps and considerations for deploying and managing both connectors and event routing in an ArcSight setup, highlighting the importance of template utilization and efficient resource management within the system architecture.

Creating the custom topic in the Event Broker (EB) captures authentication failures from multiple sources in CEF (Common Event Format). The steps include naming the topic, specifying parameters such as the number of partitions and the replication factor, and adding a route that copies all authentication failure events from the source topic 'eb-cef' to the destination topic 'all-auth-failures'. The logic in this route is designed to catch authorization failures from any device or application. This highlights the scalable nature of EB for heavy-lifting tasks such as normalization, conversion to CEF format, and long-term archiving of authentication failure events with third-party solutions like Hadoop. It significantly reduces the load on connectors and, together with ArcMC (ArcSight Management Center), provides a centralized way to manage configurations for Logger, Connector Appliance, and SmartConnectors.

The use case as a whole centers on centralizing the management and monitoring of components such as Logger, Connector Appliance, and SmartConnectors using ArcMC. Key features highlighted include centralized monitoring, instant connector deployment, and efficient virtual machine snapshotting for demonstration purposes. The pre-captured application certificates in Appendix A (for example, the Event Broker 2.2 cluster CA certificate) simplify integration with the managed components. The ArcMC server certificate, exported from a web browser, is used to authenticate ArcMC to the Event Broker; it is a standard PEM block beginning with the usual '-----BEGIN CERTIFICATE-----' header.
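While the demo script exports these certificates from a web browser, the same PEM blocks can be captured from the command line; a sketch using openssl, assuming ArcMC answers on the default HTTPS port 443 (the Event Broker cluster manager was shown earlier on 5443):

```bash
# Fetch the ArcMC server certificate as a PEM block
openssl s_client -connect arcmc.arcsight.example.com:443 -showcerts </dev/null \
  | openssl x509 -outform PEM > arcmc-server-cert.pem

# Inspect issuer, serial number, and validity period
openssl x509 -in arcmc-server-cert.pem -noout -issuer -serial -dates
```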
