ArcSight HA New 1
- Pavan Raja

- Apr 8, 2025
- 11 min read
Summary:
To address potential issues with the ArcSight Manager not starting or shutting down properly when using Veritas Cluster Server (VCS), follow these steps to manually run the online/offline scripts and check their outputs for success:
1. **Ensure VCS is Configured Correctly**: Verify that VCS is set up correctly, including all necessary configurations and scripts as outlined in the initial setup instructions provided earlier.
2. **Manual Execution of Online/Offline Scripts**: As a Cluster Administrator, navigate to `/opt/VRTSvcs/bin/ArcSightManager` on both systems where the ArcSight Manager is installed, then run the `online` script (`/opt/VRTSvcs/bin/ArcSightManager/online`) followed by the `offline` script (`/opt/VRTSvcs/bin/ArcSightManager/offline`).
3. **Check Output for Success**: After executing these scripts, carefully review the output to ensure they have completed successfully without errors. Look for any messages or logs that indicate a failure or issues during the process.
4. **Review VCS Logs**: If the manual execution does not resolve the issue, refer to the VCS logs for more detailed error messages. In a default installation the VCS engine log is typically `/var/VRTSvcs/log/engine_A.log`. Use commands like `cat` or `tail` to review it for error messages or warnings that could provide clues about the issue (a consolidated sketch of steps 2-4 appears below this list).
5. **Contact Support**: If you encounter persistent issues after following these steps, consider reaching out to ArcSight support, or to Veritas for VCS-specific problems. Provide detailed information from both the VCS logs and the ArcSight Manager output captured during the manual execution attempts.
6. **Update Documentation and Training**: Ensure that all Cluster Administrators are well-versed in the setup and troubleshooting procedures documented here, including how to manually run online/offline scripts and interpret log outputs. Regular training sessions can help maintain this expertise within your team.
By following these steps, you should be able to diagnose and resolve issues with the ArcSight Manager's startup and shutdown processes when using VCS on Linux systems.
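As a concrete illustration of steps 2 through 4, here is a minimal sketch, assuming a default VCS installation whose engine log is `/var/VRTSvcs/log/engine_A.log`:

```bash
# Run as the Cluster Administrator on each system where the Manager is installed.
cd /opt/VRTSvcs/bin/ArcSightManager
./online                                    # should bring the Manager up; watch for errors
./offline                                   # should shut the Manager down cleanly
tail -n 100 /var/VRTSvcs/log/engine_A.log   # review recent VCS engine messages
```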
Details:
The ArcSight Technical Note "Deploying ArcSight™ ESM for High Availability" outlines strategies to enhance the availability of ArcSight Enterprise Security Manager (ESM) by utilizing redundant hardware and third-party tools such as EMC AutoStart and Veritas. This document applies to versions 3.5 and later of ArcSight ESM. The primary focus is on implementing high availability, which involves configuring redundant components to minimize downtime due to component failures.
Key elements discussed include:
1. **High Availability (HA)**: Refers to the practice of using redundant hardware and software to ensure minimal interruption in service despite potential system failures. For ArcSight ESM used for real-time event monitoring, HA is crucial as it directly impacts operational efficiency.
2. **Failover Configuration**: Allows third-party software to detect failure of any ArcSight ESM component (like the Manager) through low-latency interconnects and switch Virtual IP addresses between active and standby machines. Shared storage ensures continuity by allowing the standby machine to continue from where the active one left off.
3. **Steps for Failover**: Include detecting a failed active component, shutting it down if necessary, switching the Virtual IP to the standby machine, and starting the standby component. This process is designed to reduce downtime, but brief interruptions are still possible; inputs that are not queued during the transition can be lost, so the failover must complete promptly. (A hedged sketch of this sequence follows the list.)
4. **Downtime Reduction**: Although failover configuration significantly minimizes system unavailability, it does not completely eliminate downtime during the transition period between active and standby components. The note highlights that this issue can be mitigated to some extent by queueing inputs, but more advanced solutions might be necessary for mission-critical applications where minimal interruption is essential.
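To make the failover sequence concrete, here is a hedged sketch of the manual equivalent of what the FM software automates, using Linux-style commands; the host names, interface name, Virtual IP address, and init-script name are all illustrative assumptions, not values from the note:

```bash
# Hypothetical manual failover, mirroring the four steps above.
ssh active-node '/etc/init.d/arcsight_manager stop'        # 1-2: shut down the failed Manager
ssh active-node 'ip addr del 192.168.10.100/24 dev eth0'   # 3: release the Virtual IP...
ssh standby-node 'ip addr add 192.168.10.100/24 dev eth0'  # ...and claim it on the standby
ssh standby-node 'arping -c 3 -U -I eth0 192.168.10.100'   # announce the move to ARP caches
ssh standby-node '/etc/init.d/arcsight_manager start'      # 4: start the standby component
```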
In summary, deploying ArcSight ESM for high availability involves strategic use of redundant hardware and third-party tools to ensure continued operation in the face of component failures, with detailed steps provided on how to implement failover configurations to minimize downtime.
High Availability ArcSight ESM (Enterprise Security Manager) is designed to ensure system reliability through the use of discrete components, automatic restarts, and cached event queues. This architecture allows for seamless operation even if certain components fail or need maintenance, without losing security events. Not all ArcSight ESM components require high availability; ancillary systems like ArcSight Web, Console, and specific SmartConnectors do not necessarily need to be treated as High Availability since they are not central to data collection.
For the ArcSight Manager specifically, it can run on two machines with an Active Manager and a Standby Manager managed by Failover Management (FM) solutions such as EMC AutoStart or Veritas Cluster Server. These FM solutions direct communication from Consoles, Web interfaces, and SmartConnectors to the active manager, ensuring continuous operation even if one fails.
Similarly, many SmartConnectors can also leverage FM solutions for failover, allowing them to switch to another functional unit in case of failure without disrupting the system or losing data. This architecture is enabled through technologies like EMC AutoStart (previously Legato Automated Availability Manager) and Veritas Cluster Server that manage communication between components.
In summary, High-Availability ArcSight ESM relies on multiple strategies such as discrete component use, automatic restarts, cached event queues, and FM solutions to maintain system availability without compromising security event collection or management capabilities.
This text discusses high availability features in ArcSight Enterprise Security Manager (ESM), which are designed to ensure continuous operation despite hardware or software failures. The failover mechanisms include solutions like EMC AutoStart and Veritas Cluster Server. Individual SmartConnectors can be configured to switch over to another manager if the primary one is down, but this feature may not apply to all connectors (for example, connectors that act as listeners).
For databases, there are two layers: the DBMS (Database Management System) and the storage layer. Each can use a different high availability technique: failover for the DBMS and clustering for the storage layer. ArcSight ships with an embedded Oracle DBMS that does not support clustering, but it has been tested with non-embedded Oracle versions that do, such as Oracle RAC 9i and later.
The largest single failure point in the ArcSight ESM system is the file system underlying the database component. To address this, a high-reliability RAID or SAN subsystem with many spindles, striping, and redundancy is recommended for production systems. Specifically, RAID level 1+0 and a proposed volume layout are suggested by the ArcSight Installation and Configuration Guide for storage recommendations.
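As a small illustration of the RAID 1+0 idea, here is a hedged software-RAID sketch using Linux `mdadm`; a production deployment would normally use the hardware RAID or SAN layout from the Installation and Configuration Guide, and the device names and mount point here are assumptions:

```bash
# Stripe over mirrors (RAID 1+0) across four example disks, then put the
# database file system on the resulting volume.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs -t ext3 /dev/md0
mount /dev/md0 /opt/arcsight/db   # hypothetical mount point for the database files
```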
The article discusses setting up automatic failover for the ArcSight Manager using EMC AutoStart (formerly Legato Automated Availability Manager), alongside mentions of other clustering options such as Oracle Cluster File System (OCFS) and Veritas. Specifically, it focuses on EMC AutoStart 5.2 on Solaris 10. The setup involves verifying hardware requirements, installing AutoStart on both systems, configuring basic settings, and setting up a resource group for the ArcSight Manager. To ensure optimal performance in a failover scenario, the two systems running ArcSight Manager should be identical, each with its own IP address for management communication, with an EMC AutoStart instance on each system tracking components and state.
The text then discusses the need for a third, virtual IP address for failover purposes, which AutoStart moves between systems to make failover transparent. Each system requires at least three (preferably four) network interfaces: management, heartbeat, and database interfaces. The management interface is connected to the public network so that ArcSight SmartConnectors and Consoles can communicate with the ArcSight Manager.

Heartbeat interfaces are used by AutoStart for communication between the systems, with redundancy to prevent "split brain syndrome." This condition occurs when the two AutoStart instances cannot communicate, causing each to conclude the other is down and attempt to run the ArcSight Manager, leading to IP conflicts and potential data corruption. To avoid this, the heartbeat networks use crossover cables linking the systems directly, creating separate networks with non-overlapping sub-networks assigned to each interface. The fourth network card is recommended for the database interface.
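A minimal sketch of the non-overlapping heartbeat sub-networks, assuming Solaris-style `hme` interfaces and arbitrary RFC 1918 addresses (both are assumptions, not values from the note):

```bash
# On System A: two crossover-cabled heartbeat links on separate sub-networks.
ifconfig hme1 plumb
ifconfig hme1 192.168.101.1 netmask 255.255.255.0 up
ifconfig hme2 plumb
ifconfig hme2 192.168.102.1 netmask 255.255.255.0 up
# On System B, the matching ends of the same links use 192.168.101.2 and 192.168.102.2.
```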
This text discusses improving system security and reducing performance bottlenecks by having a separate link to the database system. It mentions that using shared storage such as a slice of a SAN with RAID-1 configuration is recommended for better reliability, as it minimizes single points of failure. The preferred disk size is 9GB or larger to accommodate multiple versions of ArcSight Manager.
The Shared Storage can be either a Storage Area Network (SAN) or a share on a different network interface. However, using a public network to connect Shared Storage is not recommended because FM software might struggle to recover from communication failures with the storage.
To set up ArcSight Manager, follow the instructions in the ArcSight ESM Installation and Configuration Guide. Set the target path in the installer to a directory on the shared drive, so that both systems can run the same installation. Run the installer as a non-root user (creating an "arcsight" user account is recommended) and ensure that this user has write privileges on the directory. Also, make sure the user's UID matches across both systems.
Finally, configure the startup scripts by selecting the option in the Manager setup wizard to install them without enabling them immediately.
With this option, the installer does not create the runlevel symlinks for the init scripts, so the ArcSight Manager is not started automatically at boot; the scripts remain available to be called by the FM software. The setup also requires a unique $ARCSIGHT_CID value (using the host name is suggested) in each system's profile and in managersetup.
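For example, a hedged sketch of setting a per-system $ARCSIGHT_CID in the arcsight user's shell profile, using the local host name as suggested (the profile path is an assumption):

```bash
# Run once on each system; each node uses its own host name as its cluster ID.
echo "ARCSIGHT_CID=$(hostname); export ARCSIGHT_CID" >> ~arcsight/.profile
```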
For high availability deployment of ArcSight ESM, follow these steps:
1. Install AutoStart on each system as per EMC AutoStart documentation, initially setting one node as primary and the other as slave. Once connected, promote the slave to primary.
2. Configure heartbeat networks (Domain Paths) according to AutoStart documentation to ensure proper communication between nodes.
3. Set up a state monitor for ArcSight Manager to detect its running status on each system. This involves configuring test intervals and wait times based on system performance.
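The note does not reproduce the state-monitor script itself; as a rough sketch of what such a monitor might check, assuming the Manager runs as a process owned by the arcsight user and matching the pattern below (both assumptions):

```bash
#!/bin/sh
# Hypothetical AutoStart state monitor: exit 0 if the ArcSight Manager
# process is running, non-zero otherwise.
if pgrep -u arcsight -f "arcsight.*manager" > /dev/null 2>&1; then
    exit 0   # Manager is up
else
    exit 1   # Manager is down
fi
```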
This document outlines how to set up a high-availability (HA) configuration for ArcSight Enterprise Security Manager (ESM). The setup involves creating a virtual IP address and resource group, configuring network path testing, setting up the ArcSight Manager process, and verifying failover functionality.
Step 1: Create a virtual IP address that will be used to fail over between two systems. This is crucial for maintaining service continuity by automatically switching the IP address assigned to the ArcSight Manager service from one system to another in case of failure.
Step 2: Create a Resource Group named "ArcSightManager." This group manages all configurations and resources needed for the manager, including network path testing between systems.
Step 3: Perform network path testing by pinging several hosts that should be reachable to determine if the network interface is down. The chosen hosts are expected to have high availability so that they are unlikely to be unavailable simultaneously.
Step 4: Set up a default configuration for the ArcSight Manager process and specify its directory as $ARCSIGHT_HOME. Configure the existence monitor to use the state monitor created earlier, then add start and stop scripts from $ARCSIGHT_HOME/utilities/failover/legato/.
Step 5: Verify the setup by displaying the resource group on one of the systems (the example systems are named "ping" for the primary and "pong" for the secondary). To test failover, move the service group to the other machine; if this succeeds, the system can switch automatically in case of a failure without manual intervention.
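One hedged way to sanity-check the result from a Console or SmartConnector host, assuming a host name `arcsight-vip` that resolves to the Virtual IP (the name is illustrative):

```bash
ping arcsight-vip             # the Virtual IP should answer wherever the group runs
ssh pong 'pgrep -f arcsight'  # after moving the group, the Manager processes should be on pong
```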
This passage discusses setting up a failover configuration for ArcSight Manager using Veritas Cluster Server (VCS) 3.5 on Solaris 9. The goal is to ensure high availability by moving the virtual IP address between two systems, System A and System B, while also managing network interface failures.
The setup involves:
1. Ensuring that both systems have identical arcsight user accounts with matching UIDs to avoid permission issues (a sketch follows this list).
2. Configuring NFS mount or SCSI/Fiber Channel-based shared storage between the two systems as backend storage to ensure data accessibility during failover.
3. Assigning three IP addresses: one permanently assigned to each system and a virtual IP that will be moved between them for ArcSight Manager service.
4. Preparing by creating Arcsight user accounts, ensuring they have the same UIDs on both systems, and configuring their environment settings (.profile/.bash_profile).
5. Creating necessary shared storage configurations as outlined in the section "Setting up Failover using Veritas."
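A minimal sketch of step 1, creating matching arcsight accounts on both systems; the UID/GID value 1500 and the shell are arbitrary assumptions:

```bash
# Run identically on System A and System B so the UIDs match across the cluster.
groupadd -g 1500 arcsight
useradd -u 1500 -g arcsight -m -s /bin/bash arcsight
```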
In summary, this document outlines a detailed procedure for setting up a high availability configuration of ArcSight Manager using VCS 3.5 on Solaris 9, focusing on failover mechanisms and network interface management to ensure uninterrupted service operation even during system or network failures.
To install and configure ArcSight Manager on two systems (System A and System B) using Veritas Cluster Administrator, follow these steps:
### 1. Create a Service Group
Name it "ArcSightManager".
Within this group, create a NIC resource for the network interface used by ArcSight Manager.
Also within this group, create an IP resource to assign the Virtual IP address.
Set up a dependency so that the IP address is dependent on the NIC.
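The same configuration can also be expressed with the VCS command line instead of the Cluster Administrator GUI; a hedged sketch, with the resource names, interface, and address as illustrative assumptions:

```bash
haconf -makerw                                       # open the cluster configuration for writing
hagrp -add ArcSightManager
hagrp -modify ArcSightManager SystemList SystemA 0 SystemB 1
hares -add ArcSight_NIC NIC ArcSightManager          # NIC resource for the Manager's interface
hares -modify ArcSight_NIC Device eth0
hares -add ArcSight_IP IP ArcSightManager            # IP resource holding the Virtual IP
hares -modify ArcSight_IP Device eth0
hares -modify ArcSight_IP Address 192.168.10.100
hares -link ArcSight_IP ArcSight_NIC                 # the IP depends on the NIC
haconf -dump -makero                                 # save and close the configuration
```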
### 2. Install ArcSight Manager on System A
1. **Assign Virtual IP to System A**: Use Veritas Cluster Administrator to set the Virtual IP to System A.
2. **Run the Installer**:
As the Arcsight user, run the ArcSight Manager installer on System A.
3. **Install on Shared Drive**: Install the ArcSight Manager to the shared drive.
4. **Provide Host Name**: When prompted for the Manager host name, enter the hostname that resolves to the Virtual IP address (see the sketch after these steps).
5. **Install Configuration**:
Select "Install scripts, but do not automatically start ArcSight Manager".
For Cluster ID, enter the hostname of System A (which should match the ARCSIGHT_CID set on this system).
6. **Finish Installation**: Complete the installation and ensure the Manager starts up correctly. Log in through the ArcSight Console using the host name mapping to the Virtual IP as the Manager host name.
7. **Shut Down Manager**: Once logged in, shut down the ArcSight Manager.
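A hedged sketch of steps 1 and 4 above; the resource name follows the service-group sketch earlier, and the host name and address are illustrative:

```bash
hares -online ArcSight_IP -sys SystemA                 # assign the Virtual IP to System A
echo "192.168.10.100  arcsight-manager" >> /etc/hosts  # on both systems, or publish in DNS
```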
### 3. Install ArcSight Manager on System B
1. **Assign Virtual IP to System B**: Use Veritas Cluster Administrator to set the Virtual IP to System B.
2. **Run Command**: Navigate to `$ARCSIGHT_HOME/bin` on the shared drive and run `arcsight managersetup`.
3. **Continue Through Dialogs**: Follow all dialogs, keeping default settings.
4. **Service Configuration**:
When prompted whether to run the Manager as a service, select "Install scripts, but do not automatically start ArcSight Manager".
5. **Cluster ID Input**: For Cluster ID, enter the hostname of System B (matching ARCSIGHT_CID).
6. **Complete Installation**: Finish the installation and ensure it starts correctly.
This process ensures that both systems can share the same Virtual IP address for seamless communication, with each system having its own unique host name resolved to this IP.
To add ArcSight Manager to a cluster managed by VCS (Veritas Cluster Server), follow these steps:
1. Ensure that the ArcSight Manager is not running on any of the systems.
2. Make sure the configuration in VCS is saved and closed.
3. As root, shut down VCS with the command `hastop -all` on either of the machines.
4. On both machines, perform the following steps:
Copy `ArcSightManager.cf` from `$ARCSIGHT_HOME/utilities/failover/linux/` to `/etc/VRTSvcs/conf/config/`.
Edit `/etc/VRTSvcs/conf/config/main.cf` and include the following line after the last `include` line at the head of the file: `include "ArcSightManager.cf"`.
Under `/opt/VRTSvcs/bin/`, create a directory called `ArcSightManager`.
Navigate to this new directory (`cd /opt/VRTSvcs/bin/ArcSightManager`) and perform the following actions:
Create symlinks for the following files (a consolidated sketch follows these steps):
`$ARCSIGHT_HOME/utilities/failover/VCS/linux/clean` as `clean`
`$ARCSIGHT_HOME/utilities/failover/VCS/linux/online` as `online`
`$ARCSIGHT_HOME/utilities/failover/VCS/linux/offline` as `offline` (the original text is truncated at this entry; `offline` matches the scripts referenced elsewhere in this note)
5. Start VCS with the command `hastart`.
6. Add ArcSight Manager as a resource to VCS by following the steps outlined for adding resources, ensuring you select the new `ArcSightManager` type created earlier.
7. Configure VCS to fail over the ArcSight Manager according to the specific instructions provided in your VCS documentation or within the scripts and configuration files already set up for this purpose.
8. Start the ArcSight Manager on either machine by bringing the service group online (for example, `hagrp -online ArcSightManager -sys <system>`). The system should now be reachable through the Virtual IP address assigned to the correct host.
9. Finish the Manager installation by logging into the ArcSight Console and mapping the host name to the Virtual IP as instructed earlier.
10. Once configured correctly, shut down the ArcSight Manager by taking the service group offline on whichever machine is running it (for example, `hagrp -offline ArcSightManager -sys <system>`).
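A consolidated sketch of the symlink step (4) and the start/stop steps (8 and 10); `$ARCSIGHT_HOME` must point at the shared-drive installation, and the group and system names are illustrative:

```bash
# Step 4: expose the failover scripts to VCS under the ArcSightManager type directory.
mkdir -p /opt/VRTSvcs/bin/ArcSightManager
cd /opt/VRTSvcs/bin/ArcSightManager
ln -s $ARCSIGHT_HOME/utilities/failover/VCS/linux/clean   clean
ln -s $ARCSIGHT_HOME/utilities/failover/VCS/linux/online  online
ln -s $ARCSIGHT_HOME/utilities/failover/VCS/linux/offline offline

# Steps 8 and 10: bring the service group online, and later offline, on a chosen system.
hagrp -online  ArcSightManager -sys SystemA
hagrp -offline ArcSightManager -sys SystemA
```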
This document outlines steps for setting up ArcSight Manager with high availability using Veritas Cluster Server (VCS) on Linux systems. The setup involves editing and running scripts to monitor the ArcSight Manager status, configuring resource properties in VCS, and testing the configuration. Key commands and procedures are provided for both initial setup and troubleshooting common issues during deployment.
Original System: Veritas Cluster Server (VCS) automatically disables a service group on a host after that host faults, without giving operators the chance to fix issues first. Once the issues are resolved, the group must be manually auto-enabled on that host by a Veritas Cluster Administrator before it can resume running there.
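A hedged sketch of that manual re-enable step with the VCS `hagrp` command; the group and system names are illustrative:

```bash
# After repairing the faulted host, clear the fault and let the group run there again.
hagrp -clear ArcSightManager -sys SystemA        # clear any FAULTED state for the group
hagrp -autoenable ArcSightManager -sys SystemA   # lift the auto-disable on that system
```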
Problem: ArcSight Manager does not start or shut down properly.
Solution: Run the online/offline scripts manually and check their output for success. If these steps do not resolve the problem, review the VCS logs for further detail.
Additional Information: Keywords such as Manager, Database, Oracle, High Availability, Failover, EMC AutoStart, Legato, Veritas, Oracle RAC, OCFS are relevant to the context of managing a system with these components. The document is copyrighted by ArcSight, Inc., and it emphasizes that any network information included in the document (like IP addresses and hostnames) is for illustrative purposes only.
