
ArcSight High Availability - VCS - UNIX

  • Writer: Pavan Raja
  • Apr 8, 2025
  • 13 min read

Summary:

The original technical note describes how to deploy ArcSight Web on two systems under Veritas Cluster Server (VCS) for high availability. The setup involves copying configuration files, editing agent scripts, and placing the ArcSight Web service under VCS control:

1. **Preparation**: Ensure the ArcSight Web service is not running on either system before proceeding.
2. **Configuration for VCS**: Copy `ArcSightWeb.cf` to `/etc/VRTSvcs/conf/config/`; edit the main configuration file to include it by adding `include "ArcSightWeb.cf"`; create a directory called `ArcSightWeb` under `/opt/VRTSvcs/bin/` and, inside it, set up symlinks for the `monitor`, `online`, and `offline` scripts, naming the agent link appropriately (e.g., `ArcSightWebAgent`).
3. **Editing scripts**: Modify the `online` and `offline` scripts to reflect the correct service-management paths for your platform (e.g., `/etc/init.d/arcsight_web` on RedHat systems).
4. **Bringing up VCS**: Once the configuration is in place, bring Veritas Cluster Server back online by running `hastart` on both systems.
5. **Creating the resource**: Create a new resource for ArcSight Web within the ArcSightDatabase resource group in the management console.
6. **Installing ArcSight Web on System A**: Assign the Virtual IP address to System A using VCS, then install ArcSight Web by running `runwebsetup` as the oracle user under `ArcSight_Home/bin`; complete the installation, selecting the option to install the web component as a service; as root, run `runAsRoot.sh` to finalize the service setup and configure it to run under the arcsight user.
7. **Switching services to System B**: Use the VCS console to switch services to System B and repeat the installation steps there.

The note also includes troubleshooting tips, such as checking the VCS logs after a failover and verifying that all ArcSight components are correctly managed by VCS. The original document was marked for internal use only and should be handled with discretion.

Details:

ArcSight ESM is a tool for real-time event monitoring, and it can be made highly available by using redundant hardware and third-party tools such as Veritas Cluster Server (VCS). This document explains how to deploy ArcSight ESM for high availability (HA), focusing on implementation with VCS. HA in this context means minimizing downtime caused by component failures; for ArcSight ESM this is crucial because downtime directly interrupts real-time event monitoring. The document outlines a basic failover configuration using third-party software and hardware. In a typical setup, two identical machines host an ArcSight ESM component (such as the Manager) and share a file system containing the ARCSIGHT_HOME folder, which allows seamless switching in case of failure. A Virtual IP address is used to switch between the active and standby components during failover. The process involves detecting a failed active component, shutting it down if needed, switching the Virtual IP to the standby machine, and starting the standby component. This method still incurs some downtime, which is acceptable for systems whose inputs are queued; environments without input queues experience minimal disruption.

ArcSight Enterprise Security Manager (ESM) mitigates latency and availability issues through discrete modules, automatic restarts, and cached event queues: if a component fails or the database is unavailable, operations can be suspended and resumed automatically without data loss. Specifically, ArcSight ESM is designed for high availability through several mechanisms:

1. Separate hardware or virtual machines for critical components like the Manager and SmartConnectors.
2. Automatic restart capabilities, to ensure continuous operation even after a system failure.
3. Event caching, allowing events to be stored temporarily until the system can resume normal operations.
4. Failover management (FM) solutions, such as EMC AutoStart or Veritas Cluster Server, for components like the Manager and SmartConnectors, to maintain availability during hardware failures or other disruptions.
5. An Active/Standby pair for the ArcSight Manager, running on two machines so that the system remains operational even if the primary equipment fails.

SmartConnectors can be set up to fail over just like the Manager, with the event cache and configuration information maintained on shared storage accessible to both the active and standby components. This allows the standby SmartConnector to resume processing after a failover by using the "last read ID" stored in shared storage. However, listener connectors such as Cisco PIX or syslog connectors are not good candidates for simple failover solutions, because events can be lost during switchover latency. SmartConnectors include checksum algorithms for log file detection, but these can be tripped up if logs are rotated during failover latency.

High-availability database: the ArcSight Database component has two layers, the DBMS and the storage layer.
For Oracle 9i and ArcSight ESM v3.5, the DBMS layer may use failover while the storage layer uses clustering. The largest failure point in this system is the file system underpinning the ArcSight Database, which requires a high-reliability RAID or SAN subsystem with tens of spindles. Specific recommendations include RAID level 1+0 and a proposed volume layout.

The rest of this document describes setting up ArcSight ESM database failover with Veritas Cluster Server (VCS) on RedHat AS4 for high availability. Any clustering system compatible with the Oracle DBMS can be used, since the storage mechanism is transparent to ArcSight ESM. The setup involves two mostly identical systems, System A and System B, each attached to a shared storage solution accessible from both machines. For test setups, NFS mounts on a third machine are adequate; for production environments, SCSI or Fibre Channel based solutions are recommended, because an NFS server is itself a single point of failure. The setup assumes three IP addresses: one assigned permanently to System A (the System A IP), one to System B (the System B IP), and a Virtual IP that moves between the systems during operation.

Preparation steps:

1. Create oracle user accounts on both systems.
2. Ensure these accounts have the same UIDs across both systems to avoid permission issues.
3. Install the required RPMs before proceeding with the database installation, as detailed in the ArcSight Installation Guide.

The VCS side of the setup is driven from Veritas Cluster Administrator (the VCS Java Console); see the VCS documentation for administration details. Installing and configuring the ArcSight Database on the two systems then breaks down as follows (steps 1-4 also appear as a CLI sketch after this list):

1. **Create a Service Group**: Name it something like "ArcSightDatabase".
2. **In the Service Group, create a NIC resource** for the network interface that the ArcSight Database will use to communicate with the Manager.
3. **In the Service Group, create an IP resource**, assigning it the Virtual IP (VIP).
4. **Create a dependency** so that the VIP depends on the NIC.
5. **On System A, using Veritas Cluster Administrator**, assign the Virtual IP to the system.
6. **As the root user on System A**, run the ArcSight Database installer.
7. **Install the ArcSight Database software on shared storage.**
8. **Proceed with the database installation**: install the Oracle 10g software and create an instance, ensuring all control files and redo logs are on shared storage.
9. **Configure the allowed TNS clients** to include the VIP and the IPs of both Manager systems.
10. **Complete the instance creation**, selecting the desired template and setting up tablespaces with data files on shared storage.
11. **Stop the database instance on System A**, then use the VCS console to move the VIP and shared storage to System B.
12. **On System B, as the root user**, change to the shared `$ArcSight_Home/bin` directory where the ArcSight Database software was installed.
13. **Run the database setup again**, selecting "Install Oracle 10gR2 Database Software".
14. **Copy `/etc/oratab` and `/etc/profile.d/oracle.sh` from System A to System B.**
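The first four steps can equally be done from the VCS command line. A minimal hedged sketch, assuming placeholder names: the interface `eth0`, the address, and the system names `systemA`/`systemB` are examples, not values from the original note.

```bash
haconf -makerw                                     # open the VCS configuration read-write
hagrp -add ArcSightDatabase                        # step 1: the service group
hagrp -modify ArcSightDatabase SystemList systemA 0 systemB 1
hagrp -modify ArcSightDatabase AutoStartList systemA
hares -add db_nic NIC ArcSightDatabase             # step 2: NIC resource
hares -modify db_nic Device eth0
hares -add db_vip IP ArcSightDatabase              # step 3: IP resource holding the VIP
hares -modify db_vip Device eth0
hares -modify db_vip Address "192.168.0.100"
hares -link db_vip db_nic                          # step 4: the VIP depends on the NIC
haconf -dump -makero                               # save and close the configuration
```

The same group/NIC/IP/dependency pattern recurs later for the ArcSightManager group.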
These steps leave the ArcSight Database installed in a high-availability configuration across the two systems, with shared storage providing data integrity and failover capability. To finish the Oracle failover setup across System A and System B, perform the following steps as the oracle user: 1. **Transfer Configuration Files:**

  • From System A, transfer the `/etc/oratab`, `/etc/profile.d/oracle.sh`, `sqlnet.ora`, `tnsnames.ora`, and `listener.ora` files to System B using `scp`.
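For example, a hedged sketch of the copies: the host name `systemb` is a placeholder, the three Oracle network files are assumed to live in the standard `$ORACLE_HOME/network/admin` location, and copying into `/etc` may require root.

```bash
# Run on System A as the oracle user (or root for the /etc files).
scp /etc/oratab systemb:/etc/oratab
scp /etc/profile.d/oracle.sh systemb:/etc/profile.d/oracle.sh
scp $ORACLE_HOME/network/admin/sqlnet.ora \
    $ORACLE_HOME/network/admin/tnsnames.ora \
    $ORACLE_HOME/network/admin/listener.ora \
    systemb:$ORACLE_HOME/network/admin/
```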

2. **Archive and Move Critical Directories:**

  • Archive the directories `$ORACLE_HOME/admin/arcsight` and `$ORACLE_HOME/dbs` on System A, move the archives to shared storage, extract them there, and create symbolic links from the original locations to the shared copies on both systems.

  • Run commands:

```bash
# Archive the two directories on System A (-cvf creates an uncompressed
# tar despite the .tar.gz names; kept as in the original).
cd $ORACLE_HOME
tar -cvf admin.tar.gz $ORACLE_HOME/admin/arcsight
tar -cvf dbs.tar.gz $ORACLE_HOME/dbs
# Change to the shared-storage mount point and extract the archives there.
# (The mount-point path appears to have been elided in the original; the
# bare "cd" and the leading "/" in the paths below stand in for it.)
cd
tar -xvf admin.tar.gz
tar -xvf dbs.tar.gz
# Replace the original directories with symlinks to the shared copies.
ln -s /admin/arcsight /$ORACLE_HOME/admin/arcsight
ln -s /dbs /$ORACLE_HOME/dbs
```

3. **Start and Stop Oracle Instances:**

  • Verify that Oracle and the listener can be started and stopped on both systems before proceeding. Run these commands as the oracle user:

```bash
# Start the database and the listener...
dbstart
lsnrctl start
# ...then shut both down again.
dbshut
lsnrctl stop
```

4. **Add ArcSight Database to Cluster:**

  • Install the new VCS (Veritas Cluster Server) resource types for Oracle and the listener.

  • Perform the following steps in sequence:

1. Ensure Oracle is not running on any system. 2. Confirm that the configuration in VCS is saved. 3. Add Oracle as a resource to the cluster, allowing VCS to control both the database and the listener.

With these steps complete, the ArcSight Database runs in a high-availability configuration across System A and System B, with Oracle failover managed by VCS.

The next section covers setting up Veritas Cluster Server on RedHat AS4 with ArcSight Manager. The process involves shutting down and reinstalling the necessary software, configuring VCS settings, creating resources in the ArcSightManager group, and establishing their dependencies. As before, it assumes two identical systems connected to a shared storage solution (NFS for test setups; SCSI or Fibre Channel for production) and three IP addresses: one for each system (System A and System B) and a Virtual IP (VIP) that moves between them. To set up the VIP for ArcSight Manager between the two systems:

1. **Create ArcSight User Accounts**: On both systems A and B, create user accounts named `arcsight` with matching UIDs.

2. **Set Environment Variable**: On each system, edit the `.profile`/`.bash_profile` of the `arcsight` user to set the `ARCSIGHT_CID` environment variable to that system's hostname.

3. **Service Group Configuration in Veritas Cluster Administrator (VCA)**:

  • Create a Service Group named ArcSightManager.

  • Add a NIC resource and an IP resource within this group, assigning the Virtual IP (VIP) address.

  • Set dependencies so that the IP depends on the NIC.

4. **Install ArcSight Manager**:

  • Assign the VIP to System A using VCA (see the sketch after this list).

  • On System A, as the `arcsight` user, install ArcSight Manager from the shared drive.

  • During installation, use the VIP's host name when prompted for the manager hostname.

  • Configure and start the manager as a service, but do not enable automatic startup (VCS will control startup).

  • Set the Cluster ID to the system's hostname that matches the `ARCSIGHT_CID` environment variable.

  • Finish the installation and ensure the manager starts correctly, mapping the VIP host name in the ArcSight Console.
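"Assigning" the VIP to a system, as in the first bullet above, amounts to onlining the service group there; moving it later is a group switch. A minimal hedged sketch, using the example group and system names from earlier:

```bash
# Online the group (and its VIP) on System A before installing the Manager.
hagrp -online ArcSightManager -sys systemA
# Later, move the group (and VIP) to System B for the second install pass.
hagrp -switch ArcSightManager -to systemB
```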

5. **Shutdown and Reconfigure**: Shut down the ArcSight Manager and reassign the VIP to System B using VCA, then repeat the install process on System B. This setup ensures that if one system fails, the ArcSight Manager can be seamlessly moved to the other system without requiring changes to configuration files or IP addresses.

Next, configure and add the ArcSight Manager to the cluster so that VCS (Veritas Cluster Server) can fail it over:

1. **Ensure ArcSight Manager is not running** on any of the systems in the cluster.

2. **Close and save any open configuration** in VCS.

3. **Shut down VCS** by issuing the command `hastop -all` as root on either of the machines where it is installed.

4. On both machines in the cluster: a. Copy `ArcSightManager.cf` from `$ARCSIGHT_HOME/utilities/failover/linux/` to `/etc/VRTSvcs/conf/config/`. b. Edit `/etc/VRTSvcs/conf/config/main.cf` and add the line `include "ArcSightManager.cf"` after the last include statement at the beginning of the file. c. Create a directory called `ArcSightManager` under `/opt/VRTSvcs/bin/`.

5. **Configure VCS to fail over the ArcSight Manager** using the scripts referenced by the copied `ArcSightManager.cf` file; these scripts manage the startup, shutdown, and monitoring of the ArcSight Manager.

6. Restart VCS using the standard procedures for your operating system. Once VCS is running again, it controls the ArcSight Manager across both machines in the cluster, with failover when needed.

Populating the agent directory under `/opt/VRTSvcs/bin/` breaks down as follows (see the sketch after this list): 1. Change into `/opt/VRTSvcs/bin/`. 2. Create a directory named `ArcSightManager`. 3. Change into the `ArcSightManager` directory. 4. Within that directory, create a symbolic link named `ArcSightManagerAgent` pointing to `/opt/VRTSvcs/bin/ScriptAgent`. 5. Create symlinks within the same directory to the following files:

  • `$ARCSIGHT_HOME/utilities/failover/VCS/linux/online`

  • `$ARCSIGHT_HOME/utilities/failover/VCS/linux/offline`

  • `$ARCSIGHT_HOME/utilities/failover/VCS/linux/monitor`
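As shell commands, the sequence above is (a hedged recap; `$ARCSIGHT_HOME` is assumed to resolve to the shared install):

```bash
cd /opt/VRTSvcs/bin/
mkdir ArcSightManager
cd ArcSightManager
# The agent executable is a symlink to the generic VCS ScriptAgent.
ln -s /opt/VRTSvcs/bin/ScriptAgent ArcSightManagerAgent
# Agent entry points, linked from the failover utilities listed above.
ln -s $ARCSIGHT_HOME/utilities/failover/VCS/linux/online online
ln -s $ARCSIGHT_HOME/utilities/failover/VCS/linux/offline offline
ln -s $ARCSIGHT_HOME/utilities/failover/VCS/linux/monitor monitor
```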

6. Edit the `online`, `offline`, and `monitor` scripts to adjust paths as necessary; on Solaris in particular, the correct path to the `arcsight_manager` script is `/etc/init.d/arcsight_manager`.

7. On Linux, the monitor script is:

```bash
#!/bin/sh
# Load the Manager's environment, then ask it whether it is up.
. /etc/arcsight/arcsight_manager.conf
$ARCSIGHT_HOME/bin/arcsight managerup
# VCS ScriptAgent convention: exit 110 = resource online, 100 = offline.
if [ "$?" = "0" ]; then
    exit 110
else
    exit 100
fi
```

8. Test the monitor script by running it as root to verify proper functionality:

  • Successful run on a running ArcSight Manager should show `XML RPC response received` and `Heartbeat received`.

  • Failure (no heartbeat) should display `XML RPC response received` but no heartbeat.
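The exit status can be checked directly; a hedged sketch, using the agent directory created above:

```bash
# Run as root; the exit codes follow the VCS convention in the script above.
/opt/VRTSvcs/bin/ArcSightManager/monitor
echo $?   # expect 110 while the Manager is running, 100 otherwise
```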

9. Bring VCS back up by running `hastart` on both systems after testing.

10. In the ArcSightManager resource group, create a new resource of type `ArcSightManager`.

11. Set the MonitorTimeout property to 120 seconds: `hatype -modify ArcSightManager MonitorTimeout 120`.

The same pattern extends to the ArcSight Partition Archiver. The following steps install and configure the ArcSight Partition Archiver on the two systems so that VCS can manage it as a failover resource, covering agent setup on each system, the necessary VCS configuration, and failover capability.

**Installation Steps:**

1. **Assign Virtual IP Address**: Use Veritas Cluster Administrator to assign the virtual IP address to System A.

2. **Install ArcSight Agent**: On System A, as the oracle user under `ArcSight_Home/bin`, run:

```bash
./arcsight agentsetup -w -i console
```

Provide the host name that resolves to the Virtual IP address when prompted.

3. **Install Service**: As the root user, run:

```bash
./arcsight agentsvc -i -u oracle
```

4. **Repeat on System B**: Using the VCS console, switch services to System B and repeat the installation steps.

5. **Start Partition Archiver**: Manually start the Partition Archiver on both systems and check the logs for errors.

6. **Shut Down ArcSight**: Properly shut down the ArcSight Partition Archiver service.

**Adding ArcSight Partition Archiver to VCS:**

1. Ensure no instances of the ArcSight Partition Archiver are running.

2. Close any open configurations in VCS.

3. Shut down VCS on either system using:

```bash
hastop -all -force
```

4. On both machines: a. Copy `ArcSightPartitionArchiver.cf` to `/etc/VRTSvcs/conf/config/`. b. Edit `/etc/VRTSvcs/conf/config/main.cf`, adding the line `include "ArcSightPartitionArchiver.cf"`. c. Create a directory `ArcSightPartitionArchiver` under `/opt/VRTSvcs/bin/`.

This setup lets the ArcSight Partition Archiver be managed as a VCS resource, providing high availability and failover. The agent directory setup and the ArcSight Web failover then proceed as follows:

1. **Create Directory and Symlink**: Navigate to `/opt/VRTSvcs/bin/` and create a directory named `ArcSightPartitionArchiver`. Inside this directory, create a symbolic link named `ArcSightPartitionArchiverAgent` pointing to `/opt/VRTSvcs/bin/ScriptAgent`.

2. **Copy Files**: Copy the `monitor`, `online`, and `offline` files from the attached zip file (presumably containing the DB-Archiver_VCS scripts) into the `ArcSightPartitionArchiver` directory.

3. **Edit Scripts**: Edit the `online` and `offline` scripts; on RedHat, the correct path to the `arc_oraclepartitionarchiver_db` script is `/etc/init.d/arc_oraclepartitionarchiver_db`.

4. **Run Monitor Script**: Run the `monitor` script as root and check the return value with `echo $?`. If the archiver is running, it should return 110; if not, 100.

5. **Bring VCS Back Up**: On both systems, run `hastart` to bring Veritas Cluster Server back up.

6. **Create Resource in ArcSightDatabase**: In the ArcSightDatabase resource group, create a resource of type ArcSightPartitionArchiver.
7. **Install ArcSight Web on System A**:

  • Assign the Virtual IP address to System A using Veritas Cluster Administrator.

  • As the oracle user under `ArcSight_Home/bin`, run the `runwebsetup` script to set up ArcSight Web.

  • Follow the setup as normal, selecting the option to install ArcSight Web as a service.

  • As the root user, run the `runAsRoot.sh` script to install the service and configure it to run as the arcsight user (a hedged command recap follows this list).
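A hedged recap of these two commands; the location of `runAsRoot.sh` is assumed to be the same `bin` directory, since the original note only names the scripts:

```bash
# As the oracle user, from the shared install:
cd $ARCSIGHT_HOME/bin
./runwebsetup      # follow the prompts; choose to install ArcSight Web as a service
# Then, as the root user:
./runAsRoot.sh     # installs the service and configures it to run as arcsight
```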

8. **Switch Services to System B**: Using the VCS console, switch the services to System B and repeat steps 3-5 to install the service scripts on that system. This leaves ArcSight Web set up in a high-availability configuration under Veritas Cluster Server across the two systems.

To add ArcSight Web to the cluster so that VCS (Veritas Cluster Server) can control and manage it as a failover resource:

1. **Ensure ArcSight Web is not running** on any system.

2. **Save and close the configuration in VCS.**

3. **Shut down VCS** by issuing the command `hastop -all -force` as root on either system.

4. **Perform these steps on both machines**:

  • Copy `ArcSightWeb.cf` to `/etc/VRTSvcs/conf/config/`.

  • Edit `/etc/VRTSvcs/conf/config/main.cf` to include the line: `include "ArcSightWeb.cf"`.

  • Under `/opt/VRTSvcs/bin/`, create a directory called `ArcSightWeb` and navigate into it.

  • Create the agent symlink (e.g., `ArcSightWebAgent` pointing to `/opt/VRTSvcs/bin/ScriptAgent`) and symlinks for the `monitor`, `online`, and `offline` scripts, as in the earlier sections.

  • Edit the `online` and `offline` scripts, updating paths as necessary (e.g., `/etc/init.d/arcsight_web` for RedHat systems).

5. **Bring VCS back up** by running `hastart` on both systems.

6. In the ArcSightDatabase resource group, create a new resource for ArcSight Web. This configures the scripts and settings that allow VCS to manage ArcSight Web's startup, shutdown, and monitoring within the cluster environment.

**Testing the configuration**: Test the ArcSight resources with Veritas Cluster Administrator by starting and stopping the ArcSight Service Groups on the various hosts, rebooting or power-cycling systems to confirm the Oracle components start automatically, and inspecting the VCS logs for issues. Common problems during setup include VCS not detecting that ArcSight Manager is running, VCS complaining after a failover when moving the service group back to the original system, and the Manager not starting or shutting down. Solutions involve checking the VCS logs, adding lines to the monitor scripts to log their output, manually running the online/offline scripts, and consulting the VCS logs for further assistance.

Related keywords: Manager, Database, Oracle, High Availability, Failover, EMC AutoStart, Legato, Veritas, Oracle RAC, OCFS.

The original document notes ArcSight's copyrights, trademarks, and acknowledgments, with details available at http://www.arcsight.com/copyrightnotice. The network information in the examples is fictional and used only for illustrative purposes. The document is marked ArcSight confidential, implying that its contents should be handled with discretion.
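The testing steps above can also be driven from the command line; a minimal hedged sketch, where the group and system names are the examples used earlier and the engine log path is the VCS default:

```bash
hastatus -sum                               # overall cluster, system, and group state
hagrp -switch ArcSightManager -to systemB   # force a failover to the standby system
tail -f /var/VRTSvcs/log/engine_A.log       # watch the VCS engine log for errors
```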

Disclaimer:
The content in this post is for informational and educational purposes only. It may reference technologies, configurations, or products that are outdated or no longer supported. If you have any comments or feedback, kindly leave a message and it will be responded to.

