
ArcSight FlexConnector Training V2 by Accenture

  • Writer: Pavan Raja
  • Apr 8, 2025
  • 19 min read

Summary:

The provided text is a documentation draft for setting up and configuring a REST FlexConnector in ArcSight, covering application registration, properties files, API endpoints, JSON parsing, and connector configuration. It also mentions using regular expressions (regex) for testing against sample logs. A structured breakdown of the document's content:

### 1. Register Your Connector Application

- Register your application with vendors such as Box, Salesforce, and Google Apps to obtain OAuth2 registration details.
- These details are used to create an OAuth2 client properties file.

### 2. Create the OAuth2 Client Properties File

- Use the values obtained during vendor registration to create the properties file.
- Include a `redirect_uri` in the properties file.

### 3. Determine the Events URL (REST API Endpoint)

- Review the vendor's REST API endpoint documentation (Box, Salesforce, Google Apps).
- Learn how to query events by timestamp and note any rate-limiting specifics.

### 4. Create a JSON Parser File

- Define the structure of the JSON data returned by the events URL.
- Create a parser file so the raw JSON data is parsed correctly.

### 5. REST FlexConnector Configuration Support Tool (restutil)

- Install `restutil`, providing the parser file name and path and specifying the events URL.
- Link the OAuth2 client properties file to your registered application.

### 6. Flex Active List Import

- Create a FlexConnector that reads active-list data, initially defining tokens without mapping them directly to fields.
- Map tokens to additional data as needed, and use Velocity macro files to convert timestamps into readable strings.

### Example Configuration for Flex Active List Import

- Define the comment delimiter and token count.
- Specify token names and types (e.g., IP as String).
- Enable additional-data fields such as IP_ADDRESS and CREATE_DATE.

### Additional Information and Future Enhancements

- Planned additions include advanced regex usage, a site for online testing and generation of flex parsers, Q&A sessions, and revisions based on user feedback.

### Conclusion

- The document closes by thanking the author and noting their copyright from 2013.

This documentation is aimed at users who are new to building FlexConnectors in ArcSight or to working with REST APIs and JSON parsing. It is a practical, step-by-step guide with planned enhancements and room for user feedback.

Details:

The document "ARCSIGHT FLEXCONNECTOR TRAINING LEVEL 02" is designed for those looking to gain a comprehensive understanding of FlexConnectors in the context of ArcSight, an enterprise security information management platform. This training manual outlines basic and advanced FlexConnector concepts, the connector types, configuration file structure, and implementation details. Topics such as "Introduction to ArcSight FlexConnector," "Basic Flex Concepts," "Little Advanced Concepts of Flex," and specific configurations for different log formats (syslog, database logs, and so on) are covered.

The manual explains how to plan and implement a FlexConnector, including writing the regex declaration and configuring the FlexAgentWizard/Regex Wizard. It covers token declarations, event mapping, and severity mapping, and discusses the various FlexConnector types: Log File, Regex, Syslog, Time-based Database, ID-based Database, SNMP, XML, Folder Log File, and Scanner for Text, XML, and Database, among others.

Furthermore, the document delves into advanced topics such as submessages, conditional mapping, extra processor usage, multi-line regex, parser overrides, extra mapping files, merge operations, custom categorizations, key-value parsers, creating map files, defining deviceEventClassId, additional data mapping, and integrating with the CounterACT connector and REST API. It also covers importing Flex Active Lists and Assets from the SmartConnector installer and provides examples of the different Flex Connector types to illustrate practical applications. In summary, this manual serves both as a training guide for implementing ArcSight FlexConnectors and as a reference for understanding their architecture, configuration, and functionality within an enterprise security environment.
This document outlines the configuration settings for the various types of Flex Connectors in ArcSight, a security information and event management (SIEM) tool. The configurations include file locations for log files and databases as well as specific properties for each connector type:

  • For fixed-format or delimited log files, use the Log File FlexConnector; for variable-format log files, use the Regex Log File or Regex Folder Follower.

  • The Time-based DB FlexConnector reads event information from tables based on a timestamp, while the ID-based DB FlexConnector reads based on unique IDs. The Multiple DB FlexConnector can handle logs from multiple databases, including both time-based and ID-based tables.

  • The SNMP Connector collects logs from SNMP traps, the SYSLOG Connector captures security events from syslog messages, the XML Connector reads logs from XML files in a folder, and the Scanner Connector imports scan results from scanner devices.

  • The REST API connector collects cloud-based application security events through configurable methods, while CounterACT enables execution of commands on third-party devices directly from ArcSight.

Each connector type has specific properties and file locations detailed in the document for proper configuration and integration with the SIEM tool.

The text then outlines a comprehensive configuration for parsing various log files and system logs into a detailed database format managed from the ArcSight console. This setup is designed to capture, analyze, and correlate security events from diverse sources such as network devices, databases, and systems, providing valuable insight for forensic analysis and incident response. Key components of this configuration include:

1. **Log File Example**: Sample entries from different log formats, including a standard log file with IP addresses and port numbers, a regex-based log entry mimicking system logs such as SSH or PAM events, and time-based database examples showing detailed configurations for syslog entries.

2. **Parsing Mechanism**: Using regular expressions (regex) to identify and extract specific data fields from log messages. The process requires clarity on what needs to be parsed and how the extracted fields map to ArcSight's Common Event Format (CEF) schema, which standardizes event data for unified management across different security devices and applications.

3. **Regex Log File**: Demonstrates how regex can parse log entries from systems like SSH or PAM, breaking down date-time formats, IP addresses, ports, and other relevant fields. This section also covers creating and refining regular expressions to ensure accurate parsing.

4. **Token Declaration**: Defining each token within a line of the log file. This includes specifying the number of tokens per entry as well as individual attributes such as the token name (`token.name`), which are used later for mapping and event classification against ArcSight's schema.

5. **Event Mapping with the ArcSight Schema**: Once parsed, events are mapped onto ArcSight's CEF fields. This involves setting up severity levels, submessages, additional data, and conditional mappings as required by the deviceEventClassId defined for each log type.

6. **Additional Configuration Elements**: Parser overrides, extra processor settings, multi-line regex handling, parsing mechanisms for both delimited and non-delimited text formats, and custom categorizations that refine how logs are organized and presented in the ArcSight console.

7. **FlexAgentWizard and Regex Wizard**: Tools provided by ArcSight to facilitate the creation of regular expressions through a graphical user interface (GUI) or scripted automation, making it easier to configure log parsers without extensive programming knowledge.

This comprehensive setup not only simplifies the parsing of complex logs but also improves the efficiency and accuracy of security event monitoring in large-scale enterprise environments by ensuring that all events are captured, parsed, and managed in a structured manner across different devices and applications.

The text then outlines a method for configuring and utilizing tokens, specifically in the context of data parsing and event mapping with ArcSight Flex Connectors. 1. **Token Configuration**: Each token is assigned a user-defined name (e.g., Msg or MyIP) and a type such as String or IPAddress. These tokens are used to parse data from input records, and built-in tokens can also be configured for specific Flex Connectors. For example:

  • `token<0>.name=Msg`, `token<0>.type=String`

  • `token<1>.name=MyIP`, `token<1>.type=IPAddress`

  • The assigned name and type help map the parsed data to specific fields in ArcSight Event schema (e.g., `event.sourceAddress=MyIP` and `event.message=Msg`).
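Put together, the token declarations and schema mappings above would sit in the connector's parser properties file roughly like this. This is a minimal sketch: the regex and filename are invented for illustration, and the bracketed `token[0]` index style follows the Flex developer's guide rather than the angle-bracket notation used in this summary.

```properties
# Hypothetical parser file, e.g. flexagent/myapp.sdkrexagent.properties
# Regex with two capture groups: a message word and an IPv4 address
regex=(\\S+) from (\\d+\\.\\d+\\.\\d+\\.\\d+)

token.count=2
token[0].name=Msg
token[0].type=String
token[1].name=MyIP
token[1].type=IPAddress

# Map parsed tokens onto ArcSight event schema fields
event.message=Msg
event.sourceAddress=MyIP
```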

2. **Severity Mapping**: For events, severity is crucial because it feeds both the Threat Level Formula and report generation based on device or event severity. Tokens containing values such as Error, Warning, Informational, Critical, or Notification are mapped to predefined levels using conditional logic (e.g., `severity.map.veryhigh.if.deviceSeverity=95`).

3. **Configuration File Details**: The provided configuration file snippet includes settings for handling unparsed events and a regex pattern that parses specific fields from the input data:

  • `do.unparsed.events=true`: Indicates whether unparsed event records should be processed.

  • `Regex=(\\d+\\/\\d+\\/\\d+)\\s+(\\d+\:\\d+\:\\d+)\\s+SRC\=(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})\\s+DST\=(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})\\s+SPT\=(\\d+)\\s+DPT\=(\\d+)\\s+Sev\=(\\d+)\\s+URL\=(.*?)`: Defines a regex pattern to capture date, time, source IP, destination IP, ports, severity, and URL from the input.

  • `token.count=5`: Specifies the number of tokens configured in this setup (the declared tokens must line up with the regex capture groups used by the mappings).

  • Additional token configurations include names like Time_of_the_event with type Timestamp, format dd/MM/yy HH:mm:ss, and SrcIp, DstIp as IPAddress types.
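Assembled into a single properties file, the snippet above might look as follows. This is a sketch, not the training document's exact file: the date and time are combined into one capture so it can be typed as a timestamp, and the token names, port/URL mappings, and extra severity thresholds are illustrative.

```properties
do.unparsed.events=true

# Date and time combined into one capture group for the TimeStamp token
regex=(\\d+/\\d+/\\d+\\s+\\d+:\\d+:\\d+)\\s+SRC=(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})\\s+DST=(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})\\s+SPT=(\\d+)\\s+DPT=(\\d+)\\s+Sev=(\\d+)\\s+URL=(.*)

token.count=7
token[0].name=Time_of_the_event
token[0].type=TimeStamp
token[0].format=dd/MM/yy HH:mm:ss
token[1].name=SrcIp
token[1].type=IPAddress
token[2].name=DstIp
token[2].type=IPAddress
token[3].name=SrcPort
token[3].type=Integer
token[4].name=DstPort
token[4].type=Integer
token[5].name=Sev
token[5].type=String
token[6].name=Url
token[6].type=String

event.deviceReceiptTime=Time_of_the_event
event.sourceAddress=SrcIp
event.destinationAddress=DstIp
event.sourcePort=SrcPort
event.destinationPort=DstPort
event.deviceSeverity=Sev
event.requestUrl=Url

# Severity mapping from device values to ArcSight levels (thresholds invented)
severity.map.veryhigh.if.deviceSeverity=95
severity.map.high.if.deviceSeverity=80,90
severity.map.medium.if.deviceSeverity=50
```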

This system is designed to facilitate more complex data handling and mapping within security information management systems, enhancing the ability to parse and categorize diverse log entries into structured events for better analysis and reporting.

The text then surveys parsing mechanisms: regular expressions (regex), SQL queries for databases, and XML processing using node expression trees. For regex-based parsing, it highlights the importance of defining patterns that extract data from logs effectively; these can range from simple to complex depending on the log file. Before setting up a regex, it is crucial to understand the nature of the events in the device or application documentation, check the logging mechanism, and choose an appropriate collection method (batch or real-time).

For database logs, SQL queries retrieve data from a database schema, which serves as another method of parsing. The text emphasizes analyzing sample logs to identify the event types of interest and understanding the logging mechanism on the end devices. XML processing involves defining node expressions from the root node through intermediate (hop) nodes to trigger nodes, which is particularly relevant for XML log files where structured data extraction is necessary.

Practical steps include analyzing sample logs, checking logging mechanisms, choosing collection methods, and defining configuration and agent properties files to verify parsing in a test environment, comparing raw events with unparsed events to confirm successful parsing. The text then turns to configuration and setup elements for the different connector types, particularly event severity, categorization, database querying, and XML parsing.
Here's a summary of the key points:

1. **Categorization Files**: Files that organize data by assigning categories based on specific criteria. They include Additional Mapping, Key Value Parsers, and Map files, which place information in its exact location according to predefined rules or patterns.

2. **Event Severity and Metadata Fields**: Ensure that event severity levels, deviceEventClassId, categorization, deviceCustom, and Flex field labels are handled correctly. These fields are crucial for metadata tracking used in later analysis or troubleshooting.

3. **Database Connector Configuration**: This includes several properties such as:

  • **Version Control**: Properties like version.order, version.query, version.id help manage and verify the database schema or version being connected to.

  • **Querying Events**: Features like Query, timestamp.field, uniqueid.fields are used to fetch events based on time intervals or specific event IDs.
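As a sketch, a time-based database descriptor using these properties might look like the following. The table, column, and version values are hypothetical, not taken from the training document.

```properties
# Version check: confirm the schema before querying (values invented)
version.order=0
version.query=SELECT db_version FROM app_info
version.id=2.0

# Fetch events newer than the last-read timestamp
query=SELECT event_time, src_ip, message FROM security_events WHERE event_time > ? ORDER BY event_time
timestamp.field=event_time

# An ID-based variant would key on unique event IDs instead:
# uniqueid.fields=event_id
```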

4. **XML Flex Configuration**: This involves settings for XML parsing:

  • **Namespace Handling**: Properties such as namespace.count, namespace.prefix, namespace.uri define how the system should interpret and handle different sections of an XML file.

  • **XPath/XQuery Paths**: Features like hop.node.count, hop.node.name, hop.node.expression, trigger.node.expression are used to navigate through the XML structure for event detection or data extraction.
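A skeletal XML Flex descriptor combining these namespace and node properties could look like the following. The URI, prefix, and node names are placeholders; the bracketed index style is an assumption based on the Flex developer's guide.

```properties
# One namespace declared for the sample document
namespace.count=1
namespace[0].prefix=ev
namespace[0].uri=http://example.com/events/v1

# Walk from the root through one hop node to the trigger node
hop.node.count=1
hop.node[0].name=log
hop.node[0].expression=/ev:eventLog
trigger.node.expression=ev:event
```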

5. **Configuration File Details**: Each type of configuration file (Time Based, ID Based, XML Flex) has specific properties that must be configured to the system's requirements, including details such as maximum query results, unique identifier fields, and the path expressions used in querying or parsing data. This overview is crucial for anyone setting up or managing systems with complex event handling across sources including databases and XML logs.

The text next describes a method for parsing log messages from a device (PIX) using regular expressions and token mappings. The process involves:

1. **Context Node Specification**: The context node, which can be the root node, a hop node, or the trigger node, is specified to evaluate the path expression for each token.

2. **Extra Event Specifications**:

  • `extraevent.count` specifies the number of extra events.

  • `extraevent.filename` specifies the file name of additional configuration files needed for parsing.

  • `extraevent.name` assigns a name to the extra events.

3. **Sub-Message Definition**:

  • Messages are divided into two parts: one that is common across all messages and another part that varies based on the message format.

  • A sub-message consists of a timestamp, a fixed identification string (like "PIX"), and variable content specific to each message type.

4. **Parsing Steps**:

  • Define a corresponding sub-message ID.

  • Use regular expressions to match parts of the log entry.

  • Map these matched parts to event fields using predefined formats.

5. **Example**:

  • Regular expression used: `regex=(\S+ \d+ \d+:\d+:\d+) (\S+) %PIX-(\d)-(\d+): (.*)`

  • Tokens defined:

  • Timestamp (`token<0>`)

  • IP Address (`token<1>`)

  • Severity (`token<2>`)

  • Sub-message ID (`token<3>`)

  • Variable content (`token<4>`)

This method allows multiple types of messages from a single log source to be parsed without requiring a separate parser for each message format.

The text then describes how sub-messages are handled within a larger message structure, tailored for event processing in systems like ArcSight. Key points: 1. **Token Definitions**:

  • `token<4>.name=SubmessageToken` is defined as a string token used to carry sub-messages.

  • `submessage.messageid.token=SubmessageIdToken` identifies the token that holds the message identifier.

  • `submessage.token=SubmessageToken` specifies the token containing the actual sub-message content.

  • `submessage.count=1` indicates there is one sub-message ID (e.g., 106015).

2. **Sub-Message Pattern**:

  • Each sub-message has a pattern defined using regular expressions and specifies the fields, types, and formats of the data to be extracted.

  • Example patterns include `submessage<0>.pattern<0>.regex=Deny (\\S+) \\(no connection\\)\\s(\\d+\\.\\d+\\.\\d+\\.\\d+)`, with associated fields (`event.transportProtocol`, `event.sourceAddress`) and types (`String`, `IPAddress`).
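Collected into one parser file, the PIX example and its sub-message definition would look roughly like this. Treat it as a sketch: the timestamp format, token types, and bracketed index style are illustrative assumptions layered on the properties named in the text.

```properties
# Top-level regex from the PIX example: timestamp, host, severity,
# sub-message ID, and the variable remainder of the line
regex=(\\S+ \\d+ \\d+:\\d+:\\d+) (\\S+) %PIX-(\\d)-(\\d+): (.*)

token.count=5
token[0].name=Timestamp
token[0].type=TimeStamp
token[0].format=MMM dd HH:mm:ss
token[1].name=Address
token[1].type=IPAddress
token[2].name=Severity
token[2].type=String
token[3].name=SubmessageIdToken
token[3].type=String
token[4].name=SubmessageToken
token[4].type=String

# Which tokens carry the sub-message ID and the sub-message body
submessage.messageid.token=SubmessageIdToken
submessage.token=SubmessageToken

# One sub-message defined, for message ID 106015
submessage.count=1
submessage[0].messageid=106015
submessage[0].pattern.count=1
submessage[0].pattern[0].regex=Deny (\\S+) \\(no connection\\)\\s(\\d+\\.\\d+\\.\\d+\\.\\d+)
submessage[0].pattern[0].fields=event.transportProtocol,event.sourceAddress
submessage[0].pattern[0].types=String,IPAddress
```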

3. **Conditional Mapping**:

  • Conditional mappings allow for different types of information based on event characteristics. For example:

  • `regex=Event id is (\\d+) type (\\S+) with parameter (\\S+)` defines tokens for EVENTID, TYPE, and PARAMETER.

  • Standard mappings include `event.deviceEventClassId` to `EVENTID` and `event.deviceEventCategory` to `TYPE`.

  • Conditional mappings handle specific values:

  • For EVENTID 532 or 534, set `event.sourceAddress` to the PARAMETER value.

  • For EVENTID 533, set `event.sourceUserName` to the PARAMETER value.
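The conditional mappings described above can be sketched in properties form as follows. The `conditionalmap` property names follow the pattern used in the Flex developer's guide; treat the exact syntax as illustrative.

```properties
regex=Event id is (\\d+) type (\\S+) with parameter (\\S+)

token.count=3
token[0].name=EVENTID
token[1].name=TYPE
token[2].name=PARAMETER

# Standard mappings
event.deviceEventClassId=EVENTID
event.deviceEventCategory=TYPE

# Conditional mappings keyed on the event class ID:
# 532/534 -> PARAMETER is a source address; 533 -> PARAMETER is a user name
conditionalmap.count=1
conditionalmap[0].field=event.deviceEventClassId
conditionalmap[0].mappings.count=2
conditionalmap[0].mappings[0].values=532,534
conditionalmap[0].mappings[0].event.sourceAddress=PARAMETER
conditionalmap[0].mappings[1].values=533
conditionalmap[0].mappings[1].event.sourceUserName=PARAMETER
```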

4. **Sub-Message Example**:

  • A specific sub-message example is provided with a message ID of `conditionalmapsample`.

  • It includes pattern definitions similar to the main text but focuses on illustrating conditional mapping within sub-messages.

This system is designed for flexible event processing, allowing different data mappings based on specified conditions and patterns, which is particularly useful in systems like ArcSight where detailed event handling and classification are crucial.

The text then describes configurations for parsing data with regular expressions and extra processors in a FlexConnector environment, covering the structure of regex patterns, the mapping of values to fields, and conditional mappings. The main points:

1. A pattern is defined in which the event id is captured by `\\d+`, the type by `\\S+`, and the parameter by `\\S+`. The event class ID is mapped via deviceEventClassId, and conditional maps adjust the mapping based on field values such as deviceEventClassId or specific tokens from the patterns.

2. Extra processors are used when data parsing requires multiple passes because of varied formats or structures in different parts of a log file. These processors are linked to configuration files placed in the \user\agent\flexagent folder.

3. An example configuration sets up an extra processor of type regex, specifies a filename, links it to an event message field, sets the flexagent variable to true, and applies conditional settings based on specific values or tokens.

4. The parsing result can be tuned with options such as clearing the field after each successful parse.

The text then shows an "extraprocessor" configuration with two regex processors, each targeting a different parser file: one parses `event.name` using the file `securitymanager/Name-Name`, while the other does the same with `scm/Name-Name`. Both processors disable clearing the field after parsing and are marked as flexagent enabled, which lets them adapt to different parsing requirements dynamically.

The text also discusses multiline parsing, whose purpose is to reconstruct messages split across multiple lines in log files. It provides example regex patterns for identifying the start (`\|\d+/\d+/\d+ \d+:\d+:\d+\|.*`) and end (`.*\|$`) of such multiline messages.

It then introduces parser overrides, which allow different versions of parsers to handle raw events differently. This is particularly relevant for SmartConnectors that map sensitive information such as usernames, hostnames, or addresses across various logs and systems; the flexibility allows ArcSight security events to be generated with varied mappings without disrupting the current mapping configuration. Each SmartConnector carries an internal parameter `fcp.version` representing its current Parser Version, ranging from 0 (the base parser version) to 7. The Parser Version used to parse a raw event can be read from the last digit of the Agent Version field in the ArcSight security event.

Finally, the text introduces "extra mappings": properties within sub-messages that add additional mapping properties directly. In the example, multiple sub-message patterns exist, and extra mappings such as event.name and event.deviceProduct are configured for the unparsed category of events. Configuration instructions for default sub-message descriptors are also given, showing how to set up initial configurations with delimiters and fields.
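The two-processor example described earlier might be written like this. The filenames and field choices mirror the text; the exact property spellings should be checked against the Flex developer's guide.

```properties
extraprocessor.count=2

# First processor: hand event.name to the securitymanager parser file
extraprocessor[0].type=regex
extraprocessor[0].filename=securitymanager/Name-Name
extraprocessor[0].field=event.name
extraprocessor[0].flexagent=true
extraprocessor[0].clearfieldafterparsing=false

# Second processor: same field, different parser file
extraprocessor[1].type=regex
extraprocessor[1].filename=scm/Name-Name
extraprocessor[1].field=event.name
extraprocessor[1].flexagent=true
extraprocessor[1].clearfieldafterparsing=false
```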
Lastly, the text addresses the need for merge operations when devices send information about a single event across multiple log lines; handling such scenarios requires additional configuration so that related entries are aggregated into one accurate event.

In log analysis, where events may span multiple lines or need contextual merging on specific fields, two techniques apply: multiline parsing and merge operations. Multiline parsing is not always applicable; in the example, the first line captures the input and the third line shows the output for the same connection and message ID but with different operations. Here a merge operation is more suitable, consolidating events that share common fields such as `conn` and `msgId` across different lines.

**Merge Operations** are defined procedures in log analysis in which multiple log entries are combined into one based on specific criteria:

  • **Events Inclusion**: Specifies which events should be included in the merge operation.

  • **Start Condition**: Defines when a merge operation should begin, typically triggered by a pattern or token.

  • **End Condition**: Determines when a merge operation should end, often based on patterns that signal its completion.

  • **Group Identification**: Uses specific fields (like 'conn' and 'msgId') to identify events that belong in the same group for merging.

**Technical Note:** Currently, only agents using regular expressions support this feature. Configuration of a merge operation involves setting predefined property variables such as `merge.count`, `traceenabled`, `pattern.count`, `pattern.regex`, and `starts.count`. These operations are crucial for maintaining data integrity and context in complex log files, ensuring that related events are not split across multiple entries and enabling better analysis and reporting.

The text then details the properties that control how events are merged: start and end patterns, timeout durations, token definitions, and more. A breakdown:

1. **Start Pattern**: The property `merge<{mergeindex}>.starts<{patternindex}>.endspreviousmerge` determines whether the merge processor should end the previous merge when a new event matches this pattern, initiating a new merging operation when the condition is met.

2. **End Patterns**: `merge<{mergeindex}>.ends.count` indicates how many end patterns are defined for the merge operation. Each end pattern is specified by `merge<{mergeindex}>.ends<{patternindex}>`, where you set the token (`merge<{mergeindex}>.ends<{patternindex}>.token`) and a regular expression (`merge<{mergeindex}>.ends<{patternindex}>.regex`) to identify events that should end a merge operation.

3. **Timeout**: The `merge<{mergeindex}>.timeout` property sets how long (in milliseconds) the merging process may run before it times out, preventing indefinite waits during event aggregation.

4. **ID Tokens**: `merge<{mergeindex}>.id.tokens` defines which tokens group events together, and the optional `merge<{mergeindex}>.id.delimiter` separates these tokens when they are concatenated into a unique identifier for each merged event.

5. **Send Partial Events**: The `merge<{mergeindex}>.sendpartialevents` setting controls whether individual events are sent as they are merged or only the complete merged result is sent.

6. **Capacity**: `merge<{mergeindex}>.capacity` determines the size of the cache used to hold events during the merge operation, which is necessary for aggregating multiple events into a single entity.

7. **Example Configuration**: A sample configuration is provided with specific patterns and tokens based on regular expressions, timeout values, and token delimiters.

This setup allows flexible control over how logs or other sequential data are aggregated, ensuring that relevant events are merged correctly according to the defined criteria. In the sample, message ID 82 indicates a successful operation without errors. The merge groups properties around an OperationName token that can be either BIND or RESULT, per the regex patterns: the merge starts when OperationName is BIND and ends when it is RESULT, with both start and end defined by their respective tokens and regex expressions. In the event mapping section:

  • `event.deviceReceiptTime` is mapped to a date format.

  • `event.name` is set based on either `mergedevent.name` or `OperationName`.

  • `event.deviceAction` maps to `ResultCode`.

  • `event.destinationUserId` maps to `UserIdCustom`. Categorization of the resulting events can then be controlled by the FlexConnector developer through custom categorization files: CSV text files placed in directories specific to the device vendor and product, which override existing categorizations.
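A merge operation for the BIND/RESULT example above might be configured like this. It is a sketch: the grouping tokens `conn` and `msgId`, the timeout, and the capacity values are illustrative, and the bracketed index style is assumed from the developer's guide.

```properties
merge.count=1
merge[0].traceenabled=false
merge[0].timeout=5000
merge[0].capacity=100
merge[0].sendpartialevents=false

# Group lines that share the same connection and message ID
merge[0].id.tokens=conn,msgId
merge[0].id.delimiter=|

# Events eligible for merging: any BIND or RESULT operation
merge[0].pattern.count=1
merge[0].pattern[0].token=OperationName
merge[0].pattern[0].regex=BIND|RESULT

# Merge starts on BIND (closing any previous merge for the group)
merge[0].starts.count=1
merge[0].starts[0].token=OperationName
merge[0].starts[0].regex=BIND
merge[0].starts[0].endspreviousmerge=true

# Merge ends on RESULT
merge[0].ends.count=1
merge[0].ends[0].token=OperationName
merge[0].ends[0].regex=RESULT
```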

The categorization file examples demonstrate how event fields such as severity, object, behavior, device group, significance, and outcome can be set for different scenarios.

Key-value parsers divide log lines into key-value pairs, extract tokens, and map them to event fields; they work with keyvalue extra processors and similar tools in the system. Within ArcSight subagents, key-value parsers are used for secondary processing and have their properties defined in a configuration file (vendor.subagent.sdkkeyvaluefilereader.properties). This file includes settings such as delimiters and qualifiers that dictate how data is parsed from log lines into key-value pairs: for example, the key delimiter is a whitespace character (`\s`), the value delimiter is an equals sign (`=`), and the regular expression that captures keys matches any non-whitespace characters. Leading and trailing whitespace in messages, tokens (parsed parts of the message), and keys is trimmed by default for cleaner data handling.

Map files set ArcSight event fields based on information in other fields. They are CSV files, located under specific directories, that allow custom field mappings and can override standard parser values. An example map file entry might map IP ranges to descriptive strings such as building names, helping with hostname-to-IP resolution or categorization.

The deviceEventClassId is the mechanism ArcSight uses to create a unique identifier for each event type, which is useful when tracking specific events with rules. In summary, key-value parsers and map files are tools within the ArcSight ecosystem that facilitate efficient data handling and custom field mapping to support more sophisticated processing and reporting.
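The key-value parser defaults described (whitespace key delimiter, `=` value delimiter, non-whitespace key regex) might appear in vendor.subagent.sdkkeyvaluefilereader.properties roughly as follows; the exact property spellings should be verified against the developer's guide.

```properties
# Split "key=value key=value ..." style log lines
key.delimiter=\\s
key.value.delimiter==
key.regexp=([^=\\s]+)

# Leading/trailing whitespace in messages, tokens, and keys
# is trimmed by default, so no extra setting is needed here.
```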
This text outlines data handling and integration with ArcSight, focusing on configuring the CounterACT connector, which enables interactions between ArcSight and third-party devices, including remote command execution from the ArcSight console. Key points:

1. **Data Mapping in Specific Environments**: In certain setups it is necessary to map specific additional data names to standard ArcSight schema fields, tailored per device vendor and product through the ArcSight Console. This mapping is managed on the SmartConnector machine.

2. **Additional Data Names Command**: The Get Additional Data Names command identifies which additional data names have been assigned to each device vendor/product combination since the SmartConnector became operational.

3. **Mapping Requirements**: The field used for mapping additional data names must be a valid ArcSight event field, ensuring the information is correctly mapped and integrated within the system's schema.

4. **CounterACT Connector Usage**: The connector integrates ArcSight with third-party devices and supports remote command execution from the console. It should be selected during installation among the available connectors.

5. **Configuration File Creation**: To use the CounterACT connector effectively, create a configuration file named `.counteract.properties` in the specified directory, listing the commands that can be executed from ArcSight, including command names and supported parameters.
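A hypothetical entry in the `.counteract.properties` file described above could define one command with an internal name, a console display name, a parameter, and an executable template. Everything here is invented for illustration except the ARCSIGHT_HOME and PLATFORM template variables, which the text itself mentions.

```properties
# One illustrative command definition (names and script path are invented)
command.count=1
command[0].name=block_host
command[0].displayname=Block Host
command[0].parameter.count=1
command[0].parameter[0].name=targetip
command[0].parameter[0].displayname=Target IP
command[0].action=${ARCSIGHT_HOME}/user/agent/scripts/${PLATFORM}/block.sh ${targetip}
```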
Each command receives a series of parameters, including internal names, display names shown in the ArcSight console, and executable commands with template variables such as ARCSIGHT_HOME for the installation path and PLATFORM for the OS type. This information can be used in several ways: to populate connector parameters or rule fields directly, to parse outputs with the SecondLevelRegexParser and extract relevant data, or to configure a REST FlexConnector that collects events from cloud applications via vendor APIs. That setup involves creating properties files for the command execution details and for authentication against the vendors' REST API endpoints.

The document then outlines the steps for developing a REST FlexConnector: registering the connector application, creating an OAuth2 client properties file, determining the appropriate events URL (REST API endpoint), configuring the REST FlexConnector Configuration Support Tool (restutil), creating a JSON parser file, and configuring the connector to read data from active lists.

1. **Register Your Connector Application:**

  • Register your application with the vendor (for example Box, Salesforce, or Google Apps) to complete OAuth2 registration and obtain the values needed to create the client properties file.

2. **Create OAuth2 Client Properties File:**

  • Use the obtained values from the registration to create an OAuth2 client properties file. This involves providing a redirect_uri as well.
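A minimal OAuth2 client properties file might look like the sketch below. The key names and values are illustrative assumptions; the actual keys and endpoints depend on the vendor and on the FlexConnector Developer's Guide for your connector version.

```properties
# Hypothetical OAuth2 client properties sketch -- all keys and values
# are illustrative; substitute those from your vendor registration.
client_id=abc123.apps.example.com
client_secret=s3cr3t
redirect_uri=https://localhost/callback
auth_url=https://login.example.com/oauth2/authorize
token_url=https://login.example.com/oauth2/token
scope=read_events
```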

3. **Determine Which Events URL (REST API Endpoint) to Use:**

  • Understand general information about REST API endpoints, including querying based on timestamp and rate limiting specifics for Box, Salesforce, and Google Apps.
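Timestamp-based querying usually means appending query parameters to the events URL. The helper below sketches that step; the parameter names `created_after` and `limit` are invented for illustration, since each vendor (Box, Salesforce, Google Apps) defines its own query syntax and rate limits.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def build_events_url(base_url: str, since_iso: str, limit: int = 100) -> str:
    """Append timestamp and page-size query parameters to an events endpoint.

    The parameter names ('created_after', 'limit') are illustrative only;
    consult the vendor's REST API reference for the real ones.
    """
    scheme, netloc, path, query, frag = urlsplit(base_url)
    extra = urlencode({"created_after": since_iso, "limit": str(limit)})
    # Preserve any query parameters already present on the base URL
    query = f"{query}&{extra}" if query else extra
    return urlunsplit((scheme, netloc, path, query, frag))
```

Keeping the timestamp as a parameter makes it easy for the connector to resume from the last event it retrieved, which also helps stay within vendor rate limits.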

4. **Create a JSON Parser File:**

  • Define the structure and create a parser for the JSON data retrieved from the events URL. View raw JSON data to ensure proper parsing.
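The parser file maps nodes of the raw JSON into tokens and then into event fields. The sketch below is a minimal illustration assuming a response whose events sit under a top-level `entries` array; the node locations, token names, and timestamp format are assumptions that must be matched against the raw JSON your events URL actually returns.

```properties
# Hypothetical JSON parser sketch (e.g. myvendor.jsonparser.properties)
# Node locations, token names, and the date format are illustrative.
trigger.node.location=/entries

token.count=3
token[0].name=event_type
token[0].type=String
token[0].location=event_type
token[1].name=created_at
token[1].type=String
token[1].location=created_at
token[2].name=login
token[2].type=String
token[2].location=source/login

# Map tokens to ArcSight event fields
event.name=event_type
event.deviceReceiptTime=__createOptionalTimeStampFromString(created_at,"yyyy-MM-dd'T'HH:mm:ssX")
event.destinationUserName=login
```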

5. **REST FlexConnector Configuration Support Tool (restutil):**

  • Configure the connector with the support of restutil: enter the name of the parser file, provide the parser file path, and specify the events URL the connector uses for event retrieval. Browse for the OAuth2 client properties file and ensure it is linked to your registered application.

6. **Flex Active List Import:**

  • Create a flex connector to read data corresponding to active lists, defining tokens without mapping them directly to fields. Map tokens to additional data as needed, and specify the properties that invoke the Model Import feature and convert the data into ArcSight archive format.

7. **Example Configuration for Flex Active List Import:**

  • Define the comment-start delimiter and the token count. Specify token names and types (e.g., IP as String). Enable additional data fields such as IP_ADDRESS and CREATE_DATE, converting timestamps to a readable string format using Velocity macro file conversions.
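The example configuration above can be sketched as a properties fragment. This is a minimal sketch assuming a two-column, comma-delimited source file; the `additionaldata.*` key syntax and field names are illustrative assumptions rather than verified syntax.

```properties
# Hypothetical flex active list import sketch -- key syntax is illustrative.
comments.start.with=#
delimiter=,

token.count=2
token[0].name=IP
token[0].type=String
token[1].name=CREATE_DATE
token[1].type=String

# Map tokens to additional data rather than directly to schema fields;
# a Velocity macro file can convert CREATE_DATE timestamps to a readable string.
additionaldata.enabled=true
additionaldata.IP_ADDRESS=IP
additionaldata.CREATE_DATE=CREATE_DATE
```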

This document provides detailed instructions for setting up and configuring a REST FlexConnector, from basic registration to advanced parsing and mapping of event data.

The text also discusses the SmartConnector for Asset Import in ArcSight ESM, which lets users define and import asset modeling details from a CSV file for batch processing. Key features include automatic updating of assets based on inventory changes and setting up processes to regularly export and update these lists. The connector supports CSV files with specific headers such as address, macAddress, hostname, location, and category. It also includes detailed agent configurations: a non-locking Windows file reader, starting at end, wildcard specifications, processing folders recursively, and modes for handling processed log files.

Suggested future enhancements include advanced regex usage, important agent property file configurations, a collection of other useful files, basic troubleshooting guides, and lab exercises for practice. The plan includes practicing regular expressions (regex) against sample logs, starting an open forum for building and suggesting flex connectors, potentially creating a site for online testing and generation of flex parsers, and holding Q&A sessions with modifications based on user suggestions.

References include the ArcSight Flex Dev Guide, Protect 724 posts, other flex documents and discussions, and specific individuals such as Hector Aguilar-Macias and Girish Mantry. Users are encouraged to rate the ArcSight content in the forum thread where it is uploaded; appreciation and suggestions motivate further updates with more useful snapshots of flex capabilities. The document concludes by thanking the author (V.B) and noting the copyright under their name from 2013.
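As a starting point for the regex practice mentioned above, the sketch below parses a sample syslog-style log line into named tokens, the same exercise a flex regex parser performs. The log format, field names, and sample line are invented for illustration.

```python
import re

# Invented sample log line for regex practice
SAMPLE = "Apr 08 2025 12:34:56 fw01 DROP src=10.1.2.3 dst=192.168.0.7 proto=TCP"

# Named capture groups play the role of flex connector tokens
PATTERN = re.compile(
    r"(?P<timestamp>\w{3} \d{2} \d{4} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<action>\S+) "
    r"src=(?P<src>\S+) dst=(?P<dst>\S+) proto=(?P<proto>\S+)"
)

def parse_line(line: str) -> dict:
    """Return the named capture groups, or an empty dict on no match."""
    m = PATTERN.match(line)
    return m.groupdict() if m else {}
```

Testing patterns like this against sample logs before writing the parser properties file catches most tokenization mistakes early.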

Disclaimer:
The content in this post is for informational and educational purposes only. It may reference technologies, configurations, or products that are outdated or no longer supported. If you have any comments or feedback, kindly leave a message and it will be responded to.
