Corvil Connectors

Integration of network data analytics with Splunk, kdb+ and Hadoop.

Connectors make it easy to deliver real-time Corvil Streams data to other platforms. No software development is required for data integration. Simply configure Connectors to publish to the receiving system, and data pipelines are created automatically to support a wide variety of use cases.

Whether the data streaming through Corvil APIs is high-volume raw detail, highly summarized, made up of aggregated statistics, or subject to demanding performance requirements, Corvil scales automatically to handle the load reliably.

With Corvil, developers, business analysts and data architects can focus on building new data-driven applications and analysis to create value, rather than integration plumbing.

Benefits

  • Simplified stream integration and faster time to value

  • Fast and fault-tolerant delivery to multiple receivers

  • Scalable streaming at hundreds of thousands of events per second

  • Manage data consumption costs by tailoring data streams

  • Continuously stream precision-timestamped transaction messages for compliance

  • Incorporate indexed data into business intelligence platforms

"We use Splunk all the time and it was critical for us to have the Corvil data represented within Splunk, as well as use the Corvil UI when needed."

TIER-1 GLOBAL BANK
VoIP Operations Manager

Connector Details

A growing library of connectors is provided at no cost to existing customers, allowing an authoritative record of network data to stream directly into your chosen big data solution.

DATA PLATFORMS

Cloudera Enterprise: Stream Corvil data continuously and at scale into Cloudera Enterprise, where it can be processed, analyzed, modeled and served using a variety of engines that are part of the Cloudera big data management platform, powered by Apache Hadoop™.

Kx Systems: The connector streams Corvil data into the kdb+tick real-time database to support multiple use cases: a complete record of transactions for compliance reporting, data mining and correlation with other data sources, and reconciliation analysis for electronic brokerages.

JDBC (Java Database Connectivity): The connector injects Corvil Streams into relational databases such as Oracle Database, Microsoft SQL Server, and Oracle MySQL, enabling database analysts to apply SQL queries to Corvil data streams.
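
For illustration, the following is a minimal sketch of the analyst side of this integration, assuming the JDBC Connector has already been configured to populate a table; the database URL, credentials, table name (corvil_order_events) and column names are hypothetical, not part of the product.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CorvilJdbcQuery {
        public static void main(String[] args) throws Exception {
            // Assumption: the JDBC Connector writes a Corvil Stream into a MySQL
            // table named "corvil_order_events"; the MySQL JDBC driver must be
            // on the classpath. All names here are illustrative.
            String url = "jdbc:mysql://dbhost:3306/corvil";
            String sql = "SELECT symbol, COUNT(*) AS orders, AVG(latency_us) AS avg_latency_us "
                       + "FROM corvil_order_events "
                       + "WHERE event_time > NOW() - INTERVAL 1 HOUR "
                       + "GROUP BY symbol ORDER BY avg_latency_us DESC";
            try (Connection conn = DriverManager.getConnection(url, "analyst", "secret");
                 PreparedStatement stmt = conn.prepareStatement(sql);
                 ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s orders=%d avg_latency_us=%.1f%n",
                            rs.getString("symbol"), rs.getLong("orders"),
                            rs.getDouble("avg_latency_us"));
                }
            }
        }
    }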

Splunk: Integrate Corvil data into familiar workflows and dashboards in Splunk, combining it with other data sources for additional insight.

The connector supports contextual click-through into the Corvil UI for deep-dive analysis, and integrates with Corvil APIs to provide granular export workflows directly from within Splunk.

BIG DATA MESSAGING PLATFORMS

Apache Kafka: Corvil’s Kafka Producer publishes Corvil Stream data as Kafka messages with a CSV payload for fast and fault-tolerant delivery to multiple receivers such as Hadoop HDFS, Presto, Storm, the Elastic Stack, and MongoDB.
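
The sketch below shows one way a downstream receiver might consume those messages with the standard Apache Kafka Java client; the broker address, topic name (corvil-stream) and consumer group are assumptions for illustration, and the CSV column layout depends on how the stream is configured.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class CorvilKafkaReceiver {
        public static void main(String[] args) {
            // Assumption: the Corvil Kafka Producer publishes a stream to the
            // topic "corvil-stream" with a CSV payload; names are illustrative.
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka1:9092");
            props.put("group.id", "corvil-demo");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("corvil-stream"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        String[] fields = record.value().split(",", -1); // one CSV row per message
                        System.out.println("fields=" + fields.length + " payload=" + record.value());
                    }
                }
            }
        }
    }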

Amazon Kinesis: The Kinesis Connector publishes Corvil Stream data as a JSON payload, a format widely supported by Kinesis receivers such as the AWS product suite.
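
As a minimal sketch, the following reads those JSON records with the AWS SDK for Java 2.x; the stream name and shard ID are hypothetical, and a production consumer would more likely use the Kinesis Client Library rather than polling a single shard.

    import software.amazon.awssdk.services.kinesis.KinesisClient;
    import software.amazon.awssdk.services.kinesis.model.GetRecordsRequest;
    import software.amazon.awssdk.services.kinesis.model.GetRecordsResponse;
    import software.amazon.awssdk.services.kinesis.model.GetShardIteratorRequest;
    import software.amazon.awssdk.services.kinesis.model.Record;
    import software.amazon.awssdk.services.kinesis.model.ShardIteratorType;

    public class CorvilKinesisReader {
        public static void main(String[] args) throws InterruptedException {
            // Assumption: the Kinesis Connector publishes to a stream named
            // "corvil-stream"; the stream and shard names are illustrative.
            try (KinesisClient kinesis = KinesisClient.create()) {
                String iterator = kinesis.getShardIterator(GetShardIteratorRequest.builder()
                        .streamName("corvil-stream")
                        .shardId("shardId-000000000000")
                        .shardIteratorType(ShardIteratorType.LATEST)
                        .build()).shardIterator();
                while (iterator != null) {
                    GetRecordsResponse response = kinesis.getRecords(
                            GetRecordsRequest.builder().shardIterator(iterator).limit(100).build());
                    for (Record record : response.records()) {
                        // Each record carries a JSON document describing one Corvil event.
                        System.out.println(record.data().asUtf8String());
                    }
                    iterator = response.nextShardIterator();
                    Thread.sleep(1000); // stay under the per-shard read limits
                }
            }
        }
    }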

Flume: Corvil Streams are defined as Sources that Flume Agents can channel towards appropriate Sinks, such as HDFS, HBase or Cassandra.

BASIC LOGGING

Corvil data is externalized as syslog events, CSV files, or standard console input for rapid data integration prototyping or for providing detailed alert content.
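
As a quick prototyping sketch, the Java snippet below listens for forwarded syslog events over UDP and prints them as they arrive; the port number is an assumption, and parsing of the event content is left out.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.nio.charset.StandardCharsets;

    public class CorvilSyslogListener {
        public static void main(String[] args) throws Exception {
            // Assumption: syslog events are forwarded to this host on UDP port
            // 5514 (an unprivileged alternative to the standard port 514).
            try (DatagramSocket socket = new DatagramSocket(5514)) {
                byte[] buffer = new byte[8192];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    String event = new String(packet.getData(), 0, packet.getLength(),
                            StandardCharsets.UTF_8);
                    System.out.println(event); // hand off to a prototype pipeline or alert handler
                }
            }
        }
    }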