
Power data ingestion into Splunk using Amazon Data Firehose


Last updated: December 17, 2025

Originally published: December 18, 2017

Amazon Data Firehose supports Splunk Enterprise and Splunk Cloud as a delivery destination. This native integration between Splunk Enterprise, Splunk Cloud, and Amazon Data Firehose is designed to make AWS data ingestion setup seamless, while offering a secure and fault-tolerant delivery mechanism. We want to enable customers to monitor and analyze machine data from any source and use it to deliver operational intelligence and optimize IT, security, and business performance.

With Amazon Data Firehose, customers get a fully managed, reliable, and scalable data streaming solution to Splunk. In this post, we tell you a bit more about the Amazon Data Firehose and Splunk integration. We also show you how to ingest large amounts of data into Splunk using Amazon Data Firehose.

Push vs. pull data ingestion

Currently, customers use a combination of two ingestion patterns, depending on data source and volume, along with existing company infrastructure and expertise:

  1. Pull-based approach: Using dedicated pollers running the popular Splunk Add-on for AWS to pull data from various AWS services such as Amazon CloudWatch or Amazon S3.
  2. Push-based approach: Streaming data directly from AWS to the Splunk HTTP Event Collector (HEC) by using Amazon Data Firehose. Examples of applicable data sources include CloudWatch Logs and Amazon Kinesis Data Streams.

The pull-based approach offers data delivery guarantees such as retries and checkpointing out of the box. However, it requires more ops to manage and orchestrate the dedicated pollers, which commonly run on Amazon EC2 instances. With this setup, you pay for the infrastructure even when it's idle.

On the other hand, the push-based approach offers a low-latency, scalable data pipeline made up of serverless resources like Amazon Data Firehose sending directly to Splunk indexers (by using Splunk HEC). This approach translates into lower operational complexity and cost. However, if you need guaranteed data delivery, then you have to design your solution to handle issues such as a Splunk connection failure or a Lambda execution failure. To do so, you might use, for example, AWS Lambda Dead Letter Queues.
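As a hedged sketch of that last point: the function name and queue name below are hypothetical, but aws lambda update-function-configuration and its --dead-letter-config parameter are standard CLI. Failed asynchronous invocations of a transformation function could be routed to an SQS dead-letter queue like this:

# Create an SQS queue to capture failed invocation payloads (queue name is hypothetical).
$ aws sqs create-queue --queue-name splunk-transform-dlq

# Attach the queue as the function's dead-letter target (function name is hypothetical).
$ aws lambda update-function-configuration \
    --function-name my-splunk-transform-fn \
    --dead-letter-config TargetArn=arn:aws:sqs:us-east-1:111122223333:splunk-transform-dlq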

How about getting the best of both worlds?

Let's go over the new integration's end-to-end solution and learn how Amazon Data Firehose and Splunk together evolve the push-based approach into a native AWS solution for applicable data sources.

By using a managed service like Amazon Data Firehose for data ingestion into Splunk, we provide out-of-the-box reliability and scalability. One of the pain points of the old approach was the overhead of managing the data collection nodes (Splunk heavy forwarders). With the new Amazon Data Firehose to Splunk integration, there are no forwarders to manage or set up. Data producers (1) are configured through the AWS Management Console to drop data into Amazon Data Firehose.

You can also create your own data producers. For example, you can drop data into a Firehose delivery stream by using the Amazon Kinesis Agent, by using the Firehose API (PutRecord(), PutRecordBatch()), or by writing to a Kinesis data stream configured to be the data source of a Firehose delivery stream. For more details, refer to Sending Data to an Amazon Data Firehose Delivery Stream.
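For illustration, here is a minimal PutRecord call from the AWS CLI. It assumes a delivery stream named FirehoseSplunkDeliveryStream (the name used later in this walkthrough); the record payload must be base64-encoded:

# Push a single test record into the delivery stream (stream name is an assumption).
$ aws firehose put-record \
    --delivery-stream-name FirehoseSplunkDeliveryStream \
    --record Data=$(echo -n '{"message":"hello splunk"}' | base64)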

You might need to transform the data before it goes into Splunk for analysis. For example, you might want to enrich it, or filter or anonymize sensitive data. You can do so using AWS Lambda and enabling data transformation in Amazon Data Firehose. In this scenario, Amazon Data Firehose is used to decompress the Amazon CloudWatch logs by enabling the decompression feature.

Systems fail all the time. Let's see how this integration handles failures to guarantee data durability. In cases when Amazon Data Firehose can't deliver data to the Splunk cluster, data is automatically backed up to an S3 bucket. You can configure this feature while creating the Firehose delivery stream (2). You can choose to back up all data or only the data that failed during delivery to Splunk.

In addition to using S3 for data backup, this Firehose integration with Splunk supports Splunk indexer acknowledgments to guarantee event delivery. This feature is configured on Splunk's HTTP Event Collector (HEC) (3). It ensures that HEC returns an acknowledgment to Amazon Data Firehose only after data has been indexed and is available in the Splunk cluster (4).

Now let's look at a hands-on exercise that shows how to forward VPC flow logs to Splunk.

How-to guide

To process VPC flow logs, we implement the following architecture.

Amazon Virtual Private Cloud (Amazon VPC) delivers flow log data into an Amazon CloudWatch Logs log group. Using a CloudWatch Logs subscription filter, we set up real-time delivery of CloudWatch Logs to an Amazon Data Firehose stream.

Data coming from CloudWatch Logs is compressed with gzip compression. To work with this compression, we will enable decompression for the Firehose stream. Firehose then delivers the raw logs to the Splunk HTTP Event Collector (HEC).

If delivery to the Splunk HEC fails, Firehose deposits the logs into an Amazon S3 bucket. You can then ingest the events from S3 using an alternate mechanism such as a Lambda function.

When data reaches Splunk (Enterprise or Cloud), Splunk parsing configurations (packaged in the Splunk Add-on for Amazon Data Firehose) extract and parse all fields. They make the data ready for querying and visualization using Splunk Enterprise and Splunk Cloud.

Walkthrough

Install the Splunk Add-on for Amazon Data Firehose

The Splunk Add-on for Amazon Data Firehose enables Splunk (whether Splunk Enterprise, Splunk App for AWS, or Splunk Enterprise Security) to use data ingested from Amazon Data Firehose. Install the add-on on all the indexers with an HTTP Event Collector (HEC). The add-on is available for download from Splunkbase. For troubleshooting assistance, refer to the AWS Data Firehose troubleshooting documentation and Splunk's official troubleshooting guide.

HTTP Event Collector (HEC)

Before you can use Amazon Data Firehose to send data to Splunk, set up the Splunk HEC to receive the data. From Splunk web, go to the Settings menu, choose Data Inputs, and choose HTTP Event Collector. Choose Global Settings, ensure All tokens is enabled, and then choose Save. Then choose New Token to create a new HEC endpoint and token. When you create a new token, make sure that Enable indexer acknowledgment is checked.

When prompted to select a source type, select aws:cloudwatchlogs:vpcflow.
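Before wiring up Firehose, you can sanity-check the new token by posting a test event straight to HEC with curl. The hostname and token below are placeholders; /services/collector/event on port 8088 is the standard HEC endpoint:

# Send a test event to HEC (host and token are placeholders).
$ curl https://your-splunk-host:8088/services/collector/event \
    -H "Authorization: Splunk <your-hec-token>" \
    -d '{"event": "HEC smoke test", "sourcetype": "manual"}'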

Create an S3 backsplash bucket

To provide for situations in which Amazon Data Firehose can't deliver data to the Splunk cluster, we use an S3 bucket to back up the data. You can configure this feature to back up all data or only the data that failed during delivery to Splunk.

Note: Bucket names must be globally unique.

aws s3api create-bucket --bucket <your-s3-bucket-name> --create-bucket-configuration LocationConstraint=<your-region>

Create an Amazon Data Firehose delivery stream

On the AWS console, open the Amazon Data Firehose console, and choose Create Firehose Stream.

Select Direct PUT as the source and Splunk as the destination.

Create Firehose Stream

If you are using Firehose to send CloudWatch Logs and want to deliver decompressed data to your Firehose stream destination, or if you use Firehose data format conversion (Parquet, ORC) or dynamic partitioning, you must enable decompression for your Firehose stream. For details, check out Deliver decompressed Amazon CloudWatch Logs to Amazon S3 and Splunk using Amazon Data Firehose.
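The console exposes decompression as a simple toggle. As a rough CLI sketch only, assuming the stream name used in this walkthrough and Firehose's Decompression processor type, the equivalent update might look like the following (the version and destination IDs must first be read from describe-delivery-stream):

# Look up the stream version and destination IDs required by update-destination.
$ aws firehose describe-delivery-stream \
    --delivery-stream-name FirehoseSplunkDeliveryStream \
    --query 'DeliveryStreamDescription.[VersionId,Destinations[0].DestinationId]'

# Enable the decompression processor on the Splunk destination (IDs are placeholders).
$ aws firehose update-destination \
    --delivery-stream-name FirehoseSplunkDeliveryStream \
    --current-delivery-stream-version-id 1 \
    --destination-id destinationId-000000000001 \
    --splunk-destination-update '{"ProcessingConfiguration":{"Enabled":true,"Processors":[{"Type":"Decompression"}]}}'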

Enter your Splunk HTTP Event Collector (HEC) information in the destination settings.

Firehose destination settings

Note: Amazon Data Firehose requires the Splunk HTTP Event Collector (HEC) endpoint to be terminated with a valid CA-signed certificate matching the DNS hostname used to connect to your HEC endpoint. You will receive delivery errors if you are using a self-signed certificate.
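A quick way to check which certificate your HEC endpoint actually presents is openssl (the hostname and port are placeholders):

# Print the subject, issuer, and validity dates of the HEC endpoint's certificate.
$ openssl s_client -connect your-splunk-host:8088 -servername your-splunk-host </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates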

In this example, we only back up logs that fail during delivery.

Backsplash S3 settings

To monitor your Firehose delivery stream, enable error logging. Doing this means that you can monitor record delivery errors. Create an IAM role for the Firehose stream by choosing Create new, or choose an existing IAM role.

Advanced settings for CloudWatch logging

You now get a chance to review and adjust the Firehose stream settings. When you are satisfied, choose Create Firehose Stream.

Create a VPC flow log

To send events from Amazon VPC, you need to set up a VPC flow log. If you already have a VPC flow log you want to use, you can skip to the "Publish CloudWatch to Amazon Data Firehose" section.

On the AWS console, open the Amazon VPC service. Then choose VPC, and select the VPC you want to send flow logs from. Choose Flow Logs, and then choose Create Flow Log. If you don't have an IAM role that allows your VPC to publish logs to CloudWatch, choose Create and use a new service role.

VPC Flow Logs Settings
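If you prefer the CLI, the same flow log can be created with aws ec2 create-flow-logs; the VPC ID, log group name, account number, and role name below are placeholders:

# Create a flow log that publishes to a CloudWatch Logs log group (IDs are placeholders).
$ aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-group-name /vpc/flowlog/FirehoseSplunkDemo \
    --deliver-logs-permission-arn arn:aws:iam::111122223333:role/VPCFlowLogsRole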

Once active, your VPC flow log should look like the following.

Flow logs

Publish CloudWatch to Amazon Data Firehose

When you generate traffic to or from your VPC, the log group is created in Amazon CloudWatch. We create an IAM role to allow CloudWatch Logs to publish to the Amazon Data Firehose stream.

To allow CloudWatch to publish to your Firehose stream, you need to give it permissions.

$ aws iam create-role --role-name CWLtoFirehoseRole --assume-role-policy-document file://TrustPolicyForCWLToFireHose.json

Here is the content of TrustPolicyForCWLToFireHose.json.

{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}

Attach the policy to the newly created role.

$ aws iam put-role-policy \
    --role-name CWLtoFirehoseRole \
    --policy-name Permissions-Policy-For-CWL \
    --policy-document file://PermissionPolicyForCWLToFireHose.json

Here is the content of PermissionPolicyForCWLToFireHose.json.

{
    "Statement":[
      {
        "Effect":"Allow",
        "Action":["firehose:*"],
        "Resource":["arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:deliverystream/FirehoseSplunkDeliveryStream"]
      },
      {
        "Effect":"Allow",
        "Action":["iam:PassRole"],
        "Resource":["arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoFirehoseRole"]
      }
    ]
}

The new log group has no subscription filter, so set up a subscription filter. Setting this up establishes a real-time data feed from the log group to your Firehose delivery stream. Select the VPC flow log and choose Actions. Then choose Subscription filters followed by Create Amazon Data Firehose subscription filter. You can also do this from the AWS CLI, as shown following.
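Here is the CLI version of that step. The log group name is an assumption (matching the hypothetical flow log above); the stream and role ARNs reuse the names from the permission policy:

# Subscribe the flow log group to the Firehose delivery stream.
$ aws logs put-subscription-filter \
    --log-group-name /vpc/flowlog/FirehoseSplunkDemo \
    --filter-name Destination \
    --filter-pattern "" \
    --destination-arn arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:deliverystream/FirehoseSplunkDeliveryStream \
    --role-arn arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoFirehoseRole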

Subscription Filter option

Subscription filter details

When you run the preceding AWS CLI command, you don't get any acknowledgment. To validate that your CloudWatch log group is subscribed to your Firehose stream, check the CloudWatch console.

As soon as the subscription filter is created, the real-time log data from the log group goes into your Firehose delivery stream. Your stream then delivers it to your Splunk Enterprise or Splunk Cloud environment for querying and visualization. The following screenshot is from Splunk Enterprise.

In addition, you can monitor and inspect metrics associated with your delivery stream using the AWS console.
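For example, one of the stream's Splunk delivery metrics can be pulled from the AWS/Firehose CloudWatch namespace; DeliveryToSplunk.Success is a documented metric name, and the stream name is the one assumed throughout this walkthrough (the date invocation below is GNU-style):

# Check the average delivery success ratio to Splunk over the last hour.
$ aws cloudwatch get-metric-statistics \
    --namespace AWS/Firehose \
    --metric-name DeliveryToSplunk.Success \
    --dimensions Name=DeliveryStreamName,Value=FirehoseSplunkDeliveryStream \
    --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ) \
    --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ) \
    --period 300 --statistics Average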

Conclusion

Although our walkthrough uses VPC Flow Logs, the pattern can be used in many other scenarios. These include ingesting data from AWS IoT, other CloudWatch logs and events, Kinesis data streams, or other data sources using the Kinesis Agent or Kinesis Producer Library. You may use a Lambda blueprint or disable record transformation entirely depending on your use case. For an additional use case using Amazon Data Firehose, check out the This Is My Architecture video, which discusses how to securely centralize cross-account data analytics using Kinesis and Splunk.

If you found this post useful, be sure to check out Integrating Splunk with Amazon Kinesis Streams.


About the Authors

Tarik Makota

Tarik is a solutions architect with the Amazon Web Services Partner Network. He provides technical guidance, design advice, and thought leadership to AWS's most strategic software partners. His career includes work in an extremely broad range of software development and architecture roles across ERP, financial printing, benefit delivery and administration, and financial services. He holds an M.S. in Software Development and Management from Rochester Institute of Technology.

Roy Arsan

Roy is a solutions architect in the Splunk Partner Integrations team. He has a background in product development, cloud architecture, and building consumer and enterprise cloud applications. More recently, he has architected Splunk solutions on major cloud providers, including an AWS Quick Start for Splunk that enables AWS users to easily deploy distributed Splunk Enterprise straight from their AWS console. He is also the co-author of the AWS Lambda blueprints for Splunk. He holds an M.S. in Computer Science Engineering from the University of Michigan.

Yashika Jain

Yashika is a Senior Cloud Analytics Engineer at AWS, specializing in real-time analytics and event-driven architectures. She is dedicated to helping customers by providing deep technical guidance, driving best practices across real-time data platforms, and solving complex issues related to their streaming data architectures.

Mitali Sheth

Mitali is a Streaming Data Engineer in the AWS Professional Services team, specializing in real-time analytics and event-driven architectures for AWS's most strategic software customers. More recently, she has focused on data governance with AWS Lake Formation, building reliable data pipelines with AWS Glue, and modernizing streaming infrastructure with Amazon MSK and Amazon Managed Flink for large-scale enterprise deployments. She holds an M.S. in Computer Science from the University of Florida.
