IPFIX collector with extension support

Server Fault Asked by oonska on August 24, 2020

I need a tool to collect IPFIX records and log the contents of each packet to a log file or database to validate the accuracy of an IPFIX emitter. This IPFIX emitter sends enterprise extensions in the ipfix records that I need to validate as well as the standard suite.

So far I’ve looked into the NFDump and it covers my needs for collecting and storing records, but from what I see so far it won’t store the contents of the enterprise extensions.

Can NFDump be configured to store enterprise extensions? Is there a different IPFIX collector that will meet my needs?

3 Answers

For anyone coming here from Google, here is what I have learned (regarding open-source tools for this). I'm on my second generation of receiving IPFIX data; I used to use a tool (forked from libipfix) that received IPFIX and emitted it as JSON for ingestion into ELK. That is a bit long in the tooth now, and I wouldn't recommend it (particularly as NetScaler 11 somewhat broke some of its pre-stored templates).

I would recommend Logstash (5.3 or later) for receiving IPFIX data, especially if your intention is to ingest it into ELK. This is done with the 'netflow' codec (the input is 'udp'):

https://www.elastic.co/guide/en/logstash/current/plugins-codecs-netflow.html

Here's an example input

input {
  udp {
    host => "0.0.0.0"
    port => 4739
    codec => netflow {
      versions => [10]
      target => ipfix
    }
    type => ipfix
  }
}

To give you an idea of the output you get from that, here are a couple of messages: one for a TCP connection (LDAPS) and one for an HTTPS request (SSL terminated at the NetScaler), shown here using a 'stdout' output with the 'rubydebug' codec:
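For reference, the output stanza that produces dumps like these would look something like the following (a minimal sketch; in production you'd use an elasticsearch output instead):

```
output {
  stdout {
    codec => rubydebug
  }
}
```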

{
         "ipfix" => {
                "destinationTransportPort" => 39912,
                     "flowEndMicroseconds" => "2017-04-11T02:53:09.000Z",
                       "sourceIPv4Address" => "10.x.x.x",
                     "netscalerUnknown329" => 0,
                         "egressInterface" => 0,
                         "octetDeltaCount" => 6600,
                   "netscalerAppNameAppId" => 165707776,
                     "sourceTransportPort" => 636,
                                  "flowId" => 14049270,
                  "destinationIPv4Address" => "10.y.y.y",
                      "observationPointId" => 472006666,
                   "netscalerConnectionId" => 14049269,
                          "tcpControlBits" => 25,
                   "flowStartMicroseconds" => "2017-04-11T02:53:09.000Z",
                        "ingressInterface" => 2147483651,
                                 "version" => 10,
                        "packetDeltaCount" => 16,
                  "netscalerRoundTripTime" => 0,
              "netscalerConnectionChainID" => "00000000000000000000000000000000",
                               "ipVersion" => 4,
                      "protocolIdentifier" => 6,
                     "netscalerUnknown331" => 0,
                     "netscalerUnknown332" => 0,
                      "exportingProcessId" => 0,
                      "netscalerFlowFlags" => 1090527232,
                  "netscalerTransactionId" => 342306495,
        "netscalerConnectionChainHopCount" => 0
    },
    "@timestamp" => 2017-04-11T02:53:09.000Z,
      "@version" => "1",
          "host" => "172.28.128.3",
          "type" => "ipfix"
}


{
         "ipfix" => {
               "netscalerHttpReqUserAgent" => "",
                "destinationTransportPort" => 443,
                  "netscalerHttpReqCookie" => "",
                     "flowEndMicroseconds" => "2017-04-11T02:52:49.000Z",
                     "netscalerHttpReqUrl" => "/someblah",
                       "sourceIPv4Address" => "10.z.z.z",
                  "netscalerHttpReqMethod" => "POST",
                    "netscalerHttpReqHost" => "some.example.com",
                         "egressInterface" => 2147483651,
                         "octetDeltaCount" => 1165,
                   "netscalerAppNameAppId" => 36274176,
                     "sourceTransportPort" => 59959,
                                  "flowId" => 14043803,
           "netscalerHttpReqAuthorization" => "",
                 "netscalerHttpDomainName" => "",
                    "netscalerAaaUsername" => "",
                "netscalerHttpContentType" => "",
                  "destinationIPv4Address" => "10.w.w.w",
                      "observationPointId" => 472006666,
                     "netscalerHttpReqVia" => "",
                   "netscalerConnectionId" => 14043803,
                          "tcpControlBits" => 24,
                   "flowStartMicroseconds" => "2017-04-11T02:52:49.000Z",
                        "ingressInterface" => 1,
                                 "version" => 10,
                        "packetDeltaCount" => 1,
                     "netscalerUnknown330" => 0,
              "netscalerConnectionChainID" => "928ba0c1da3300000145ec5805800e00",
                               "ipVersion" => 4,
                      "protocolIdentifier" => 6,
                  "netscalerHttpResForwLB" => 0,
                 "netscalerHttpReqReferer" => "",
                      "exportingProcessId" => 0,
               "netscalerAppUnitNameAppId" => 0,
                      "netscalerFlowFlags" => 151134208,
                  "netscalerTransactionId" => 342305773,
                  "netscalerHttpResForwFB" => 0,
        "netscalerConnectionChainHopCount" => 1,
           "netscalerHttpReqXForwardedFor" => ""
    },
    "@timestamp" => 2017-04-11T02:52:51.000Z,
      "@version" => "1",
          "host" => "172.28.128.3",
          "type" => "ipfix"
}

I'm only using this in development at present, but it seems much better than what I was using before.

The only question is what you want to do with this now that it's ingestible. You could make a tool to join up the client->NS flow with the NS->backend flow, and with the application request (if you've logged that).
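As a hedged sketch of what that joining might look like: the field names below are taken from the sample records above, but the pairing rule (grouping flows that share a netscalerConnectionChainID and ordering them by hop count) is my assumption about how the NetScaler chains flows, not documented behaviour.

```python
from collections import defaultdict

def join_flow_chains(flows):
    """Group decoded IPFIX records by their NetScaler connection chain ID.

    Flows sharing a chain ID are ordered by hop count, so the
    client->NS hop comes before the NS->backend hop.
    """
    chains = defaultdict(list)
    for flow in flows:
        chains[flow["netscalerConnectionChainID"]].append(flow)
    return {
        chain_id: sorted(members, key=lambda f: f["netscalerConnectionChainHopCount"])
        for chain_id, members in chains.items()
    }

# Example: two hops of the same chain, plus one unchained flow
# (an all-zero chain ID, as in the first sample record above).
records = [
    {"netscalerConnectionChainID": "928b...0e00",
     "netscalerConnectionChainHopCount": 1, "sourceIPv4Address": "10.z.z.z"},
    {"netscalerConnectionChainID": "928b...0e00",
     "netscalerConnectionChainHopCount": 0, "sourceIPv4Address": "10.a.a.a"},
    {"netscalerConnectionChainID": "0" * 32,
     "netscalerConnectionChainHopCount": 0, "sourceIPv4Address": "10.x.x.x"},
]
chains = join_flow_chains(records)
```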

Answered by Cameron Kerr on August 24, 2020

As Felix suggested, a little more clarification and a link might be helpful. So here goes...

IPFIX adds the ability for vendors to extend the protocol with their own Information Elements (IEs). This is very powerful, but the protocol does not transmit all the information necessary for a collector to interpret those IEs. To usefully interpret an IE, a collector needs to know its datatype, semantics, etc.

For standard IEs this information is available in the IANA registry (http://www.iana.org/assignments/ipfix). For vendor IEs this information is not available in any standard location or format.

What this means for oonska is that a collector can support IPFIX but still not do anything useful with data exported using a vendor IE. For Scrutinizer, we try very hard to learn about and support all vendor IEs. If we don't currently support the IEs you are interested in then, as Jake noted, we can add support very quickly.

I don't know what you are looking for, but Scrutinizer already supports IPFIX vendor IEs from Barracuda, Cisco, Citrix, Extreme Networks, ntop, SonicWall, VMware, and others. If the IPFIX IEs you are interested in are not on that list, I'll be happy to see that they are added.

The basic information needed to add support for vendor IEs to Scrutinizer (or any collector) is:

elementName(vendorPEN/elementId)<datatype>{semantics}

The IANA page mentioned above has tables listing the semantics, units, and types currently defined for IPFIX.

Here is an example of a Plixer IE with the values filled in:
event_id(13745/106){identifier}

The format of the IE description isn't critical, but I like the above format from RFC 7013. It is easy to read and easy to parse, which makes importing new IEs trivial.
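To illustrate how easily that format parses, here is a minimal parser for descriptions in the shape of the event_id example above (the regex, function name, and returned field names are my own illustration, not part of RFC 7013):

```python
import re

# Matches descriptions like: event_id(13745/106){identifier}
IE_SPEC = re.compile(
    r"(?P<name>\w+)"                  # element name
    r"\((?P<pen>\d+)/(?P<id>\d+)\)"   # vendor PEN and element ID
    r"\{(?P<semantics>\w+)\}"         # semantics keyword
)

def parse_ie(spec):
    """Parse one IE description string into its component fields."""
    m = IE_SPEC.fullmatch(spec.strip())
    if m is None:
        raise ValueError(f"not a valid IE description: {spec!r}")
    return {
        "name": m.group("name"),
        "pen": int(m.group("pen")),
        "elementId": int(m.group("id")),
        "semantics": m.group("semantics"),
    }

ie = parse_ie("event_id(13745/106){identifier}")
```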

Other information that is useful includes:

  • The range of valid values
  • The meaning of any flag bits or enumerated values
  • Units (e.g. octets, packets, etc.)
  • A detailed description

Once new IEs have been defined Scrutinizer will give you the ability to look at the data in those IEs any way you like. You can build custom reports using the report designer (http://www.plixer.com/blog/advanced-netflow-reporting-2/custom-netflow-reporting/) or look at each individual flow using FlowView (http://www.plixer.com/blog/netflow-reporting-2/flowview-netflow/).

You can download a copy at http://www.plixer.com/Scrutinizer-Netflow-Sflow/scrutinizer.html

Hope that helps.

Answered by Andrew Feren on August 24, 2020

Scrutinizer can do this. We'll need a document that explains the contents of your enterprise elements, and we can have it done in a couple of hours.

Answered by Jake Wilson on August 24, 2020
