🗂️ Configuration Files
📄 config.json
Stores the system settings that control how the analyses are performed.
It was briefly covered earlier.
Restart
Whenever the config.json file is modified, the service must be restarted for the changes to take effect, either through the admin panel or via terminal with the command systemctl restart rr-flow-api.service.
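Before restarting, it is worth validating the JSON syntax, since a malformed config.json can keep the service from starting. A minimal sketch, assuming jq is installed and that the file lives under /opt/rr-flow-api/ (the path is an assumption; adjust it to your installation):

# Validate the syntax first (install path assumed)
jq . /opt/rr-flow-api/config.json > /dev/null && echo "config.json OK"
# Apply the change and confirm the service came back up
systemctl restart rr-flow-api.service
systemctl status rr-flow-api.service --no-pager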
By default, config.json ships with the following settings.
{
"api_allow_subnet": [
"127.0.0.1/32",
"::1",
"0.0.0.0/0",
"::/0"
],
"api_bind": "::",
"api_port": 5000,
"cache_lifetime": 1,
"collection_interval": 1,
"core_workers": 4,
"data_path": "/var/rr-flows/",
"debug": false,
"maximum_disk_gb": 70,
"password_admin_panel": "remontti",
"grafana": {
"datasources": "xxxxxxxxxxx",
"service_token": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"url": "http://localhost:3000"
},
"source_path": [
{
"buffer": 67108864,
"compress": "lz4",
"maximum_days": 365,
"name": "Border",
"port": 3055,
"proxyflow": {
"port": 4055
},
"sampling": 1024,
"snmp": {
"community": "public",
"ip": "10.10.10.2",
"port": 161,
"version": 2
},
"type": "netflow",
"vendor": "huawei"
}
]
}
Here is a description of the configuration items:
API Options
api_allow_subnet
A list of IP prefixes that are allowed to access the API. (Keep the localhost IPs)
"api_allow_subnet": [
"127.0.0.1/32",
"::1",
"0.0.0.0/0"
],
api_allow_subnet
It is highly recommended that you restrict this to your management IPs only, since JSON outputs are not authenticated. I am not responsible for any data leakage.
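For example, a restricted list that keeps localhost and allows a single management block (203.0.113.0/29 and 2001:db8::/48 are documentation placeholders; use your own prefixes):

"api_allow_subnet": [
"127.0.0.1/32",
"::1",
"203.0.113.0/29",
"2001:db8::/48"
],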
api_bind
IP address to which the API will be bound. Use 0.0.0.0 to bind all IPv4 addresses; the shipped default shown above is :: (all addresses).
"api_bind": "0.0.0.0",
api_port
Port on which the API will be available.
"api_port": 5000,
cache_lifetime
Cache lifetime in days. The cache is stored in /var/cache/rr-flow/.
"cache_lifetime": 3,
collection_interval
Flow collection interval in minutes. I do not recommend values like 2, 3, or 4; prefer intervals like 1, 5, or 10 (values that divide the hour evenly), since the interval arithmetic can otherwise cause some data to be missed or double-counted. The default is 5 min up to version 1.3.0 and 1 min from version 1.4.0 onward.
"collection_interval": 1,
core_workers
The number of worker processes allocated per CPU thread. Default is 4.
"core_workers": 4,
Warning
You may consider increasing this number if additional hardware capacity is available. However, be careful not to raise it too far, as this can overload or even crash the server.
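Before raising core_workers, check how many CPU threads the host actually has and how loaded it already is:

# Available CPU threads and current load average
nproc
uptime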
data_path
Path where the collected flow data will be stored.
"data_path": "/var/rr-flows/",
data_path/EXPORTER/YEAR/MONTH/DAY/nfcapd.YearMonthDayHourMinute
tree -sh /var/rr-flows/
├── Borda
│ ├── 2023
│ │ └── 07
│ │ └── 11
│ │ ├── nfcapd.202307111600
│ │ ├── nfcapd.202307111605
│ │ ├── nfcapd.202307111610
│ │ ├── nfcapd.202307111615
│ │ └── nfcapd.202307111620
│ └── nfcapd.current.29124
└── Cgnat
├── 2023
│ └── 07
│ └── 11
│ ├── nfcapd.202307111600
│ ├── nfcapd.202307111605
│ ├── nfcapd.202307111610
│ ├── nfcapd.202307111615
│ └── nfcapd.202307111620
└── nfcapd.current.29128
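To see how much space each exporter is consuming under data_path:

# Disk usage per flow source
du -sh /var/rr-flows/*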
debug
Enables or disables debug mode.
- false - Disabled
- true - Enabled
"debug": false,
Warning
When enabled, the log file /var/log/rr-flow/rr-flow.log will grow very quickly. I do not recommend enabling it unless you are troubleshooting and need to identify a possible problem.
language
Admin Panel language setting. Default is pt-BR.
"language": "pt-BR",
Available languages:
Code | Panel Language | Dashboards Language | Grafana Language | Native Name |
---|---|---|---|---|
id-ID | Indonesian | English (United States) | Indonesian | Bahasa Indonesia |
cs-CZ | Czech | English (United States) | Czech | Čeština |
de-DE | German | English (United States) | German | Deutsch |
en-AU | English (Australia) | English (United States) | English (United States) | English (Australia) |
en-CA | English (Canada) | English (United States) | English (United States) | English (Canada) |
en-GB | English (United Kingdom) | English (United States) | English (United States) | English (UK) |
en-US | English (United States) | English (United States) | English (United States) | English (US) |
es-AR | Spanish (Argentina) | Spanish (Spain) | Spanish (Spain) | Español (Argentina) |
es-CL | Spanish (Chile) | Spanish (Spain) | Spanish (Spain) | Español (Chile) |
es-CO | Spanish (Colombia) | Spanish (Spain) | Spanish (Spain) | Español (Colombia) |
es-ES | Spanish (Spain) | Spanish (Spain) | Spanish (Spain) | Español (España) |
es-MX | Spanish (Mexico) | Spanish (Spain) | Spanish (Spain) | Español (México) |
es-PE | Spanish (Peru) | Spanish (Spain) | Spanish (Spain) | Español (Perú) |
es-US | Spanish (United States) | Spanish (Spain) | Spanish (Spain) | Español (EE.UU.) |
es-VE | Spanish (Venezuela) | Spanish (Spain) | Spanish (Spain) | Español (Venezuela) |
fr-FR | French | English (United States) | French | Français |
it-IT | Italian | English (United States) | Italian | Italiano |
hu-HU | Hungarian | English (United States) | Hungarian | Magyar |
nl-NL | Dutch | English (United States) | Dutch | Nederlands |
pl-PL | Polish | English (United States) | Polish | Polski |
pt-BR | Portuguese (Brazil) | Portuguese (Brazil) | Portuguese (Brazil) | Português (Brasil) |
pt-PT | Portuguese (Portugal) | English (United States) | Portuguese (Portugal) | Português (Portugal) |
sv-SE | Swedish | English (United States) | Swedish | Svenska |
tr-TR | Turkish | English (United States) | Turkish | Türkçe |
ru-RU | Russian | English (United States) | Russian | Русский |
ko-KR | Korean | English (United States) | Korean | 한국어 |
zh-CN | Chinese (Simplified) | English (United States) | Chinese (Simplified) | 简体中文 |
zh-TW | Chinese (Traditional) | English (United States) | Chinese (Traditional) | 繁體中文 |
ja-JP | Japanese | English (United States) | Japanese | 日本語 |
grafana
Settings for Grafana integration.
- datasources: UID of the Grafana data source.
- service_token: Service token used to authenticate against Grafana.
- url: Grafana URL. (Default http://localhost:3000)
"grafana": [
{
"datasources": "xxxxxxxxxxxxxx",
"service_token": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"url": "http://localhost:3000"
}
],
maximum_disk_gb
Maximum space, in gigabytes, that may be used by the data_path directory (by default /var/rr-flows/).
"maximum_disk_gb": 70,
Attention: maximum_disk_gb
- Do not set more than 90% of your disk. Reserve at least 5% of the total space for the operating system to work properly.
- The application runs a cleanup routine once a day, removing the oldest files if the maximum_disk_gb limit is exceeded. However, this does not prevent the disk from filling up if data is collected too quickly before the next cleanup routine.
- Necessary estimation: The user needs to calculate the daily data collection rate and adjust the maximum_disk_gb value to ensure the disk does not reach the limit before cleaning. For example, if your collection generates 10 GB per day, leave an additional margin equivalent to this rate.
Example:
- Total disk: 80 GB
- Space for OS: 5% (4 GB)
- Daily collection margin: 10 GB (estimated)
- maximum_disk_gb: 66 GB (80 GB - 4 GB - 10 GB)
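The same estimate as a quick shell calculation, using the example numbers above:

# Sketch: 80 GB disk, 5% reserved for the OS, 10 GB/day collection margin
TOTAL_GB=80
OS_RESERVE_GB=$((TOTAL_GB * 5 / 100))   # 4 GB
DAILY_MARGIN_GB=10
echo "maximum_disk_gb: $((TOTAL_GB - OS_RESERVE_GB - DAILY_MARGIN_GB))"   # prints 66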
password_admin_panel
Password to access the administration panel.
"password_admin_panel": "remontti",
theme
Admin Panel Theme.
Theme | Value |
---|---|
Light | light |
Dark | dark |
"theme": "dark",
Add new flow source
To add one or more sources, add them to source_path
source_path
List of flow data sources (routers that will send the flows). You can add multiple data sources for different routers, depending on your license level. This allows you to monitor and collect flows from multiple origins.
- buffer: Sets the input buffer of the network socket to the given size in bytes.
- compress: Compression algorithm. (Default lz4)

Method | Description | Compression (%) | Read | Write |
---|---|---|---|---|
bz2 | High compression, but slow. | 65% | Slow | Slow |
zstd | Good compression and fast. | 55% | Fast | Fast |
lz4 | Very fast, medium compression. | 45% | Very fast | Very fast |
lzo | Fast, light compression. | 40% | Very fast | Fast |

- maximum_days: Maximum number of days the data will be kept for the source.
If the amount of stored data exceeds the limit set in maximum_disk_gb, files will be deleted based on space, not age, meaning even newer files may be removed to keep disk usage below the allowed limit.
- name: Name of the data source (NO SPACES ⚠️).
- comment: Data source comment.
- port: Port that will receive the data. (If using proxyflow, use a random port)
- sampling: Applies the sampling rate, unless the sampling rate is announced by the exporting device, in which case set the value to auto.
- snmp: SNMP settings.
- community: SNMP community. (Default public)
- ip: SNMP IP address.
- port: SNMP port. (Default 161)
- version: SNMP version. (Default v2)
- ssh: SSH access settings for command automation (optional, only required if you are going to use automation via SSH in data_traffic_analysis.json).
- host: SSH IP address.
- password: SSH password.
- port: SSH port. (Default 22)
- username: SSH username.
- type: Data source type.
  - netflow (v5, v7, v9, IPFIX)
  - sflow
- proxyflow.port: Enables the Proxy Collector to process ASNs. port should be set to the port where the router is sending the flow data. If you do not wish to use it, simply do not add proxyflow inside source_path.
"proxyflow": {
"port": 3058
},
- vendor: Vendor of the data source.
- huawei
- cisco
- juniper
- nokia
- routeros
- linux
vendor
For “vendor”, use only those listed above. If your vendor is not listed, use linux.
Use only lowercase letters, for example: huawei, not Huawei or HUAWEI.
NOTE: License
The number of available flow sources depends on your license.
Port
When configuring new flow sources, remember that each one must listen on its own collector port. Use different ports to avoid conflicts and ensure data collection.
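Before assigning a port, you can check whether it is already in use on the collector host (3055 and 3066 are the example ports used on this page):

# Look for collisions with your planned UDP collector ports
ss -lunp | grep -E ':(3055|3066)'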
Here are some examples:
{
//...
"source_path": [
{
"buffer": 67108864,
"compress": "lz4",
"name": "Border",
"port": 3055,
"sampling": 1024,
"snmp": [
{
"community": "public",
"ip": "10.10.10.2",
"port": 161,
"version": 2
}
],
"type": "netflow",
"vendor": "huawei"
},
{
"buffer": 67108864,
"compress": "lz4",
"name": "Cgnat",
"port": 3066,
"sampling": 5,
"snmp": [
{
"community": "public",
"ip": "10.10.10.3",
"port": 161,
"version": 2
}
],
"type": "netflow",
"vendor": "routeros"
},
{
"buffer": 67108864,
"compress": "lz4",
"name": "Switch",
"port": 3077,
"sampling": 1024,
"snmp": [
{
"community": "public",
"ip": "10.10.10.4",
"port": 161,
"version": 2
}
],
"type": "sflow",
"vendor": "huawei"
},
{
"buffer": 67108864,
"compress": "lz4",
"name": "Border_Mikrotik",
"port": 4088,
"proxyflow": [
{
"port": 5088
}
],
"sampling": 1,
"snmp": [
{
"community": "public",
"ip": "10.10.10.4",
"port": 161,
"version": 2
}
],
"type": "netflow",
"vendor": "huawei"
}
]
//...
}
SNMP v3
SNMP v3 is not yet supported, but if you wish to keep it configured, here’s an example.
//...
"snmp": [
{
"auth_password": "senha",
"auth_protocol": "sha", // md5 | sha
"ip": "10.10.10.7",
"port": 161,
"priv_password": "senha",
"priv_protocol": "aes", // des | aes
"username": "username",
"version": 3
}
],
//...
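If you keep SNMPv3 credentials configured, you can at least validate them from the collector with snmpwalk; the values below mirror the example above:

# Query sysDescr with SNMPv3 authPriv to confirm the credentials work
snmpwalk -v3 -l authPriv -u username -a SHA -A 'password' -x AES -X 'password' 10.10.10.7 1.3.6.1.2.1.1.1.0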
📄 license.json
Contains information about the software license.
Structure
{
"key": "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE"
}
license.json
It is necessary to restart the service for the license to take effect.
If you are changing the license via terminal, you can restart the rr-flow-api service with the command:
systemctl restart rr-flow-api.service
📄 custom-filters.json
File containing a set of custom filters to optimize data visualization on dashboards. You can create your own filters.
Each entry is composed of filter, where you specify the filter expression, and name, where you define a name for your filter.
Structure
// Example: match anything whose source or destination port is 22
// and that contains the FIN or RST flags
//...
{
"filter": "port 22 and (flags F or flags R)",
"name": "SSH brute force"
},
//...
custom-filters.json
When editing, it is not necessary to restart the service for the changes to take effect.
Here is a basic list for you to create your own filters and use in Dashboards that have a field to enter manual filters.
Filters
- src » Source.
- dst » Destination.
- proto » Communication protocol.
- port » Port number.
- src port » Source port.
- dst port » Destination port.
- net » Network prefix.
- src net » Source network prefix.
- dst net » Destination network prefix.
- ip » IP address.
- src ip » Source IP address.
- dst ip » Destination IP address.
- as » Autonomous System.
- src as » Source AS.
- dst as » Destination AS.
- prev as » Previous AS.
- next as » Next AS.
- bgpnext ip » Next BGP IP.
- next ip » Next IP.
- if number » Interface. (SNMP number)
- in if number » Input interface.
- out if number » Output interface.
- xip » Expanded IP address.
- src xip » Expanded source IP address.
- dst xip » Expanded destination IP address.
Operators
- AND » Logical AND operator.
- NOT » Logical NOT operator.
- OR » Logical OR operator.

For the OR operator, be careful when combining more than one filter. For example, to combine two IP addresses with two protocol types:
(ip 1.1.1.1 OR ip 8.8.8.8) AND (proto TCP OR proto UDP) AND (port 53 OR port 443)

Port range:
dst port >= 0 and dst port <= 100
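Since the collected data is stored as nfcapd files under data_path, you can prototype a filter directly with nfdump before saving it here (assuming nfdump is installed; the path matches the examples on this page):

# Sanity-check a candidate filter against one exporter's files
nfdump -R /var/rr-flows/Border -o line -c 20 'port 22 and (flags F or flags R)'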
Flags
TCP Flags
- A » ACK (Acknowledgment) - Confirms data receipt.
- S » SYN (Synchronize) - Initiates a connection.
- F » FIN (Finish) - Indicates end of data transmission.
- R » RST (Reset) - Resets the connection.
- P » PSH (Push) - Forces immediate data delivery.
- U » URG (Urgent) - Indicates urgent data.
- X » All flags enabled - Indicates an unusual packet.

The order of flags is not relevant. Unmentioned flags are treated as unspecified. To get flows with only the SYN flag enabled, use the syntax flags S, not flags AFRPU.
Example
[
{
"filter": "(src ip 8.8.8.8 or src ip 1.1.1.1) AND src port 53 AND dst net 200.200.200.0/22",
"name": "Traffic from famous DNS servers to my prefix"
}
//...
]
📄 fav-asn-prefix-graphs.json
Use this file to declare the ASNs you want to explore through charts and statistical analysis. Additionally, it is possible to specify associated prefixes for greater granularity.
Filtering ASNs and Network Prefixes
In this file, you can create a set of elements specifying Autonomous Systems (ASNs) and network prefixes.
Structure
[
{
"description": "Description",
"filter": {
"as": [
"ASN A",
"ASN B",
"ASN C",
//..
],
"prefix": [
"Prefixes A",
"Prefixes B",
"Prefixes C"
//...
]
}
}
]
Example ASN + Prefix
In this case, you can combine ASNs with prefixes.
//...
{
"description": "A + B",
"filter": {
"as": [
"123",
"321"
],
"prefix": [
"123.123.123./22",
"321.321.321.0/22",
"2804:123::/32",
"2804:321::/32"
]
}
},
//...
Example ASN only
You can add as many ASNs as you want.
{
"description": "Youtube",
"filter": {
"as": [
"36040"
]
}
},
{
"description": "Google All",
"filter": {
"as": [
"15169"
"36040",
"396982"
]
}
},
//...
Example Prefix only
Entries whose description contains the word CDN receive dedicated treatment in the dashboard “General Analysis - ASN + Prefixes (Favorites)”.
//...
{
"description": "CDN from my ISP",
"filter": {
"prefix": [
"192.192.0.0/26",
"2001:db8:face::/64"
]
}
},
//...
fav-asn-prefix-graphs.json
When editing, it is not necessary to restart the service for changes to take effect.
📄 fav-services-graphs.json
List of known services to simplify the use of Dashboards. It is possible to create combinations with more than one port and more than one protocol type.
Structure
[
{
"NOME": {
"port": [
"número da porta"
],
"proto": [
"protocolo"
]
},
}
]
Example
[
{
"DNS": {
"port": [
"53"
],
"proto": [
"udp",
"tcp"
]
},
"FTP": {
"port": [
"21",
"20"
],
"proto": [
"tcp"
]
},
"MySQL": {
"port": [
"3306"
],
"proto": [
"tcp"
]
},
"NTP": {
"port": [
"123"
],
"proto": [
"tcp",
"udp"
]
},
"POP E-Mail": {
"port": [
"110",
"143",
"995"
],
"proto": [
"tcp"
]
},
"Ping": {
"port": [],
"proto": [
"icmp"
]
},
"Port 0": {
"port": [
"0"
],
"proto": [
"udp"
]
},
"RDP": {
"port": [
"3389"
],
"proto": [
"tcp",
"udp"
]
},
"SMTP E-Mail": {
"port": [
"25",
"465",
"587"
],
"proto": [
"tcp"
]
},
"SSH": {
"port": [
"22"
],
"proto": [
"tcp",
"udp"
]
},
"SpeedTest": {
"port": [
"8080"
],
"proto": [
"tcp"
]
},
"Telnet": {
"port": [
"23"
],
"proto": [
"tcp",
"udp"
]
},
"Tunel GRE": {
"port": [],
"proto": [
"gre"
]
},
"Web HTTP/HTTPS": {
"port": [
"80",
"443"
],
"proto": [
"tcp",
"udp"
]
}
}
]
fav-services-graphs.json
When editing, it is not necessary to restart the service for changes to take effect.
📄 ignore-asn.json
This file allows the strategic exclusion of certain ASNs (Autonomous Systems) from analysis, useful for focusing on relevant information.
Structure
[
{
"asn": "AS Number",
"description": "Description"
}
]
Example
[
{
"asn": 65530,
"description": "Ignore iBGPs"
},
{
"asn": 123,
"description": "Ignore my AS"
},
{
"asn": 321,
"description": "Ignore my Client AS"
},
{
"asn": 0,
"description": "Ignore the BUG"
}
]
ignore-asn.json
When editing, it is not necessary to restart the service for changes to take effect.
📄 my-prefix.json
Register your own network prefixes here. By including smaller prefixes, you can speed up the display of more detailed charts.
Structure
{
"asn_prefix": {
"IPv4": [
"IPv4 Prefixes"
],
"IPv6": [
"IPv6 Prefixes"
]
}
}
Example
{
"asn_prefix": {
"IPv4": [
"192.168.0.0/22",
"192.168.0.0/23",
"192.168.2.0/23",
"192.168.0.0/24",
"192.168.1.0/24",
"192.168.2.0/24",
"192.168.3.0/24"
],
"IPv6": [
"2001:db8::/32",
"2001:db8::/33",
"2001:0db8:8000::/33",
"2001:db8::/34",
"2001:db8:4000::/34",
"2001:db8:8000::/34",
"2001:db8:c000::/34"
]
}
}
my-prefix.json
When editing, it is not necessary to restart the service for changes to take effect.
📄 my-prefix-int.json
Register here your own prefixes or those of clients, as well as ASNs if necessary. Example: servers, dedicated clients, client ASNs, among others.
Structure
[
{
"description": "My ASNs and Prefixes",
"filter": {
"as": [
"ASN"
],
"prefix": [
"Prefix"
]
}
}
]
Example
[
{
"description": "My ASNs and Prefixes",
"filter": {
"as": [
"64512",
"64514"
],
"prefix": [
"192.168.0.0/21",
"192.168.128.0/22",
"2001:db8::/32",
"2001:db9::/32"
]
}
},
{
"description": "All my client ASNs",
"filter": {
"as": [
"65530",
"65531",
"65532"
]
}
},
{
"description": "Client ASN X",
"filter": {
"as": [
"65530"
],
"prefix": [
"192.168.144.0/21"
]
}
},
{
"description": "All IPs",
"filter": {
"prefix": [
"0.0.0.0/0",
"::/0"
]
}
},
{
"description": "Servers",
"filter": {
"prefix": [
"192.168.168.0/26",
"2001:db8:bebe:cafe::/64"
]
}
},
{
"description": "Dedicated Client",
"filter": {
"prefix": [
"192.168.168.225/32",
"2001:db8:bebe:100::/56"
]
}
},
{
"description": "Dedicated Client",
"filter": {
"prefix": [
"192.168.169.128/28",
"2001:db8:f0da::/48"
]
}
}
]
my-prefix-int.json
When editing, it is not necessary to restart the service for changes to take effect.
📄 notify.json
The notify.json file contains the settings needed for sending notifications via email and Telegram within the application. This file is crucial for the application to send alerts and reports automatically based on monitored events.
The structure of notify.json consists of two main sections: email and telegram. Each section has its own specific settings, described below.
email section
This section defines the settings for sending emails using an SMTP server. The system can send notifications to configured recipients when certain events or conditions are met.
- default_destination: The default email address where notifications will be sent.
- sender_name: The name that will appear as the sender of the email. Can be customized.
- smtp_host: The address of the SMTP server used to send emails.
- smtp_password: The password for the email account used to authenticate with the SMTP server.
- smtp_port: The port used for the SMTP connection. Commonly 587 (STARTTLS) or 465 (SSL/TLS).
- smtp_username: The username (usually the email address) used to authenticate with the SMTP server.
Test E-mail
You can test your connection with the email server and send a test using the API:
{URL}:{PORT}/api/test/email/connection
{URL}:{PORT}/api/test/email/Your-Message/example@example.com
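With the API on localhost and the default port used earlier in this page, the test calls look like this (the message and address are placeholders):

# Test the SMTP connection configured in notify.json
curl "http://127.0.0.1:5000/api/test/email/connection"
# Send a test message to a chosen address
curl "http://127.0.0.1:5000/api/test/email/Test-Message/example@example.com"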
telegram section
This section defines the settings for sending messages via Telegram using the Telegram bot API. The application can send messages to a configured chat or group.
- allow_responses: List of groups or users allowed to execute commands with the bot.
- bot_token: The Telegram bot token, required for authentication and sending messages. This token can be obtained directly from the Telegram API when creating a bot.
- default_chat: The chat or group ID where messages will be sent by default.
Telegram Topics
To send messages to a specific topic within a group, you need to know the topic ID. The format is the group ID followed by a comma and the topic ID.
Example: -1001234567890,50, where -1001234567890 is the group ID and 50 is the topic ID within that group.
For more help, access the Telegram Bot menu.
Bot Commands
Access the Telegram Bot menu for more information.
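If you do not know a chat or group ID yet, one common approach is to send a message to the bot and then query the Telegram Bot API getUpdates method; the token below is the placeholder from the example that follows:

# Chat IDs appear in the "chat":{"id":...} fields of the response
curl -s "https://api.telegram.org/bot123456789:ABCdefGHIjklMNO456PQRstUVWxyz/getUpdates"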
Example
{
"email": {
"default_destination": "example@example.com",
"sender_name": "RR-Flow Notifications",
"smtp_host": "smtp.example.com",
"smtp_password": "example_password",
"smtp_port": "587",
"smtp_username": "notify@example.com"
},
"telegram": {
"allow_responses": [
"-1000000000000",
"200000000"
],
"bot_token": "123456789:ABCdefGHIjklMNO456PQRstUVWxyz",
"default_chat": "-1001234567890"
}
}
📄 data_traffic_analysis.json
The data_traffic_analysis.json file contains the settings for network traffic monitoring, with data stored in the database at $data_path/traffic_data.db. It can be used both to log and store collected traffic data and to configure alerts and actions executed when predefined conditions are met. In it you define traffic descriptions, filters, and collection sources, and specify how alerts or actions are delivered (notices via email and Telegram, or scripts and commands executed via SSH).
Example Structure
[
{
"alerts": {
"email": {
"activate": true,
"graph_color": "#9C27B0",
"graph_send": true,
"graph_time": 90,
"message": "<b>[$status]:</b> Trafego de ICMP elevado <br>👉 aggr_flows: $aggr_flows<br>👉 bpp: $bpp<br>👉 bps:$bps<br>👉 bytes: $bytes<br>👉 packets: $packets <br><br>👉 description: $description<br>👉 filter: $filter<br>👉 sources: $sources",
"subject": "[$status] Trafego de ICMP elevado",
"recipients": [
"empresa1@empresa1.com",
"empresa2@empresa2.com"
]
},
"script": {
"activate": false,
"status": {
"incident": {
"delay": 0,
"file": "/opt/rr-flow-api/scripts/example.1.sh \"$status\" \"$aggr_flows\" \"$bpp\" \"$bps\" \"$bytes\" \"$packets\" \"$description\" \"$filter\" \"$sources\" \"VALOR EXTRA1\" \"VALOR EXTRA2\""
},
"resolved": {
"delay": 10,
"file": "/opt/rr-flow-api/scripts/example.1.sh \"$status\" \"$aggr_flows\" \"$bpp\" \"$bps\" \"$bytes\" \"$packets\" \"$description\" \"$filter\" \"$sources\" \"VALOR EXTRA1\" \"VALOR EXTRA2\""
}
}
},
"ssh": {
"activate": true,
"status": {
"incident": {
"commands": [
"system-view",
"route-policy RRFLOW_TESTE permit node 10",
"apply local-preference 999",
" apply community 123:123",
"commit",
"quit",
"run save",
"y",
"quit",
"quit"
],
"delay": 0
},
"resolved": {
"commands": [
"system-view",
"route-policy RRFLOW_TESTE permit node 10",
"apply local-preference 888",
" apply community 321:321",
"commit",
"quit",
"run save",
"y",
"quit",
"quit"
],
"delay": 3600
}
}
},
"telegram": {
"activate": true,
"graph_color": "#D81B60",
"graph_send": true,
"graph_time": 90,
"message": "[$status] <b>Trafego de ICMP elevado</b> \n👉 aggr_flows: $aggr_flows\n👉 bpp: $bpp\n👉 bps:$bps\n👉 bytes: $bytes\n👉 packets: $packets \n\n👉 description: $description\n👉 filter: $filter\n👉 sources: $sources \n É teste só",
"recipients": [
"200000000",
"200000001",
"200000002"
]
}
},
"conditions": {
"logic": "AND",
"rules": [
{
"operator": ">",
"value": 10485760,
"variable": "bps"
},
{
"operator": ">",
"value": 300000,
"variable": "packets"
}
]
},
"description": "Tráfego de ICMP",
"filter": "proto icmp",
"retention": 30,
"sources": [
"BORDA-RS",
"BORDA-SC",
"BORDA-PR"
]
},
{
"alerts": {
"telegram": {
"activate": true,
"graph_color": "#D81B60",
"graph_send": true,
"graph_time": 90,
"message": "[$status] <b>Possível ataque (BORDA-RS)</b> \n👉 aggr_flows: $aggr_flows\n👉 bpp: $bpp\n👉 bps:$bps\n👉 bytes: $bytes\n👉 packets: $packets \n\n👉 description: $description\n👉 filter: $filter\n👉 sources: $sources \n É teste só"
}
},
"conditions": {
"logic": "AND",
"rules": [
{
"operator": ">",
"value": 1073741824,
"variable": "bps"
}
]
},
"description": "Possível ataque (BORDA-RS)",
"filter": "port 53 and (bpp > 512 and bpp <9000)",
"retention": 90,
"sources": [
"BORDA-RS"
]
},
{
"alerts": {
"ssh": {
"activate": true,
"status": {
"incident": {
"commands": [
"system-view",
"route-policy RRFLOW_TESTE permit node 10",
"apply local-preference 999",
" apply community 123:123",
"commit",
"quit",
"run save",
"y",
"quit",
"quit"
],
"delay": 0
},
"resolved": {
"commands": [
"system-view",
"route-policy RRFLOW_TESTE permit node 10",
"apply local-preference 888",
" apply community 321:321",
"commit",
"quit",
"run save",
"y",
"quit",
"quit"
],
"delay": 0
}
}
},
"telegram": {
"activate": true,
"message": "[$status] <b>Possível ataque (BORDA-SC)</b> \n👉 aggr_flows: $aggr_flows\n👉 bpp: $bpp\n👉 bps:$bps\n👉 bytes: $bytes\n👉 packets: $packets \n\n👉 description: $description\n👉 filter: $filter\n👉 sources: $sources \n É teste só"
}
},
"conditions": {
"logic": "AND",
"rules": [
{
"operator": ">",
"value": 1073741824,
"variable": "bps"
}
]
},
"description": "Possível ataque (BORDA-SC)",
"filter": "port 53 and bpp > 600",
"retention": 90,
"sources": [
"BORDA-SC"
]
},
{
"description": "Tráfego Todas Bordas ",
"filter": "any",
"retention": 365,
"sources": [
"BORDA-RS",
"BORDA-SC",
"BORDA-PR"
]
}
]
Configuration
📋 Requirements
- description: Sets the name or identification of the monitored traffic.
- filter: Sets the traffic capture filter, which follows the standard used by tools such as tcpdump.
- retention: Sets the number of days traffic data will be kept in the database.
- sources: List of sources (or edge locations) from which traffic data is collected.
🚨 alerts
This section defines how alerts are sent and which actions are executed when the conditions are met.

📧 email
- activate: Enables or disables email sending. Default is false.
- graph_color: Color of the graph included in the email, in hexadecimal format. If not set, a default color is used.
- graph_send: Defines whether the graph is included in the email. Default is false if not set.
- graph_time: Period in minutes for the data displayed in the graph. Default is 60 minutes if not set. (This data period must exist for the graph to be generated.)
- message: Email message text, where variables can be used (see the “Notification and Script Variables” section).
- recipients: List of recipient email addresses. If not set, the default recipient configured in notify.json is used.
- subject: Email subject, with variable support.
💬 telegram
- activate: Enables or disables sending alerts via Telegram. Default is false.
- graph_color: Graph color, in hexadecimal format. If not set, a default color is used.
- graph_send: Defines whether the graph is included in the Telegram message. Default is false if not set.
- graph_time: Period in minutes for the data displayed in the graph. Default is 60 minutes if not set.
- message: Telegram message text, with variable support.
- recipients: List of Telegram chat IDs. If not set, the default recipient ID configured in notify.json is used.
🖥️ script
- activate: Enables or disables script execution. Default is false.
- file: (simple/backward-compatible mode) Path to the script to be executed, with variable support.
- status: (advanced mode, optional)
  - incident: Script to run when the alert condition is detected.
    - delay: (optional) Time in seconds to wait before executing the script.
    - file: Script path with variable support.
  - resolved: Script to run when the condition returns to normal (resolved).
    - delay: (optional) Time in seconds to wait before executing the script.
    - file: Script path with variable support.
- If the status block is not used, the configuration remains compatible with older versions and always runs the script specified in file immediately.
🔑 ssh
- activate: Enables or disables the execution of SSH commands. Default is false.
- command_mode: (optional) Defines the execution mode of SSH commands.
  - "shell" (default) sends commands in an interactive session (e.g. Cisco, Huawei, Juniper, Linux);
  - "exec" sends commands individually (e.g. RouterOS, some Linux systems).
  - If not set, the system tries to choose the ideal mode automatically according to the vendor.
- status
  - incident: Commands executed via SSH when the alert condition is detected.
    - delay: (optional) Time in seconds to wait before executing the commands.
    - commands: List of commands sent to the device. Supports variables.
  - resolved: Commands executed via SSH when the alert condition returns to normal (resolved).
    - delay: (optional) Time in seconds to wait before executing the commands.
    - commands: List of commands sent to the device. Supports variables.
- SSH connection parameters (host, username, password, port) must be configured in the ssh block of the corresponding source in config.json.
⚙️ conditions
Defines the conditions that must be met for the alerts to be triggered.
- logic: Defines the logic between conditions. Can be AND or OR.
- rules: List of rules; there can be several. Each rule contains:
  - operator: The comparison operator (>, <, >=, <=).
  - value: The reference value for the comparison.
  - variable: The variable to be compared (see the “Rule Variables” section).
Rule Variables
The following variables can be used to create conditions (rules » variable):
- aggr_flows: Aggregated number of flows.
- bpp: Bits per packet.
- bps: Bits per second.
- bytes: Total number of bytes.
- packets: Total number of packets.
Notification and Script Variables
The following variables can be used in email, Telegram messages, and scripts:
- $aggr_flows: Aggregated number of flows.
- $bpp: Bits per packet.
- $bps: Bits per second.
- $bytes: Total number of bytes.
- $packets: Total number of packets.
- $status: Incident status (incident or resolved).
- $description: Item description.
- $filter: Applied filter.
- $sources: Sources where the data was collected.
Default Behavior
- If recipients are not set in the email or telegram sections, the values defined in notify.json are used.
- If graph_color is not set, a default color is used.
- The default value for graph_send is false.
- The default value for graph_time is 60 minutes if not specified.
Script Example
Here is a simple example that can be used to receive the data. Be creative!
#!/bin/bash
# Receive the variables passed as arguments from the example:
# "file": "/opt/rr-flow-api/scripts/example.1.sh \"$status\" \"$aggr_flows\" \"$bpp\" \"$bps\" \"$bytes\" \"$packets\" \"$description\" \"$filter\" \"$sources\" \"EXTRA VALUE1\" \"EXTRA VALUE2\""
incident=${1}
aggr_flows=${2}
bpp=${3}
bps=${4}
bytes=${5}
packets=${6}
description=${7}
filter=${8}
sources=${9}
extra1=${10}
extra2=${11}
# Create or append to log at /tmp/example.1.log
echo "==========================" >> /tmp/example.1.log
echo "Status: [${incident}]" >> /tmp/example.1.log
echo "Date: $(date)" >> /tmp/example.1.log
echo "--">> /tmp/example.1.log
echo "Description: ${description}" >> /tmp/example.1.log
echo "Filter: ${filter}" >> /tmp/example.1.log
echo "Router(s): ${sources}" >> /tmp/example.1.log
echo "--">> /tmp/example.1.log
echo "aggr_flows: ${aggr_flows}" >> /tmp/example.1.log
echo "bpp: ${bpp}" >> /tmp/example.1.log
echo "bps: ${bps}" >> /tmp/example.1.log
echo "bytes: ${bytes}" >> /tmp/example.1.log
echo "packets: ${packets}" >> /tmp/example.1.log
echo "--">> /tmp/example.1.log
echo "Extra Value 1: ${extra1}" >> /tmp/example.1.log
echo "Extra Value 2: ${extra2}" >> /tmp/example.1.log
echo "==========================" >> /tmp/example.1.log
echo "Ok!"
📄 interfaces.json
Lists the interfaces of the flow sources. This file is created automatically at service startup if it does not exist. The SNMP connection is established based on the data present in source_path. It can be edited for customization, as well as to remove unnecessary interfaces if desired.
Necessary adjustment for Upstream interfaces
By default, when interface data is obtained, every interface comes with “type”: 0. You must change upstream interfaces to “type”: 1.
- type:
- 1 = ⬆️ Upstream
- 0 = ⬇️ Downstream
Only Upstream interfaces will be shown in the selection menus.
Structure
{
"FLOW NAME": [
{
"desc_value": "Interface description",
"indice": "Index number obtained by SNMP",
"name_value": "Interface name",
"type": 0 or 1
}
]
}
Obtaining interfaces via SNMP
To obtain the interface data automatically, access the admin panel (for example: http://ip:5000/login), click the [Get SNMP data] button, then click the [Get interface data] button.
Wait until the interfaces.json
file is updated. If you repeat the procedure, it will be overwritten.
Creating interfaces.json manually
It is possible to create your own interfaces file, but you will need snmpwalk to obtain the indices; example commands are shown below.
Error obtaining SNMP data
If it is not possible to establish a connection, an example will be created for the flow source.
The OIDs queried to obtain the data are:
- Index number / interface name: 1.3.6.1.2.1.2.2.1.2
- Interface description: 1.3.6.1.2.1.31.1.1.1.18
You can use tools like snmpwalk to check if you can establish a connection, or nmap to verify if the SNMP/UDP port is open.
To obtain the interface names, the index will be the last number after the dot.
# snmpwalk -v2c -c public 10.250.250.1 .1.3.6.1.2.1.2.2.1.2
iso.3.6.1.2.1.2.2.1.2.79 = STRING: "NULL0"
iso.3.6.1.2.1.2.2.1.2.80 = STRING: "InLoopBack0"
iso.3.6.1.2.1.2.2.1.2.81 = STRING: "GigabitEthernet0/0/0"
iso.3.6.1.2.1.2.2.1.2.132 = STRING: "25GE0/1/28.400"
iso.3.6.1.2.1.2.2.1.2.133 = STRING: "25GE0/1/28.401"
iso.3.6.1.2.1.2.2.1.2.180 = STRING: "25GE0/1/29.1004"
iso.3.6.1.2.1.2.2.1.2.181 = STRING: "25GE0/1/29.1006"
Obtain the interface description/comment:
# snmpwalk -v2c -c public 10.250.250.1 .1.3.6.1.2.1.31.1.1.1.18
iso.3.6.1.2.1.31.1.1.1.18.79 = ""
iso.3.6.1.2.1.31.1.1.1.18.80 = ""
iso.3.6.1.2.1.31.1.1.1.18.81 = ""
iso.3.6.1.2.1.31.1.1.1.18.132 = STRING: "IX_SP_IPV4"
iso.3.6.1.2.1.31.1.1.1.18.133 = STRING: "IX_SP_IPV6"
iso.3.6.1.2.1.31.1.1.1.18.180 = STRING: "OPERADORA_IPV4"
iso.3.6.1.2.1.31.1.1.1.18.181 = STRING: "OPERADORA_IPV6"
Combine the information to build the structure for interfaces.json.
{
"Borda": [
{
"desc_value": "IX_SP_IPV4",
"indice": "132",
"name_value": "25GE0/1/28.400",
"type": 1
},
{
"desc_value": "IX_SP_IPV6",
"indice": "133",
"name_value": "25GE0/1/28.401",
"type": 1
},
{
"desc_value": "OPERADORA_IPV4",
"indice": "180",
"name_value": "25GE0/1/29.1004",
"type": 1
},
{
"desc_value": "OPERADORA_IPV6",
"indice": "181",
"name_value": "25GE0/1/29.1006",
"type": 1
}
]
}
interfaces.json
When editing, it is not necessary to restart the service for changes to take effect.
📄 peers.json
Obtains the list of BGP Peers. It is also generated automatically at startup if it does not exist. It is necessary to establish an SNMP connection based on the data present in source_path. It can be edited to suit your needs.
Structure
{
"FLOW NAME": [
{
"asn": "AS NUMBER",
"ip_peer": "REMOTE PEER IP ADDRESS",
"name": "DESCRIPTION FOR SESSION"
},
]
}
Obtaining Peer via SNMP
Through the admin panel, click the [Get SNMP data] button and then [Get peers data]. It is important to note that this action will always overwrite the existing file.
Support
At the moment, it is possible to obtain peers only from Huawei and Cisco. However, you can manually create this file easily.
Creating manually
You can create your own peers file, but you will need snmpwalk to obtain the data; the relevant OIDs are listed below.
Example
{
"Border": [
{
"asn": "15169",
"ip_peer": "187.16.216.55",
"name": "GOOGLE"
},
{
"asn": "65530",
"ip_peer": "10.50.50.1",
"name": "Cgnat"
}
],
"Cgnat": [
{
"asn": "65530",
"ip_peer": "10.50.50.2",
"name": "iBGP Borda"
},
{
"asn": "65530",
"ip_peer": "10.50.50.6",
"name": "iBGP BNG"
}
]
}
Error obtaining SNMP data
If it is not possible to establish a connection, an example will be created for the flow source.
The OIDs queried to obtain the data are:
OID PEER HUAWEI
IP : 1.3.6.1.4.1.2011.5.25.177.1.1.2.1.4.0
AS : 1.3.6.1.4.1.2011.5.25.177.1.1.2.1.2.0
OID PEER CISCO
IP : 1.3.6.1.2.1.15.3.1.7
AS : 1.3.6.1.2.1.15.3.1.9
You can use tools like snmpwalk to check if you can establish a connection, or nmap to check if the SNMP/UDP port is open.
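For example, to walk the Cisco peer-address OID from the list above and confirm the SNMP port is reachable (adjust the IP and community to one of your sources):

# Walk the BGP peer remote-address table (Cisco OID)
snmpwalk -v2c -c public 10.10.10.2 1.3.6.1.2.1.15.3.1.7
# Check whether the SNMP UDP port is open at all
nmap -sU -p 161 10.10.10.2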
peers.json
When editing, it is not necessary to restart the service for changes to take effect.