Logging

Proper logging is essential for monitoring, debugging, and maintaining your FiveM server. This guide covers structured logging, log rotation, and centralized logging with Grafana Loki.

Server Logs

FiveM generates logs in multiple locations:

  • logs/ directory: Server logs, error logs, and resource logs
  • Console output: Real-time server console
  • txAdmin interface: Web-based log viewer

Log Locations

# Default FiveM log locations
/opt/fivem/logs/
├── server.log            # Main server log
├── server-error.log      # Error log
├── server-console.log    # Console output
└── [resource-name].log   # Individual resource logs

Log Rotation

Configure log rotation to prevent disk space issues and maintain log history.

Basic Log Rotation

Linux (logrotate)

Create a logrotate configuration:

# /etc/logrotate.d/fivem
/opt/fivem/logs/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 0644 fivem fivem
    sharedscripts
    postrotate
        # Reload server if needed (optional)
        systemctl reload fivem || true
    endscript
}

Configuration Options:

  • daily: Rotate logs daily
  • rotate 7: Keep 7 days of logs
  • compress: Compress old logs
  • delaycompress: Compress on next rotation
  • missingok: Don’t error if log file missing
  • notifempty: Don’t rotate empty logs
  • create: Set permissions on new log files

Test Log Rotation

# Test logrotate configuration (dry run)
sudo logrotate -d /etc/logrotate.d/fivem

# Force rotation (for testing)
sudo logrotate -f /etc/logrotate.d/fivem

Advanced Rotation

Size-Based Rotation

# /etc/logrotate.d/fivem
/opt/fivem/logs/*.log {
    size 100M
    rotate 10
    compress
    delaycompress
    missingok
    notifempty
    create 0644 fivem fivem
}

Hourly Rotation (High-Traffic Servers)

# /etc/logrotate.d/fivem-hourly
/opt/fivem/logs/*.log {
    hourly
    rotate 24
    compress
    delaycompress
    missingok
    notifempty
    create 0644 fivem fivem
}

Structured Logging

Structured logging uses a consistent, machine-readable format (typically JSON) so logs are easier to parse and analyze.

Basic Structured Logging

-- server.lua
local function Log(level, message, data)
    local logEntry = {
        timestamp = os.date('%Y-%m-%d %H:%M:%S'),
        level = level,
        message = message,
        server = GetConvar('sv_hostname', 'Unknown'),
        data = data or {}
    }
    print(json.encode(logEntry))
end

-- Usage
Log('INFO', 'Player joined', {
    playerId = source,
    playerName = GetPlayerName(source)
})

Log('ERROR', 'Database query failed', {
    query = 'SELECT * FROM users',
    error = errorMessage
})

Advanced Structured Logging

-- logging.lua
local LogLevel = {
    DEBUG = 0,
    INFO  = 1,
    WARN  = 2,
    ERROR = 3,
    FATAL = 4
}

local currentLogLevel = LogLevel.INFO

local function ShouldLog(level)
    return level >= currentLogLevel
end

local function StructuredLog(level, levelName, message, metadata)
    if not ShouldLog(level) then return end

    local logEntry = {
        timestamp = os.date('!%Y-%m-%dT%H:%M:%SZ'),
        level = levelName,
        message = message,
        server = GetConvar('sv_hostname', 'Unknown'),
        version = GetResourceMetadata(GetCurrentResourceName(), 'version', 0),
        metadata = metadata or {}
    }

    -- Write to console
    print(json.encode(logEntry))

    -- Write to file (if file logging enabled)
    -- WriteToFile(logEntry)
end

-- Convenience functions
function LogDebug(message, metadata) StructuredLog(LogLevel.DEBUG, 'DEBUG', message, metadata) end
function LogInfo(message, metadata)  StructuredLog(LogLevel.INFO,  'INFO',  message, metadata) end
function LogWarn(message, metadata)  StructuredLog(LogLevel.WARN,  'WARN',  message, metadata) end
function LogError(message, metadata) StructuredLog(LogLevel.ERROR, 'ERROR', message, metadata) end
function LogFatal(message, metadata) StructuredLog(LogLevel.FATAL, 'FATAL', message, metadata) end

-- Export for use in other resources
exports('LogInfo', LogInfo)
exports('LogError', LogError)
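
The file-writing hook above is left as a comment. Below is a minimal sketch of one possible WriteToFile, assuming a JSON-lines file kept in the resource directory and written with FiveM's LoadResourceFile/SaveResourceFile natives; the structured.log file name is an assumption.

-- Minimal sketch of the commented-out WriteToFile hook; the file name is assumed.
local LOG_FILE = 'structured.log'

local function WriteToFile(logEntry)
    local resource = GetCurrentResourceName()
    -- LoadResourceFile returns nil when the file does not exist yet
    local existing = LoadResourceFile(resource, LOG_FILE) or ''
    -- One JSON object per line, which Promtail's json stage parses directly
    local line = json.encode(logEntry) .. '\n'
    -- SaveResourceFile rewrites the whole file; -1 means "use the full string length"
    SaveResourceFile(resource, LOG_FILE, existing .. line, -1)
end

Because SaveResourceFile rewrites the entire file on every call, this suits low-volume logging; for busy servers, printing to the console log and letting logrotate handle files is cheaper.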

Promtail Setup and Configuration

Promtail is the log shipper for Grafana Loki. It collects logs from files and ships them to Loki.

Installation

Ubuntu/Debian

# Download Promtail
wget https://github.com/grafana/loki/releases/download/v2.9.0/promtail-linux-amd64.zip
unzip promtail-linux-amd64.zip
sudo mv promtail-linux-amd64 /usr/local/bin/promtail
sudo chmod +x /usr/local/bin/promtail

Create Promtail User

sudo useradd --no-create-home --shell /bin/false promtail

Configuration

Create the Promtail configuration file:

# /etc/promtail/config.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: fivem
    static_configs:
      - targets:
          - localhost
        labels:
          job: fivem
          server: myserver
          __path__: /opt/fivem/logs/*.log
    pipeline_stages:
      # Parse JSON logs
      - json:
          expressions:
            timestamp: timestamp
            level: level
            message: message
            server: server
      # Extract timestamp
      - timestamp:
          source: timestamp
          format: '2006-01-02 15:04:05'
      # Add labels
      - labels:
          level:
          server:
      # Output format
      - output:
          source: message

Advanced Promtail Configuration

Multiple Log Sources

# /etc/promtail/config.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  # FiveM server logs
  - job_name: fivem-server
    static_configs:
      - targets:
          - localhost
        labels:
          job: fivem
          log_type: server
          __path__: /opt/fivem/logs/server.log
    pipeline_stages:
      - json:
          expressions:
            timestamp: timestamp
            level: level
            message: message
      - timestamp:
          source: timestamp
          format: '2006-01-02 15:04:05'
      - labels:
          level:

  # FiveM error logs
  - job_name: fivem-errors
    static_configs:
      - targets:
          - localhost
        labels:
          job: fivem
          log_type: error
          __path__: /opt/fivem/logs/server-error.log
    pipeline_stages:
      - regex:
          expression: '^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<level>\w+) (?P<message>.*)'
      - timestamp:
          source: timestamp
          format: '2006-01-02 15:04:05'
      - labels:
          level:

  # Resource logs
  - job_name: fivem-resources
    static_configs:
      - targets:
          - localhost
        labels:
          job: fivem
          log_type: resource
          __path__: /opt/fivem/logs/*.log
          __path_exclude__: /opt/fivem/logs/server*.log
    pipeline_stages:
      - regex:
          expression: '^\[(?P<resource>[^\]]+)\] (?P<message>.*)'
      - labels:
          resource:

Systemd Service

Create a systemd service for Promtail:

# /etc/systemd/system/promtail.service
[Unit]
Description=Promtail service
After=network.target

[Service]
Type=simple
User=promtail
ExecStart=/usr/local/bin/promtail -config.file /etc/promtail/config.yml
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and Start

sudo systemctl daemon-reload
sudo systemctl enable promtail
sudo systemctl start promtail
sudo systemctl status promtail

Grafana Loki Integration

Grafana Loki is a horizontally scalable, highly available log aggregation system.

Installation

# docker-compose.yml
version: '3.8'

services:
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    volumes:
      - ./loki-data:/loki
      - ./loki-config.yaml:/etc/loki/local-config.yaml
    restart: unless-stopped

  promtail:
    image: grafana/promtail:2.9.0
    volumes:
      - /opt/fivem/logs:/var/log/fivem:ro
      - ./promtail-config.yml:/etc/promtail/config.yml
      - /tmp/positions.yaml:/tmp/positions.yaml
    command: -config.file=/etc/promtail/config.yml
    restart: unless-stopped
    depends_on:
      - loki

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - ./grafana-data:/var/lib/grafana
    restart: unless-stopped
    depends_on:
      - loki

Loki Configuration

# loki-config.yaml
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    shared_store: filesystem
  filesystem:
    directory: /loki/chunks

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: true
  retention_period: 168h

Grafana Data Source

  1. Open Grafana (http://localhost:3000)
  2. Log in with admin/admin and change the password
  3. Go to Configuration → Data Sources
  4. Add data source → Select “Loki”
  5. URL: http://loki:3100
  6. Click “Save & Test”

Log Parsing Patterns

Common Log Patterns

JSON Logs

# Promtail pipeline for JSON logs
pipeline_stages:
  - json:
      expressions:
        timestamp: timestamp
        level: level
        message: message
        playerId: data.playerId
  - timestamp:
      source: timestamp
      format: '2006-01-02T15:04:05Z'
  - labels:
      level:
      playerId:

Standard FiveM Logs

# Promtail pipeline for standard logs
pipeline_stages:
  - regex:
      expression: '^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[(?P<level>\w+)\] (?P<message>.*)'
  - timestamp:
      source: timestamp
      format: '2006-01-02 15:04:05'
  - labels:
      level:

Multi-Line Logs

# Promtail pipeline for stack traces
pipeline_stages:
  # multiline must run first so continuation lines are joined before parsing
  - multiline:
      firstline: '^\d{4}-\d{2}-\d{2}'
      max_wait_time: 3s
  - regex:
      expression: '^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<level>\w+) (?P<message>.*)'
  - timestamp:
      source: timestamp
      format: '2006-01-02 15:04:05'
  - labels:
      level:
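
As a concrete source of such entries, the hedged Lua sketch below prints a timestamped ERROR line followed by a traceback; only the first line matches the firstline pattern, so Promtail folds the traceback into the same entry.

-- Illustrative only: a failing pcall whose traceback spans several lines,
-- producing the kind of multi-line entry the stage above reassembles.
local ok, err = pcall(function()
    error('unexpected state in save handler') -- hypothetical failure
end)
if not ok then
    -- Only this first line matches '^\d{4}-\d{2}-\d{2}'; the traceback
    -- lines do not, so Promtail appends them to the same log entry.
    print(('%s ERROR %s\n%s'):format(os.date('%Y-%m-%d %H:%M:%S'), err, debug.traceback()))
end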

Custom Parsing

Extract Player Information

pipeline_stages:
  - regex:
      expression: 'Player (?P<playerId>\d+) \((?P<playerName>[^)]+)\)'
  - labels:
      playerId:
      playerName:
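
This regex only matches if the server actually emits lines in the Player <id> (<name>) shape, which is not a stock FiveM format. A hedged sketch of a handler that produces it:

-- Assumed convention, not a built-in FiveM log format: emit lines such as
-- "Player 12 (SomeName) joined" so the regex above can capture both fields.
AddEventHandler('playerJoining', function()
    local src = source
    print(('Player %d (%s) joined'):format(src, GetPlayerName(src) or 'Unknown'))
end)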

Extract Resource Names

pipeline_stages:
  - regex:
      expression: '\[(?P<resource>[^\]]+)\]'
  - labels:
      resource:

Alerting Setup

Set up alerts in Grafana for critical log events.

Grafana Alert Rules

Alert: High Error Rate

# grafana-alerts.yml
groups:
  - name: fivem-alerts
    interval: 1m
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate({job="fivem", level="ERROR"}[5m])) > 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High error rate detected"
          description: "Error rate is {{ $value }} errors/second"

Alert: Server Crash

      - alert: ServerCrash
        expr: |
          sum(count_over_time({job="fivem"} |= "FATAL" [5m])) > 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Server crash detected"
          description: "FATAL error in server logs"
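
This alert only fires if something actually writes FATAL entries. One possible trigger, using the LogFatal helper from the structured-logging section (the watched resource name is just an example):

-- Example trigger: log at FATAL level when a critical resource stops.
-- Uses LogFatal from the structured-logging section; the name is hypothetical.
AddEventHandler('onResourceStop', function(resourceName)
    if resourceName == 'oxmysql' then -- assumed critical dependency
        LogFatal('Critical resource stopped', { resource = resourceName })
    end
end)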

Alert Channels

Configure notification channels in Grafana:

  1. Go to Alerting → Notification channels
  2. Add channel (Discord, Email, Slack, etc.)
  3. Configure webhook URL or credentials
  4. Test notification

Discord Webhook Example

{ "content": "🚨 **FiveM Server Alert**", "embeds": [{ "title": "{{ .GroupLabels.alertname }}", "description": "{{ .CommonAnnotations.description }}", "color": 15158332, "fields": [{ "name": "Severity", "value": "{{ .CommonLabels.severity }}", "inline": true }] }] }

Centralized Logging Architecture

Architecture Overview

FiveM Server → Log Files → Promtail → Loki → Grafana
                   ↓                    ↓
             Log Rotation        Archive Storage

Multi-Server Setup

For multiple servers, use a centralized Loki instance:

Central Loki Server

# docker-compose.yml (Central Server)
version: '3.8'

services:
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"
    volumes:
      - ./loki-data:/loki
      - ./loki-config.yaml:/etc/loki/local-config.yaml
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - ./grafana-data:/var/lib/grafana
    restart: unless-stopped
    depends_on:
      - loki

Remote Promtail Configuration

# /etc/promtail/config.yml (on each FiveM server)
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://central-loki-server:3100/loki/api/v1/push

scrape_configs:
  - job_name: fivem
    static_configs:
      - targets:
          - localhost
        labels:
          job: fivem
          server: server-01   # Unique server identifier
          __path__: /opt/fivem/logs/*.log
    pipeline_stages:
      - json:
          expressions:
            timestamp: timestamp
            level: level
            message: message
      - timestamp:
          source: timestamp
          format: '2006-01-02 15:04:05'
      - labels:
          level:
          server:

Log Retention

Configure retention policies:

# loki-config.yaml
limits_config:
  retention_period: 720h   # 30 days

table_manager:
  retention_deletes_enabled: true
  retention_period: 720h

Best Practices

  1. Use Structured Logging: JSON format for easier parsing
  2. Label Appropriately: Use labels for filtering (server, level, resource)
  3. Set Retention: Configure retention based on storage capacity
  4. Monitor Promtail: Ensure Promtail is running and shipping logs
  5. Test Alerts: Regularly test alert rules
  6. Backup Logs: Archive important logs before rotation
  7. Secure Access: Use authentication for Grafana and Loki
  8. Resource Limits: Set appropriate limits to prevent storage issues

Troubleshooting

Promtail Not Shipping Logs

# Check Promtail status
sudo systemctl status promtail

# Check Promtail logs
sudo journalctl -u promtail -f

# Test Promtail configuration
promtail -config.file /etc/promtail/config.yml -dry-run

Loki Not Receiving Logs

# Check Loki status
docker ps | grep loki

# Check Loki logs
docker logs loki

# Test Loki endpoint
curl http://localhost:3100/ready

High Log Volume

# Tune Promtail client batching for high volume
clients:
  - url: http://localhost:3100/loki/api/v1/push
    batchwait: 1s        # max time to wait before sending a batch
    batchsize: 1048576   # max batch size in bytes (1 MiB)
    timeout: 10s

Summary

  • Log Rotation: Use logrotate to prevent disk space issues
  • Structured Logging: Use JSON format for easier parsing
  • Promtail: Collects and ships logs to Loki
  • Grafana Loki: Centralized log aggregation system
  • Alerting: Set up alerts for critical events
  • Multi-Server: Use centralized Loki for multiple servers
  • Best Practices: Structure logs, label appropriately, set retention