Log Analysis Part 1: Using Fast Reverse Proxy (FRP) to Expose Logging Infrastructure

2020-05-04

Log Analysis Series Overview

This post is the first in a three-part series on collecting and analyzing the blakejarvis.com web server logs, focused on identifying internet scanners and exploit attempts.

The Problem: Sending Data to a Home Network

After publishing my blog, I observed a significant number of internet scanners attempting to compromise my website by requesting URIs associated with known vulnerabilities, uploading PHP injections, or port scanning ports 80 and 443. I wanted an easy way to collect the nginx logs and send them to an ELK instance located within my home local area network (LAN), but I also wanted to do this securely, without exposing any ports on my home IP address to the public internet.

How Should Devices in a Private Network Be Accessed?

There are four primary ways devices within a LAN behind a router can be accessed by other devices on the public internet:

  1. Port forward - This is a router configuration whereby the router passes all traffic on a certain port directly to a port on a device within a private network. Port forwarding is effective but can expose a device to the public internet without firewall rules. In addition, internet service providers assign home users semi-static IP addresses that can change as frequently as every month, which could cause interruptions when sending data to a residential IP address.

  2. Virtual Private Network (VPN) - VPNs are the most robust and secure way of connecting devices over the public internet, regardless of their location behind a router. While robust, configuring a VPN involves significant overhead and can be overkill for exposing only one service.

  3. SSH Port Forwarding - SSH tunnels can be used to forward ports from a local server to a remote server, using the syntax ssh -R 2000:localhost:9200 remote-server, where remote-server exposes the localhost's port 9200 on its port 2000. SSH port forwarding is a viable option for accessing devices behind a NAT or firewall.

  4. Fast Reverse Proxy (FRP) - Fast Reverse Proxy (FRP) is a GitHub project that allows ports to be forwarded and exposed on other systems in a client / server architecture, similar to SSH port forwarding. I decided to use FRP over SSH port forwarding due to FRP's robust logging and feature set, such as supported docker images, traffic dashboards, and configuration options.

The Architecture

To connect to the Elasticsearch server located behind a router, FRP can be used to forward the Elasticsearch port from the server to the blakejarvis.com server, where logs can be sent over the FRP service, bypassing the NAT.
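The data path described above can be sketched as follows (the ports and direction come from the FRP configs in this post; the frpc connection is outbound only, which is what lets it bypass the NAT):

```
 home LAN (behind NAT)                    public internet
┌───────────────────────┐          ┌──────────────────────────────┐
│ Elasticsearch :9200   │   TLS    │ blakejarvis.com              │
│    └─ frpc ───────────┼─────────►│   frps :2000                 │
│       (outbound only) │          │    └─ exposes :9200 on       │
└───────────────────────┘          │       localhost for nginx    │
                                   │       log shipping           │
                                   └──────────────────────────────┘
```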

Configuring FRP on the Server

The FRP server is the device exposed to the public internet, where the client can access it, but the server cannot access the client due to a NAT or firewall. In this case, the FRP server is blakejarvis.com. The following steps were taken on the blakejarvis.com server to get the FRP service up:

Retrieve the FRP Server Docker image from Docker Hub:

user@blakejarvis
docker pull snowdreamtech/frps

Create a config file named frps.ini with the following content (see frps_full.ini in the FRP repository as a reference):

frps.ini
[common]
bind_port = 2000
tls_only = true

Start the FRP Server Docker container:

user@blakejarvis
docker run --network host -d -v $(pwd)/frps.ini:/etc/frp/frps.ini --name frps snowdreamtech/frps

Inspect the docker logs to ensure the following was returned, indicating the service is up:

user@blakejarvis
docker logs [container_id] -f -t
2020/04/12 00:09:26 [I] [service.go:157] frps tcp listen on 0.0.0.0:2000
2020/04/12 00:09:26 [I] [root.go:209] start frps success

Configuring FRP on the Client

The FRP client is the device that is behind a NAT or firewall. In this case, I wish to expose the Elasticsearch port on a device within my home network to the blakejarvis.com server by forwarding that port to the blakejarvis.com server where the Elasticsearch port can be reached through the FRP tunnel. The Elasticsearch service is already running and port 9200 is open. The following steps were taken on the Elasticsearch server within the home network to get the FRP service up:

Retrieve the FRP Client Docker image from Docker Hub:

user@elasticsearch
docker pull snowdreamtech/frpc

Create a config file named frpc.ini with the following content (see frpc_full.ini in the FRP repository as a reference):

frpc.ini
[common]
server_addr = 3.84.178.124
server_port = 2000
tls_enable = true

[tcp]
type = tcp
local_ip = 127.0.0.1
local_port = 9200
remote_port = 9200

Start the FRP Client Docker container:

user@elasticsearch
docker run --network host -d -v $(pwd)/frpc.ini:/etc/frp/frpc.ini --name frpc snowdreamtech/frpc

Inspect the docker logs to ensure the following was returned, indicating the service is up:

user@elasticsearch
docker logs [container_id] -f -t
2020/04/12 00:09:31 [I] [service.go:282] [xxx] login to server success, get run id [xxx], server udp port [0]
2020/04/12 00:09:31 [I] [proxy_manager.go:144] [xxx] proxy added: [tcp]
2020/04/12 00:09:32 [I] [control.go:179] [xxx] [tcp] start proxy success

Testing the Connection

For my use case, the forwarded port is exposed only on localhost of the blakejarvis.com server. I can test whether the connection is up by running the following:

Test if FRP port 2000 is open:

user@blakejarvis
netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State 
tcp6       0      0 :::443                  :::*                    LISTEN
tcp6       0      0 :::80                   :::*                    LISTEN
tcp6       0      0 :::2000                 :::*                    LISTEN
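The netstat check above can also be scripted. Below is a minimal sketch using bash's built-in /dev/tcp redirection; `port_open` is my own helper (not part of FRP), and it assumes bash and coreutils `timeout` are available:

```shell
#!/usr/bin/env bash
# port_open HOST PORT - return 0 if HOST:PORT accepts a TCP connection.
# Uses bash's /dev/tcp pseudo-device; timeout avoids hanging on filtered ports.
port_open() {
  local host=$1 port=$2
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Example: check the FRP bind port from frps.ini.
if port_open 127.0.0.1 2000; then
  echo "frps port is listening"
else
  echo "frps port is not reachable"
fi
```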

Test if FRP sends data to/from Elasticsearch:

user@blakejarvis
curl 127.0.0.1:9200 -u elastic
Enter host password for user 'elastic':
{
  "cluster_name" : "docker-cluster",
  [...]
  "tagline" : "You Know, for Search"
}

The Elasticsearch service handled the request over FRP. It works! Note that AWS firewall rules are used to restrict access to the FRP port to the LAN's public IP address.
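The AWS restriction mentioned above can be expressed as a security group ingress rule. A sketch using the AWS CLI, assuming the CLI is configured; the security group ID is a placeholder and 203.0.113.25 is an example (documentation-range) home IP, not my real one:

```shell
# Allow TCP 2000 (the FRP bind port) only from the home LAN's public IP.
# Replace the group ID and CIDR with real values before running.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 2000 \
  --cidr 203.0.113.25/32
```

Because home IP addresses are only semi-static, this rule needs updating whenever the ISP rotates the address.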

In a later post I will cover which logs I am sending from the server into Elasticsearch, and what information I aim to obtain from data aggregations.
