<style>
/* Start numbering at H2 */
.markdown-body { counter-reset: h2; }
/* H2 increments the main counter and resets lower levels */
.markdown-body h2 { counter-increment: h2; counter-reset: h3; }
.markdown-body h3 { counter-increment: h3; counter-reset: h4; }
.markdown-body h4 { counter-increment: h4; counter-reset: h5; }
.markdown-body h5 { counter-increment: h5; counter-reset: h6; }
.markdown-body h6 { counter-increment: h6; }
/* Show numbers (no numbering for H1 because we don't define h1::before) */
.markdown-body h2::before { content: counter(h2) ". "; }
.markdown-body h3::before { content: counter(h2) "." counter(h3) " "; }
.markdown-body h4::before { content: counter(h2) "." counter(h3) "." counter(h4) " "; }
.markdown-body h5::before { content: counter(h2) "." counter(h3) "." counter(h4) "." counter(h5) " "; }
.markdown-body h6::before { content: counter(h2) "." counter(h3) "." counter(h4) "." counter(h5) "." counter(h6) " "; }
</style>
# Setting up Dockge, NGinx Proxy Manager (Reverse Proxy) and Duniter v2s ĞTest Docker container stacks on one server
This is a *light* documentation of my own (@Nicolas80) setup on an Oracle Cloud VM (Ubuntu 24.04 LTS; aarch64 architecture) to run Duniter ĞTest nodes in Docker container stacks, managed by Dockge and exposed via NGinx Proxy Manager as reverse proxy.
I don't give many details about the server setup itself, since that is out of scope for this documentation.
Please be **careful** when following these instructions: a wrong configuration could lead to data loss, and there could be mistakes in this document or in the way I did things ⚠️
## Dockge stack (to manage all other Docker container stacks)
<https://github.com/louislam/dockge>
Small docker container that can manage multiple docker compose stacks on the same host.
> **_NOTE:_** Since we only want to work with Docker Compose, we create a *separate folder* for *each stack*.
>
>And in each folder we create a `compose.yaml` file with the configuration of that stack.
>
>We will have to do this manually for the first stack (Dockge itself) - then we will be able to create new stacks from the Dockge web UI.
We create this first stack in a folder named `dockge` - that name will be important later on, when referring to the custom docker network from all the other docker container stacks.
In my case, I put it in `/home/$USER/docker/dockge/`.
Also, we define another folder `/home/$USER/docker/stacks/` that will be used to store all the other docker compose stacks that will be managed by Dockge (you should also create it).
In this stack, we define a custom docker network `dockge_net` that will always be running, so that we can reuse it for all the other docker container stacks that need to be exposed through the same reverse proxy container (see the next section about the reverse proxy).
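The folder layout described above can be created up front with a couple of commands (a sketch; adapt the base path to your own setup):

``` shell
# Base folder for everything Docker-related (adapt to your setup)
BASE="$HOME/docker"
# Folder for the Dockge stack itself (with its data subfolder)
mkdir -p "$BASE/dockge/data"
# Folder that will hold all the other stacks managed by Dockge
mkdir -p "$BASE/stacks"
```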
>/home/\$USER/docker/dockge/compose.yaml file
``` yaml
services:
dockge:
image: louislam/dockge:1
# Always providing a *unique* container_name for the services to be able to refer to them
# from the reverse proxy later on
container_name: dockge
restart: unless-stopped
ports:
      # Only exposing the Dockge web UI port locally ("127.0.0.1")!
# x.x.x.x : Host Port : Container Port
- 127.0.0.1:5001:5001
volumes:
- /var/run/docker.sock:/var/run/docker.sock
#- ./data:/app/data
- /home/$USER/docker/dockge/data:/app/data
# If you want to use private registries, you need to share the auth file with Dockge:
# - /root/.docker/:/root/.docker
# Stacks Directory
# ⚠️ READ IT CAREFULLY. If you did it wrong, your data could end up writing into a WRONG PATH.
# ⚠️ 1. FULL path only. No relative path (MUST)
# ⚠️ 2. Left Stacks Path === Right Stacks Path (MUST)
#- /opt/stacks:/opt/stacks
- /home/$USER/docker/stacks:/home/$USER/docker/stacks
environment:
# Tell Dockge where is your stacks directory
- DOCKGE_STACKS_DIR=/home/$USER/docker/stacks
networks:
dockge_net:
# Using a custom docker network to be able to share it with all other stacks later on
networks:
dockge_net:
driver: bridge
```
⚠️ The *full* name of that docker network (as seen from outside this stack) will be `dockge_dockge_net`, because Docker Compose adds the stack folder name as a prefix when the network is referred to from other stacks.
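This prefix comes from Docker Compose's *project name*, which defaults to the stack's folder name; a tiny sketch of how the full network name is derived (plain shell, just illustrating the naming convention):

``` shell
PROJECT=dockge        # Compose project name = folder name of the stack
NETWORK=dockge_net    # network key defined in compose.yaml
# Compose names the network "<project>_<network>"
echo "${PROJECT}_${NETWORK}"
```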
### Finishing configuring Dockge and accessing Web UI
To finish the configuration of Dockge, we need to access its web UI to set up the admin user.
Since in my case it's on a remote (Oracle) server, I created an SSH tunnel to forward the port 5001 from the server to my local computer.
>Example of SSH tunnel command, forwarding local (client) port 5001 to remote server host \"127.0.0.1\" and remote port 5001
``` shell
ssh youruser@yourserver.brussels.ovh -L5001:127.0.0.1:5001
```
Then you can access the web UI on your local computer at `http://127.0.0.1:5001/`.
## NGinx Proxy Manager as Reverse Proxy (on same shared docker network)
<https://github.com/NginxProxyManager/nginx-proxy-manager>
You can have **only ONE** reverse proxy container running on the server; it will be used to expose all the other containers (like Duniter nodes) via domain names and SSL/TLS certificates.
The reason is that, for your server's *single* public IP address, only one container can listen on ports `80` (HTTP) and `443` (HTTPS) - the standard ports for web traffic.
From `dockge` web UI, we can create a new stack for NGinx Proxy Manager (NPM) as reverse proxy.
You can give for stack name something like `nginx-proxy-manager` (which will create a folder with that name in the stacks folder defined in Dockge).
>Then we edit the `compose.yaml` file in the UI (which will be created in that folder)
``` yaml
services:
#First login with this default user:
# Email: admin@example.com
# Password: changeme
nginx-proxy-manager:
image: jc21/nginx-proxy-manager:latest
container_name: nginx
restart: unless-stopped
ports:
- 0.0.0.0:80:80
- 0.0.0.0:443:443
# The Admin Web UI port; which we don't want to expose publicly => "127.0.0.1"
- 127.0.0.1:81:81
volumes:
- ./data:/data
- ./letsencrypt:/etc/letsencrypt
networks:
dockge_dockge_net: null
# Using the same shared docker network `dockge_dockge_net` with all
# docker container stacks that we want to expose via that reverse proxy.
networks:
dockge_dockge_net:
external: true
```
### Configuring NGinx Proxy Manager
After starting that stack from Dockge web UI, we can access the NPM web UI via an SSH tunnel (since we only exposed it on localhost).
>Example of SSH tunnel command, forwarding local (client) port 81 to remote server host \"127.0.0.1\" and remote port 81
``` shell
ssh youruser@yourserver.brussels.ovh -L81:127.0.0.1:81
```
Then you can access the NGinx web UI on your local computer at `http://127.0.0.1:81/`.
### Optional extra steps
Now, from NGinx Proxy Manager web UI, you can create SSL/TLS certificates for your domain names (using Let's Encrypt) and add new proxy hosts to forward requests to the different docker container stacks that you will create later on.
We can also create proxy hosts for the 2 admin web UIs we just defined (**Dockge** and **NGinx Proxy Manager**) so they can be reached remotely via domain names secured with SSL/TLS certificates. But those 2 **need** to stay **secure** and accessible only by you (the **admin**), so if you expose them you also **need** to create access rules to properly protect them ⚠️
I will not cover those parts here.
### Additional notes
I now use a different application for the reverse proxy, which provides extra functionality: the open source version of **[Pangolin](https://github.com/fosrl/pangolin)**.
That application is more flexible and powerful than NGinx Proxy Manager and also makes it easy **and safe** to expose admin web UIs, with proper authentication and access rules relying on SSO (Single Sign-On).
I will not cover that here.
## Duniter (v2s) ĞTest stacks
### Archive node
Same as before, from `dockge` web UI, we can create a new stack for our Duniter v2s Archive node.
You can give for stack name something like `duniter-gtest-archive` (which will create a folder with that name in the stacks folder defined in **Dockge**).
>Then we edit the `compose.yaml` file in the UI (which will be created in that folder)
``` yaml
services:
duniter-gtest-archive:
image: duniter/duniter-v2s-gtest-1100:1000-0.12.0
platform: linux/arm64/v8
container_name: duniter-gtest-archive
restart: unless-stopped
ports:
# # (Private) Prometheus (for monitoring)
# # can't seem to add it as a data source in grafana :-/
# - 127.0.0.1:9617:9615
# RPC port exposed only on localhost & reverse proxy (localhost can be useful for access with `gcli` from within remote server)
- 127.0.0.1:9955:9944
# P2P/IPFS port exposed only with reverse proxy - no need to expose it on localhost
# - 127.0.0.1:30335:30333
volumes:
- data-mirror:/var/lib/duniter/
environment:
# gdev, gtest or g1
- DUNITER_CHAIN_NAME=gtest
- DUNITER_VALIDATOR=false
- DUNITER_PRUNING_PROFILE=archive # <--- To be used from SQUID
- DUNITER_NODE_NAME=ChangeMe-GTest-archive # Name of the node that will appear on the network
- DUNITER_LISTEN_ADDR=/ip4/0.0.0.0/tcp/30333/ws
      # Adapt the dns part with your own domain name for the archive node's P2P endpoint
- DUNITER_PUBLIC_ADDR=/dns/archive.gtest.fr.brussels.ovh/tcp/443/wss
# This allows communicating public endpoints to other nodes
      # Adapt the dns part with your own domain name for the archive node's RPC endpoint
- DUNITER_PUBLIC_RPC=wss://archive-rpc.gtest.fr.brussels.ovh/
      # Adapt the dns part with your own domain name for the Squid node's graphql endpoint (see later in this doc)
- DUNITER_PUBLIC_SQUID=https://squid.gtest.fr.brussels.ovh/v1/graphql
# Path to a json file containing public endpoints to gossip on the network
#- DUNITER_PUBLIC_ENDPOINTS=
networks:
dockge_dockge_net: null
volumes:
data-mirror: null
# Using the same shared docker network `dockge_dockge_net` with all
# docker container stacks that we want to expose via that reverse proxy.
networks:
dockge_dockge_net:
external: true
```
#### Reverse Proxy configuration (within NGinx Proxy Manager)
Please adapt the dns names below with your own domain names.
In the Proxy Hosts, we use the `container_name` and *internal* port values defined in the compose file above to refer to the container we want to expose (**this works because they are on the same docker network**).
>RPC Access (port 9944/ws)
Domain Names: archive-rpc.gtest.fr.brussels.ovh
Scheme: http
#Using "container_name"
Forward Hostname / IP: duniter-gtest-archive
Forward Port: 9944
Websockets Support: Enabled
SSL Certificate: Let's Encrypt (Force SSL: Enabled)
>P2P/IPFS Access (port 30333/ws)
Domain Names: archive.gtest.fr.brussels.ovh
Scheme: http
#Using "container_name"
Forward Hostname / IP: duniter-gtest-archive
Forward Port: 30333
Websockets Support: Enabled
SSL Certificate: Let's Encrypt (Force SSL: Enabled)
#### Testing RPC Access (port 9944/ws; NOT exposed for Smith nodes!)
We need to check the starting logs of the node to retrieve relevant information.
You can start the stack from Dockge web UI and also see the logs there.
>You can also use the following command; when inside the folder where the `compose.yaml` file is located (for me: `/home/$USER/docker/stacks/duniter-gtest-archive/`)
``` shell
docker compose logs -f
duniter-gtest-archive | Node key file '/var/lib/duniter/node.key' exists.
duniter-gtest-archive | Node peer ID is '12D3KooWJRtTHx39h2sgMgWkMt1iAu9H4Pd7WTvfS2bCk6JuujSN'.
duniter-gtest-archive | Starting duniter with parameters: --name Nicolas80-GTest-archive --node-key-file /var/lib/duniter/node.key --public-addr /dns/archive.gtest.fr.brussels.ovh/tcp/443/wss --public-rpc wss://archive-rpc.gtest.fr.brussels.ovh/ --public-squid https://squid.gtest.fr.brussels.ovh/v1/graphql --listen-addr /ip4/0.0.0.0/tcp/30333/ws --rpc-cors all --state-pruning archive --blocks-pruning archive --chain gtest -d /var/lib/duniter --unsafe-rpc-external
duniter-gtest-archive | 2026-01-10 08:54:20 Duniter
duniter-gtest-archive | 2026-01-10 08:54:20 ✌ version 0.12.0-unknown
duniter-gtest-archive | 2026-01-10 08:54:20 ❤ by librelois <c@elo.tf>:tuxmain <tuxmain@zettascript.org>:c-geek <https://forum.duniter.org/u/cgeek>:HugoTrentesaux <https://trentesaux.fr>:bgallois <benjamin@gallois.cc>:Duniter Developers <https://duniter.org>:Axiom-Team Developers <https://axiom-team.fr>, 2021-2026
duniter-gtest-archive | 2026-01-10 08:54:20 📋 Chain specification: ĞTest
duniter-gtest-archive | 2026-01-10 08:54:20 🏷 Node name: Nicolas80-GTest-archive
duniter-gtest-archive | 2026-01-10 08:54:20 👤 Role: FULL
duniter-gtest-archive | 2026-01-10 08:54:20 💾 Database: ParityDb at /var/lib/duniter/chains/gtest/paritydb/full
duniter-gtest-archive | 2026-01-10 08:54:21 Creating transaction pool txpool_type=ForkAware ready=Limit { count: 8192, total_bytes: 20971520 } future=Limit { count: 819, total_bytes: 2097152 }
duniter-gtest-archive | 2026-01-10 08:54:23 Local node identity is: 12D3KooWJRtTHx39h2sgMgWkMt1iAu9H4Pd7WTvfS2bCk6JuujSN
duniter-gtest-archive | 2026-01-10 08:54:23 Running litep2p network backend
duniter-gtest-archive | 2026-01-10 08:54:23 💻 Operating system: linux
duniter-gtest-archive | 2026-01-10 08:54:23 💻 CPU architecture: aarch64
duniter-gtest-archive | 2026-01-10 08:54:23 💻 Target environment: gnu
duniter-gtest-archive | 2026-01-10 08:54:23 💻 Memory: 23977MB
duniter-gtest-archive | 2026-01-10 08:54:23 💻 Kernel: 6.14.0-1018-oracle
duniter-gtest-archive | 2026-01-10 08:54:23 💻 Linux distribution: Debian GNU/Linux 11 (bullseye)
duniter-gtest-archive | 2026-01-10 08:54:23 💻 Virtual machine: no
duniter-gtest-archive | 2026-01-10 08:54:23 📦 Highest known block at #1310312
duniter-gtest-archive | 2026-01-10 08:54:23 Running JSON-RPC server: addr=0.0.0.0:9944,[::]:42183
duniter-gtest-archive | 2026-01-10 08:54:23 ***** Duniter has fully started *****
duniter-gtest-archive | 2026-01-10 08:54:23 〽 Prometheus exporter started at 127.0.0.1:9615
duniter-gtest-archive | 2026-01-10 08:54:24 maintain txs=(0, 0) a=1 i=0 views=[(1310312, 0, 0)] event=Finalized { hash: 0x02bacb2360ae6f50463718dae498298e9ff5ee7c161cb3c1737049ed85e5a306, tree_route: [] } duration=76.96µs
duniter-gtest-archive | 2026-01-10 08:54:25 maintain txs=(0, 0) a=1 i=1 views=[(1310313, 0, 0)] event=NewBestBlock { hash: 0x7f8c026238daf34d7be4de02ba2fba44efeb711bec68157a98535d6f68d53d06, tree_route: None } duration=197.442µs
duniter-gtest-archive | 2026-01-10 08:54:25 🏆 Imported #1310313 (0x8e84…2439 → 0x7f8c…3d06)
duniter-gtest-archive | 2026-01-10 08:54:25 🏆 Imported #1310314 (0x7f8c…3d06 → 0xc4fe…ff04)
...
```
>Important info from the logs
duniter-gtest-archive | Starting duniter with parameters: --name Nicolas80-GTest-archive --node-key-file /var/lib/duniter/node.key --public-addr /dns/archive.gtest.fr.brussels.ovh/tcp/443/wss --public-rpc wss://archive-rpc.gtest.fr.brussels.ovh/ --public-squid https://squid.gtest.fr.brussels.ovh/v1/graphql --listen-addr /ip4/0.0.0.0/tcp/30333/ws --rpc-cors all --state-pruning archive --blocks-pruning archive --chain gtest -d /var/lib/duniter --unsafe-rpc-external
...
Local node identity is: 12D3KooWJRtTHx39h2sgMgWkMt1iAu9H4Pd7WTvfS2bCk6JuujSN
>Creating test link for RPC access via PolkadotJS (add \"/ws\" at the end of the URL)
https://polkadot.js.org/apps/?rpc=wss://archive-rpc.gtest.fr.brussels.ovh/ws#/explorer
Since we exposed the RPC port *locally* on the host (via \"127.0.0.1:9955:9944\" in the compose file above), we can also connect directly, without going through the reverse proxy, from applications running on the host itself (for example `gcli`, where we could configure \"ws://127.0.0.1:9955\" as *duniter endpoint*).
It is also possible to use it with PolkadotJS, but then we need SSH port forwarding (and \"ws://\" instead of \"wss://\", since there is no SSL anymore).
##### Example for port forwarding *local* host RPC port with SSH
>This forwards local (client) port 9955 to remote host \"127.0.0.1\" and remote port 9955 (which is mapped to container port 9944 in the compose file above).
``` shell
ssh youruser@yourserver.brussels.ovh -L9955:127.0.0.1:9955
```
>Then we can connect like this (on the local computer where the SSH tunnel is running):
https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:9955
#### Testing IPFS/P2P Access (port 30333/ws)
>The IPFS daemon needs to be running first (start it if not already done) - and keep it running
``` shell
ipfs daemon
Initializing daemon...
Kubo version: 0.39.0
Repo version: 18
System version: amd64/linux
Golang version: go1.25.4
PeerID: 12D3KooWLPC5cwNS1WE3D2BRN19VKiFW1bSh8HiBDt6Gw2f49gJr
Swarm listening on 127.0.0.1:4001 (TCP+UDP)
Swarm listening on 172.17.0.1:4001 (TCP+UDP)
Swarm listening on 172.21.0.1:4001 (TCP+UDP)
Swarm listening on 192.168.42.42:4001 (TCP+UDP)
Swarm listening on [::1]:4001 (TCP+UDP)
Run 'ipfs id' to inspect announced and discovered multiaddrs of this node.
RPC API server listening on /ip4/127.0.0.1/tcp/5001
WebUI: http://127.0.0.1:5001/webui
Gateway server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready
```
>Crafting and executing the connect command
``` shell
ipfs swarm connect /dns/archive.gtest.fr.brussels.ovh/tcp/443/wss/p2p/12D3KooWJRtTHx39h2sgMgWkMt1iAu9H4Pd7WTvfS2bCk6JuujSN
>
connect 12D3KooWJRtTHx39h2sgMgWkMt1iAu9H4Pd7WTvfS2bCk6JuujSN success
```
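The multiaddr used above is simply the node's `DUNITER_PUBLIC_ADDR` with `/p2p/<peer-id>` appended (the peer ID being the "Local node identity" from the startup logs); sketched in shell:

``` shell
PUBLIC_ADDR=/dns/archive.gtest.fr.brussels.ovh/tcp/443/wss     # DUNITER_PUBLIC_ADDR from the compose file
PEER_ID=12D3KooWJRtTHx39h2sgMgWkMt1iAu9H4Pd7WTvfS2bCk6JuujSN   # "Local node identity" from the logs
# The resulting multiaddr to pass to `ipfs swarm connect`
echo "${PUBLIC_ADDR}/p2p/${PEER_ID}"
```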
If we want to try different ways of accessing the node (such as a direct mapping of port 30333/ws on the server), we should restart the IPFS daemon first, as it would otherwise keep the previous connections!
In this case, port 30333 was NOT exposed (you can check the `ports` section of the compose file above).
>But if that port were open, say like this:
``` yaml
ports:
# # (Private) Prometheus (for monitoring)
# # can't seem to add it as a data source in grafana :-/
# - 127.0.0.1:9617:9615
# - 127.0.0.1:9946:9944
# - 127.0.0.1:30335:30333
- 0.0.0.0:30335:30333 # <--- HERE
- 127.0.0.1:9955:9944
```
We can then adapt the address to use `ws` instead of `wss`, and port 30335 instead of 443 (the default for https).
>Then we could connect like this:
``` shell
# Reusing same dns name (but we won't be using the reverse proxy this time)
ipfs swarm connect /dns/archive.gtest.fr.brussels.ovh/tcp/30335/ws/p2p/12D3KooWJRtTHx39h2sgMgWkMt1iAu9H4Pd7WTvfS2bCk6JuujSN
>
connect 12D3KooWJRtTHx39h2sgMgWkMt1iAu9H4Pd7WTvfS2bCk6JuujSN success
```
Side note: it is not necessary to expose the P2P/IPFS port publicly, since we configured the reverse proxy - this last part was just to illustrate how to connect directly if needed.
### Smith node
Same as before, from `dockge` web UI, we can create a new stack for our Duniter v2s Smith node.
You can give for stack name something like `duniter-gtest-smith` (which will create a folder with that name in the stacks folder defined in **Dockge**).
>Then we edit the `compose.yaml` file in the UI (which will be created in that folder)
``` yaml
services:
duniter-gtest-smith:
image: duniter/duniter-v2s-gtest-1100:1000-0.12.0
container_name: duniter-gtest-smith
restart: unless-stopped
ports:
# (Private) Prometheus (for monitoring)
# can't seem to add it as a data source in grafana :-/
#- 127.0.0.1:9635:9615
- 127.0.0.1:9964:9944 # RPC port - CANNOT be exposed publicly for Smith nodes! (don't use something else than "127.0.0.1:xxxx:9944" here)
#- 127.0.0.1:30353:30333 # P2P/IPFS port exposed with reverse proxy (only)
volumes:
- data-validator:/var/lib/duniter/
environment:
# gdev, gtest or g1
- DUNITER_CHAIN_NAME=gtest
- DUNITER_VALIDATOR=true
- DUNITER_PRUNING_PROFILE=light # <--- stays light
- DUNITER_NODE_NAME=Nicolas80-GTest-smith
- DUNITER_PUBLIC_ADDR=/dns/smith.gtest.fr.brussels.ovh/tcp/443/wss
- DUNITER_LISTEN_ADDR=/ip4/0.0.0.0/tcp/30333/ws
logging:
driver: json-file
options:
max-size: 100m # rotate when the active log reaches ~100 MB
max-file: "5" # keep at most 5 files TOTAL (active + rotated)
networks:
dockge_dockge_net: null
distance-oracle:
# Should be same image as above !
image: duniter/duniter-v2s-gtest-1100:1000-0.12.0
container_name: duniter-gtest-distance-oracle
entrypoint: docker-distance-entrypoint # other entrypoint
environment:
ORACLE_RPC_URL: ws://duniter-gtest-smith:9944 # container_name from SMITH service above
# Apparently it doesn't really matter - it would create that directory...
ORACLE_RESULT_DIR: /var/lib/duniter/chains/gtest/distance/ # should match network
      #This one needs to be low for now for ĞTest...
ORACLE_EXECUTION_INTERVAL: 10 # <--- should be adjusted based on network
#Optional ones:
ORACLE_MAX_DEPTH: "5"
ORACLE_LOG_LEVEL: debug
volumes:
- data-validator:/var/lib/duniter/ # use same volume
logging:
driver: json-file
options:
max-size: 100m # rotate when the active log reaches ~100 MB
max-file: "5" # keep at most 5 files TOTAL (active + rotated)
networks:
dockge_dockge_net: null
# OPTIONAL: The presence of a profile can prevent running this service "distance-oracle" by default.
# Recommendation: Better to keep it commented out so that the distance oracle is started automatically
#
# When configuring a profile it needs to be specifically activated when starting the stack, like
# $ docker compose --profile oracle --profile other_profile up -d
#profiles:
# - oracle
volumes:
data-validator: null
# Using the same shared docker network `dockge_dockge_net` with all
# docker container stacks that we want to expose via that reverse proxy.
networks:
dockge_dockge_net:
external: true
```
#### Reverse Proxy configuration (within NGinx Proxy Manager)
Please adapt the dns names below with your own domain names.
In the Proxy Hosts, we use the `container_name` and *internal* port values defined in the compose file above to refer to the container we want to expose (**this works because they are on the same docker network**).
⚠️ RPC Access (port 9944/ws) should **NOT** be mapped for **SMITH** nodes ⚠️
It should only be used locally on the host (for example with `gcli`) to perform smith operations.
>P2P/IPFS Access (port 30333/ws)
Domain Names: smith.gtest.fr.brussels.ovh
Scheme: http
#Using "container_name"
Forward Hostname / IP: duniter-gtest-smith
Forward Port: 30333
Websockets Support: Enabled
SSL Certificate: Let's Encrypt (Force SSL: Enabled)
#### Testing RPC Access
This can only be tested locally on the remote host (for example with `gcli`), or via an SSH tunnel forwarding local port 9964 to the remote host (e.g. `ssh youruser@yourserver.brussels.ovh -L9964:127.0.0.1:9964`, as in the earlier examples).
When configuring `gcli` on the remote host, we can use \"ws://127.0.0.1:9964\" as *duniter endpoint*.
#### Testing IPFS/P2P Access (port 30333/ws)
The same as for the Archive node above; just adapt the dns name to the one defined for the Smith node.
You need to retrieve the proper `node identity` from the Smith node startup logs (see explanation from Archive node if needed).
>In my case I had
duniter-gtest-smith | 2026-01-10 16:32:07 Local node identity is: 12D3KooWF4rQUkmpcedfLfYJUXYAEfxNc2wvUjsiyJRjZfLKDZRg
>And we can test with the `ipfs` command:
``` shell
ipfs swarm connect /dns/smith.gtest.fr.brussels.ovh/tcp/443/wss/p2p/12D3KooWF4rQUkmpcedfLfYJUXYAEfxNc2wvUjsiyJRjZfLKDZRg
>
connect 12D3KooWF4rQUkmpcedfLfYJUXYAEfxNc2wvUjsiyJRjZfLKDZRg success
```
### Squid node (Indexer)
Same as before, from `dockge` web UI, we can create a new stack for our Duniter v2s Squid node.
You can give for stack name something like `duniter-gtest-squid` (which will create a folder with that name in the stacks folder defined in **Dockge**).
>Then we edit the `compose.yaml` file in the UI (which will be created in that folder)
``` yaml
services:
  # squid processor reading data from the duniter archive node (RPC_ENDPOINT) and putting it in the db
processor:
image: duniter/squid-app-gtest:0.5.5 # Currently issue with 0.5.6 image
container_name: duniter-gtest-squid-processor
restart: unless-stopped
depends_on:
db:
condition: service_healthy
healthcheck:
test:
- CMD-SHELL
- pgrep -f 'lib/main.js'
interval: 5s
timeout: 2s
retries: 3
environment:
- DB_NAME=${DB_NAME}
- DB_PORT=5432
- DB_HOST=db
- DB_PASS=${DB_PASSWORD}
- RPC_ENDPOINT=${RPC_ENDPOINT}
command:
- sqd
- process:prod
networks:
- default # <--- allows the processor to connect to the postgres database & the server
- dockge_dockge_net # <--- allows the processor to connect to the duniter archive node
# postgres database with LDS support
db:
image: duniter/squid-postgres-gtest:0.5.5 # Currently issue with 0.5.6 image
container_name: duniter-gtest-squid-db
restart: unless-stopped
ports:
- 127.0.0.1:5432:5432
volumes:
- postgres-data:/var/lib/postgresql/data
healthcheck:
test:
- CMD-SHELL
- pg_isready -U postgres -d ${DB_NAME}
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
environment:
POSTGRES_DB: ${DB_NAME}
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_USER: postgres
# Faster startup
POSTGRES_INITDB_ARGS: --auth-host=md5
# LDS Performance tuning for GraphQL live subscriptions
LIVE_THROTTLE: 500 # 500ms throttle
LD_WAIT: 200 # 200ms wait before processing changes
command:
- postgres
- -c
- wal_level=logical
- -c
- max_replication_slots=4
- -c
- max_wal_senders=4
- -c
- log_min_messages=FATAL
- -c
- log_replication_commands=off
- -c
- log_statement=none
- -c
- log_min_duration_statement=1000
# PostGraphile GraphQL server
server:
image: duniter/squid-graphile-gtest:0.5.5 # Currently issue with 0.5.6 image
container_name: duniter-gtest-squid-server
restart: unless-stopped
depends_on:
processor:
condition: service_healthy
ports:
- ${GRAPHQL_LISTEN_PORT:-8080}:5678
environment:
DATABASE_URL: postgres://postgres:${DB_PASSWORD}@db:5432/${DB_NAME}
NODE_ENV: production
PORT: 5678
DB_HOST: db
DB_PORT: 5432
DB_NAME: ${DB_NAME}
DB_PASSWORD: ${DB_PASSWORD}
networks:
- default # <--- allows the server to connect to DB and processor
- dockge_dockge_net # <--- allows the server to be accessible from NGinx
volumes:
postgres-data: null
# Using the same shared docker network `dockge_dockge_net` with all
# docker container stacks that we want to expose via that reverse proxy.
networks:
dockge_dockge_net:
external: true
```
For this stack, we also need to fill in some environment variables in `.env` just below the `compose.yaml` editor in Dockge web UI.
>Environment variables to define in `.env` file
``` shell
# postgres
DB_NAME=squid
DB_PASSWORD=CHANGE_ME_DB_PASSWORD
# graphile
GRAPHQL_LISTEN_PORT=8081
GRAPHQL_ADMIN_SECRET=CHANGE_ME_GRAPHQL_PASSWORD
# Duniter endpoint
# Using dockge_dockge_net docker network; we use `container_name` and internal ports from our Archive node
# Squid needs an Archive node to work properly
RPC_ENDPOINT=ws://duniter-gtest-archive:9944
```
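As a sanity check, these variables produce the `DATABASE_URL` that the `server` service builds in the compose file above (a sketch using the placeholder values; `db` is the database container's hostname on the stack's default network):

``` shell
DB_NAME=squid
DB_PASSWORD=CHANGE_ME_DB_PASSWORD
# Same shape as DATABASE_URL in the compose file
echo "postgres://postgres:${DB_PASSWORD}@db:5432/${DB_NAME}"
```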
There are potentially more environment variables that can be defined; please check the official documentation of Duniter Squid for more details.
#### Reverse Proxy configuration (within NGinx Proxy Manager)
Please adapt the dns names below with your own domain names.
In the Proxy Hosts, we use the `container_name` and *internal* port values defined in the compose file above to refer to the container we want to expose (**this works because they are on the same docker network**).
>Server Access (port 5678)
Domain Names: squid.gtest.fr.brussels.ovh
Scheme: http
#Using "container_name"
Forward Hostname / IP: duniter-gtest-squid-server
Forward Port: 5678
Websockets Support: Enabled
SSL Certificate: Let's Encrypt (Force SSL: Enabled)
#### Testing GraphQL Access
You can access the \"GraphiQL\" interface to test the GraphQL API directly from your web browser.
<https://squid.gtest.fr.brussels.ovh/graphiql>
In there, you can execute a simple query to check the version of the server
``` graphql
query {
version {
type
version
}
}
```
Another query (a subscription this time) for the last known block
``` graphql
subscription lastBlock {
blocks(last: 1, orderBy: HEIGHT_ASC) {
nodes {
hash
height
}
}
}
```
>And the access URL for the other applications has \"/v1/graphql\" at the end:
<https://squid.gtest.fr.brussels.ovh/v1/graphql>
It can also be accessed **\"locally\"** from the host, since we mapped internal port 5678 to host port 8081 (for example with `gcli`, where we could configure \"http://127.0.0.1:8081/v1/graphql\" as *indexer endpoint*).
##### Manual testing with `curl`
>For the \"/v1/graphql\" endpoint, it's possible to test with `curl` like this:
curl -X POST https://squid.gtest.fr.brussels.ovh/v1/graphql \
-H "Content-Type: application/json" \
-d '{"query":"query { version { type version } }"}'
>
{"data":{"version":{"type":"graphile","version":"0.5.6"}}}%
## What's next?
Now you can basically install any docker compose stack you want on this server, as long as you make sure to use the same shared docker network `dockge_dockge_net`, so that the reverse proxy can reach the containers.