Be careful when implementing Next.js SSR in a Docker environment.

Seito Horiguchi
3 min read · Nov 15, 2023

I was implementing SSR for Next.js in a Docker environment and got really stuck in the middle, so I’m leaving the solution as a reminder.
This may be useful if the following applies to you:

  • You are using Docker, with separate containers for the front end and back end
  • You are implementing SSR or SSG in Next.js

My development environment is as follows:

  • Front-end: Next.js (v13.1.6)
  • Back-end: Python, Django REST framework (v3.13.1), PostgreSQL (v14)

Separate Docker containers are used for the frontend and the backend, with Nginx deployed on the backend side.

Background

We developed without SSR until partway through the project, but additional feature requirements meant the top page had to be rendered with SSR.
The frontend was developed on localhost:3000 and the backend on localhost:8000, and we added the following code:

import axios from "axios";

export async function getServerSideProps(context: any) {
  const { data } = await axios.get("http://localhost:8000/api/foo/");
  return {
    props: {
      data: data,
    },
  };
}

After adding this, localhost could no longer be accessed and returned a 500 error.
The error message was as follows:

connect ECONNREFUSED 127.0.0.1:8000

After a lot of research, it turned out there were two causes, each resolved in its own step.

Cause and Solution 1

So, the first step.
First, when SSR is used in a Docker environment, the request runs inside the frontend container, so the API call must target the backend service's name and container-side port, not localhost on the host. Rewrite the API call destination accordingly.
For example, if you use Nginx on the backend, docker-compose.yml might contain the following:

version: "3.9"

services:
  app:
    ...(Omitted)

  db:
    ...(Omitted)

  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 8000:80
    ...(Omitted)

Here, the Nginx service is named nginx and listens on port 80 inside the container, so the call is rewritten as follows.

// Before 
const { data } = await axios.get("http://localhost:8000/api/foo/");

// After
const { data } = await axios.get("http://nginx:80/api/foo/");

This solves the first error.
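A related tip: the server side and the browser side of a Next.js app now need different base URLs (the container hostname vs. the host-mapped port), so it can help to centralize that choice in one helper. A minimal sketch, where INTERNAL_API_URL and NEXT_PUBLIC_API_URL are hypothetical environment-variable names:

```javascript
// Pick the API base URL depending on where the code runs.
// Server side (inside the frontend container): use the Docker service name.
// Browser side: use the port mapped to the host.
// INTERNAL_API_URL and NEXT_PUBLIC_API_URL are hypothetical variable names.
function apiBaseUrl() {
  const isServer = typeof window === "undefined";
  if (isServer) {
    return process.env.INTERNAL_API_URL || "http://nginx:80";
  }
  return process.env.NEXT_PUBLIC_API_URL || "http://localhost:8000";
}

console.log(apiBaseUrl() + "/api/foo/");
```

Since getServerSideProps always runs on the server, it would resolve to the container hostname there, while the same helper keeps client-side fetches pointed at localhost.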
When accessing localhost, the previous error message no longer appears, but now the following error occurs.

Error: getaddrinfo ENOTFOUND nginx

Cause and Solution 2

This error occurs because the two containers are on different Docker networks.
Therefore, we need to modify the container configuration so that the front end and the back end share the same network.

First, add the networks setting to the backend’s docker-compose.yml.

version: "3.9"

services:
  app:
    ...(Omitted)
    networks:
      - front
      - back

  db:
    ...(Omitted)
    networks:
      - back

  nginx:
    ...(Omitted)
    networks:
      - front

networks:
  front:
    external: false
  back:
    external: false

Here we define two networks, front and back: nginx connects to front, db connects to back, and app connects to both.
Putting containers on a shared network is what allows them to communicate with each other.
The external: false at the end indicates that the networks front and back are newly defined here.

If you want to use a network that is already defined by another container setup, specify that network's name and set external: true (more on this below).
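The Compose specification also supports a name key, which lets the key used inside this file differ from the actual Docker network name. A minimal sketch; some_existing_network stands in for whatever docker network ls reports:

```yaml
# Reference a pre-existing Docker network by its literal name.
networks:
  front:
    external: true
    name: some_existing_network  # replace with the real network name
```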

Then, add the networks setting to the front-end docker-compose.yml as well.

services:
  app:
    ...(Omitted)
    networks:
      - foo_front

networks:
  foo_front:
    external: true

First, let’s explain foo_front (this part is a bit complicated).
We defined the new networks front and back earlier, and since the frontend containers must join the same network, we configure them to use the existing one.

So we use the same network that nginx joined in the backend configuration.
Is the network name front, then? It is not.
By default, Docker Compose prefixes network names with the project name (the directory name), producing project_front.
So if the backend’s directory is named, for example, foo, the actual network name is foo_front.
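If you would rather not depend on the directory name, the project name can be pinned explicitly, either with the COMPOSE_PROJECT_NAME environment variable or, in recent Compose versions, the top-level name key. A sketch for the backend compose file:

```yaml
# Pin the Compose project name so the generated network is always
# foo_front, regardless of the directory name.
name: foo

services:
  app:
    ...(Omitted)
```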

You can also check the network name with this command.

docker network ls

Also, since here we are referencing a network that already exists, we set external: true in the networks configuration at the bottom of the frontend’s docker-compose.yml.

This solved the problem.
