Deploying Nuxt with Docker: env variables not registering and an unexpected API call?

I am re-re-re-reading the docs on environment variables and am a bit confused.

MWE repo: https://gitlab.com/SumNeuron/docker-nf

I made a plugin /plugins/axios.js which creates a custom axios instance:

import axios from 'axios'

const apiVersion = 'v0'
const api = axios.create({
  baseURL: `${process.env.PUBLIC_API_URL}/api/${apiVersion}/`
})

export default api

and accordingly added it to nuxt.config.js

import colors from 'vuetify/es5/util/colors'

import bodyParser from 'body-parser'
import session from 'express-session'
console.log(process.env.PUBLIC_API_URL)
export default {
  mode: 'spa',
  env: {
    PUBLIC_API_URL: process.env.PUBLIC_API_URL || 'http://localhost:6091'
  },
  //  ...
  plugins: [
    // ...
    '@/plugins/axios.js'
  ]
}

I set PUBLIC_API_URL to http://localhost:9061 in the .env file. Oddly, the log statement prints the correct value (port 9061), but when I try to reach the site there is an API call to port 6091 (the fallback).

System setup

project/
|-- backend/    (flask api)
|-- frontend/   (npx create-nuxt-app frontend)
|   |-- assets/
|   |-- ...
|   |-- plugins/
|   |   |-- axios.js
|   |-- restricted_pages/
|   |   |-- index.js    (see other note 3)
|   |-- ...
|   |-- nuxt.config.js
|   |-- Dockerfile
|-- .env
|-- docker-compose.yml
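(For context: docker-compose automatically reads the .env file sitting next to docker-compose.yml and uses it for variable substitution in the compose file; the bare `- PUBLIC_API_URL` entries under `environment:` below forward those host values into the containers. My .env looks roughly like this, with the backend values elided:)

```
PUBLIC_API_URL=http://localhost:9061
# HOST, REDIS_URL, PYTHONPATH also defined here (values omitted)
```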

Docker

docker-compose.yml

version: '3'

services:
  nuxt: # frontend
    image: frontend
    container_name: my_nuxt
    build:
      context: .
      dockerfile: ./frontend/Dockerfile
    restart: always
    ports:
      - "3000:3000"
    command: "npm run start"
    environment:
      - HOST
      - PUBLIC_API_URL

  flask: # backend
    image: backend
    container_name: my_flask
    build:
      context: .
      dockerfile: ./backend/Dockerfile
    command: bash deploy.sh

    environment:
      - REDIS_URL
      - PYTHONPATH
    ports:
      - "9061:9061"
    expose:
      - '9061'
    depends_on:
      - redis

  worker:
    image: backend
    container_name: my_worker
    command: python3 manage.py runworker
    depends_on:
      - redis
    environment:
      - REDIS_URL
      - PYTHONPATH

  redis: # for workers
    container_name: my_redis
    image: redis:5.0.3-alpine
    expose:
        - '6379'

Dockerfile

FROM node:10.15

ENV APP_ROOT /src

RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}


COPY ./frontend ${APP_ROOT}

RUN npm install

RUN npm run build

Other notes:

  1. The reason the site fails to load is that the new axios plugin (@/plugins/axios.js) makes an unexpected XHR call when the page loads, triggered by commons.app.js line 464. I do not know why; this call appears nowhere explicitly in my code.

  2. I see this warning:

    WARN Warning: connect.session() MemoryStore is not designed for a production environment, as it will leak memory, and will not scale past a single process.

I do not know what causes it or how to correct it.

  3. I have a "restricted" page, served by this server middleware:
import express from 'express'

// Create express router
const router = express.Router()

// Transform req & res to have the same API as express
// So we can use res.status() & res.json()
const app = express()
router.use((req, res, next) => {
  Object.setPrototypeOf(req, app.request)
  Object.setPrototypeOf(res, app.response)
  req.res = res
  res.req = req
  next()
})

// Add POST - /api/login
// (`username` and `password` are defined elsewhere in the file)
router.post('/login', (req, res) => {

  if (req.body.username === username && req.body.password === password) {
    req.session.authUser = { username }
    return res.json({ username })
  }
  res.status(401).json({ message: 'Bad credentials' })
})

// Add POST - /api/logout
router.post('/logout', (req, res) => {
  delete req.session.authUser
  res.json({ ok: true })
})

// Export the server middleware
export default {
  path: '/restricted_pages',
  handler: router

}

which is configured in nuxt.config.js as

serverMiddleware: [
    // body-parser middleware
    bodyParser.json(),
    // session middleware
    session({
      secret: 'super-secret-key',
      resave: false,
      saveUninitialized: false,
      cookie: { maxAge: 60000 }
    }),
    // Api middleware
    // We add /restricted_pages/login & /restricted_pages/logout routes
    '@/restricted_pages'
  ],

which uses the default axios module:

//store/index.js
import axios from 'axios'
import api from '@/plugins/axios.js'

//...

const actions = {
   async login(...) {
        // ....
        await axios.post('/restricted_pages/login', { username, password })
        // ....
    }
}

// ...
Asked by SumNeuron, Nov 19 '19

1 Answer

Since you are working in SPA mode, your environment variables need to be available at build time.
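To see why, it helps to know what Nuxt's `env` option does: at build time, webpack's DefinePlugin substitutes every occurrence of `process.env.PUBLIC_API_URL` in the client bundle with the literal value the variable has at that moment. A minimal sketch of the mechanism (plain Node to illustrate the idea, not the actual Nuxt internals):

```javascript
// Sketch of build-time env inlining, as done by webpack's DefinePlugin.
// The value is read when the bundle is BUILT, not when the container starts.
const buildTimeValue = process.env.PUBLIC_API_URL || 'http://localhost:6091';

// Source code as it appears in the plugin before bundling:
const source = "axios.create({ baseURL: `${process.env.PUBLIC_API_URL}/api/v0/` })";

// After bundling, the expression is gone, replaced by a string literal:
const bundled = source.replace(
  /process\.env\.PUBLIC_API_URL/g,
  JSON.stringify(buildTimeValue)
);

console.log(bundled);
```

If PUBLIC_API_URL is not set while `npm run build` runs (as in the question's Dockerfile), the fallback 'http://localhost:6091' is baked into the bundle, no matter what you pass to the container at run time.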

Defining them at `docker run` time is therefore already too late, and that is exactly what docker-compose's `environment` key amounts to.

So, to make these variables available at build time, define them in your Dockerfile with `ENV PUBLIC_API_URL http://localhost:9061`. If you would rather have docker-compose define them, pass them as build args, i.e. in your docker-compose:

nuxt:
  build:
    # ...
    args:
      PUBLIC_API_URL: http://localhost:9061

and in your Dockerfile, catch that arg and promote it to a build-time environment variable, like so:

ARG PUBLIC_API_URL
ENV PUBLIC_API_URL ${PUBLIC_API_URL}
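Putting both pieces into the question's Dockerfile, the ARG/ENV pair must appear before the `RUN npm run build` line, since that is the moment the value gets baked into the SPA bundle:

```dockerfile
FROM node:10.15

ENV APP_ROOT /src

RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}

COPY ./frontend ${APP_ROOT}

RUN npm install

# Receive the build arg from docker-compose and expose it to the build step
ARG PUBLIC_API_URL
ENV PUBLIC_API_URL ${PUBLIC_API_URL}

RUN npm run build
```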

If you don't want to hard-code the value in your docker-compose file, but rather use environment variables defined locally (i.e. on the machine where you launch the docker-compose command, for instance with `export PUBLIC_API_URL=http://localhost:9061`), you can reference them just as you would in a shell command, so your docker-compose ends up like this:

nuxt:
  build:
    # ...
    args:
      PUBLIC_API_URL: ${PUBLIC_API_URL}
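With that in place, a rebuild picks the value up from your shell (or from the .env file next to docker-compose.yml, which compose reads automatically for substitution). Roughly:

```
# The variable must be visible when the image is BUILT, not just when it runs
export PUBLIC_API_URL=http://localhost:9061

docker-compose build nuxt   # bakes the URL into the SPA bundle
docker-compose up -d
```

Remember that after changing PUBLIC_API_URL you must rebuild the image; restarting the container is not enough, because the value is frozen into the bundle at build time.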
Answered by Ghalnas, Sep 23 '22