BeeCR Docker Compose configuration
This section of the documentation covers running BeeCR on-premises (i.e., on your own local or virtual machine) via Docker.
⚠️ Note: If you are looking for a comprehensive step-by-step guide on how to deploy on-premises, please take a look at this tutorial.
📄 Prerequisites
To deploy the solution on a server, the following is required:
- A Linux-based server (ideally Ubuntu or Debian).
- A graphics card with at least 24 GB of memory (40 GB recommended).
- NVIDIA driver.
Hint: You can verify that the driver is present and working by running a command such as `nvidia-smi` (it should list the available GPUs).
- Docker along with Docker Compose.
- NVIDIA Container Toolkit.
🚀 Quick start
- Download the `on-premises/compose.yml` file to any location you prefer.
- Place your license file (provided to you by our team) in the same directory where `compose.yml` is located.
- Run the following command in the directory containing `compose.yml` to start up the AI and API containers in daemon mode:
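A typical invocation, assuming a recent Docker installation with the Compose plugin (use `docker-compose` instead if you have the standalone Compose v1 binary):

```shell
# Start the containers defined in compose.yml in detached (daemon) mode
docker compose up -d
```

You can check that the containers are running with `docker compose ps` and follow their logs with `docker compose logs -f`.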
🛠 Advanced configuration
Consider reading "Getting started" and the Docker Compose documentation if you are not familiar with Docker Compose.
External API port
By default, the provided Docker Compose configuration exposes the API on port `8000`. Replace `8000` with the port you prefer.
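The relevant part of the configuration looks roughly like this (the service name `beecr-api` is illustrative; match it to your downloaded `compose.yml`):

```yaml
services:
  beecr-api:
    ports:
      - "8000:8000"  # host:container — change the left-hand value to rebind the host port
```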
License file
The provided Docker Compose configuration assumes that the license file is placed in the same directory where `compose.yml` is located. If you wish to keep it somewhere else, simply change `./beecr.lic` to the appropriate path on your file system or Docker volume.
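As a sketch, the license file is bind-mounted into the container along these lines (both the service name and the in-container path are illustrative; check your `compose.yml` for the actual values):

```yaml
services:
  beecr-api:
    volumes:
      - ./beecr.lic:/app/beecr.lic  # host path : container path (container path is illustrative)
```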
Here is the relevant Docker documentation: Volumes, Bind Mounts.
GitLab host
By default, the provided Docker Compose configuration sets the target GitLab host to `https://gitlab.com` via the `GITLAB_HOST` environment variable:
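A minimal sketch of the setting (the service name `beecr-api` is illustrative):

```yaml
services:
  beecr-api:
    environment:
      GITLAB_HOST: https://gitlab.com  # point this at your self-hosted instance if needed
```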
If you are using a self-hosted GitLab instance, set this variable accordingly.
However, this option takes effect only when incoming HTTP API requests do not explicitly specify the GitLab host.
Therefore, if you intend to use the provided CI/CD components/templates, there is no need to redefine `GITLAB_HOST`: the CI/CD components/templates always pass the GitLab host in their requests.
GitLab token
In most use cases, the BeeCR API server will receive the GitLab access token via HTTP query parameters in each request for review.
However, in some cases, you may wish to set the default token on the server side.
To do so, prepare a GitLab project or group access token and set the `GITLAB_TOKEN` environment variable:
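A hedged sketch (the service name `beecr-api` and the token value are placeholders; never commit real tokens to version control):

```yaml
services:
  beecr-api:
    environment:
      GITLAB_TOKEN: glpat-xxxxxxxxxxxxxxxxxxxx  # placeholder project/group access token
```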
Ollama
The provided Docker Compose configuration assumes that AI model inference is performed via an Ollama-compatible API served by a sibling Docker container:
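A sketch of the corresponding setting, assuming the sibling AI container is named `beecr-ai` and serves Ollama on its default port `11434`:

```yaml
services:
  beecr-api:
    environment:
      OLLAMA_HOST: http://beecr-ai:11434  # 11434 is Ollama's default port
```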
Update the `OLLAMA_HOST` environment variable if you wish to run AI models on a different server.
If you are going to use OpenAI models instead, this option has no effect.
OpenAI
If you are going to use AI models via an OpenAI-compatible API, then you may wish to set `OPENAI_HOST` and/or `OPENAI_API_KEY`; otherwise these options have no effect:
- `OPENAI_HOST`: OpenAI-compatible API host. Change it if you are using OpenAI via a proxy or some third-party OpenAI-compatible API.
- `OPENAI_API_KEY`: Your OpenAI API key.
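An illustrative sketch of the two settings (the service name `beecr-api` is assumed; the key is a placeholder, so keep real keys out of version control):

```yaml
services:
  beecr-api:
    environment:
      OPENAI_HOST: https://api.openai.com  # change for proxies or third-party OpenAI-compatible APIs
      OPENAI_API_KEY: sk-...               # placeholder API key
```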
⚠️ Note: If you are going to use models via OpenAI (or an OpenAI-compatible API), you probably do not need to start the AI container locally. In that case, you can simply remove (or comment out) the entire `beecr-ai` section in the Docker Compose configuration.
Model
By default, the provided Docker Compose configuration sets the AI model via the `MODEL` environment variable.
This option takes effect only when incoming HTTP API requests do not explicitly specify the model.
Therefore, if you intend to use the provided CI/CD components/templates, there is no need to redefine `MODEL`: the CI/CD components/templates always pass the model in their requests.
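A sketch of the setting, using the Ollama model name mentioned below as an example (the service name `beecr-api` is illustrative):

```yaml
services:
  beecr-api:
    environment:
      MODEL: ollama/codestral:22b  # or e.g. gpt-4o for an OpenAI-compatible API
```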
⚠️ Note:
- For OpenAI-compatible APIs, use model names as-is, e.g. `gpt-4o`.
- For Ollama-compatible APIs, prefix model names with `ollama/`, e.g. `ollama/codestral:22b`.