Table of Contents
- Setup And Configuration
- Related Docs
- Setup Paths At A Glance
- What The Repo Ships Today
- Prerequisites
- Local Setup
- Running The Async Worker Outside Docker
- Running Maintenance
- Configuration Surface
  - Django And Paths
  - API Access Control
  - Storage
  - Translation Provider Runtime
    - Global provider selection
    - Per-task model overrides and thinking flags
  - Workflow And Delivery Settings
  - OpenSearch
  - Celery
  - Aspose
- Docker Compose
- Verification Commands
- Forgejo Actions CI
# Setup And Configuration
This document describes the checked-in setup paths and configuration surface for the current repository.
## Related Docs

## Setup Paths At A Glance
```mermaid
flowchart LR
    Start[Start here] --> Choice{How do you want to run the repo?}
    Choice --> Local[Local Django process]
    Choice --> Compose[Local Docker Compose stack]
    Local --> Env[Copy .env.example to .env]
    Env --> Sync[uv sync]
    Sync --> Deps[./docker/compose.sh up -d postgres redis minio minio-init opensearch opensearch-dashboards]
    Deps --> Migrate[uv run python manage.py migrate]
    Migrate --> Seed[uv run python manage.py seed_dev_data --lm-studio --with-dev-admin]
    Seed --> Run[uv run python manage.py runserver --noreload]
    Compose --> Up[./docker/compose.sh up --build]
    Up --> Services[web + worker + postgres + redis + opensearch + minio]
```
## What The Repo Ships Today
- Python application: Django 6 on Python 3.12
- Package manager and task runner: `uv`
- Default local database in `.env.example`: PostgreSQL at `127.0.0.1:5433`
- Default local storage backend in `.env.example`: MinIO-backed S3-compatible storage
- Default translation provider in `.env.example`: `mock`
- Default async setting in `.env.example`: `IRIS_ASYNC_WORKFLOW_ENABLED=true`
- Optional SQLite fallback still supported through `IRIS_DB_ENGINE=sqlite`
- Optional local Docker stack: Django web app, Celery worker, PostgreSQL, Redis, OpenSearch, OpenSearch Dashboards, MinIO, and a MinIO bucket init container
## Prerequisites
- Python 3.12
- `uv`
- Docker and Docker Compose for the recommended local setup
- Optional: Aspose license file at `config/licenses/Aspose.Words.Python.NET.lic`
## Local Setup
```shell
cp .env.example .env
uv sync
./docker/compose.sh up -d postgres redis minio minio-init opensearch opensearch-dashboards
uv run python manage.py migrate
uv run python manage.py seed_dev_data --lm-studio --with-dev-admin
uv run python manage.py runserver --noreload
```
Notes:
- `Iris_translation/env.py` loads variables from the project `.env` file.
- the checked-in `.env.example` expects PostgreSQL on `127.0.0.1:5433`, Redis on `127.0.0.1:6380`, MinIO on `127.0.0.1:9000`, and OpenSearch on `127.0.0.1:9200`
- `--noreload` is important for long-running synchronous jobs because Django's auto-reloader restarts the process on Python file changes.
- with `.env.example` copied unchanged, the recommended local path is already async and uses the Dockerized dependency stack
- `/console/` uses Django session auth. `seed_dev_data --with-dev-admin` creates a local-only bootstrap login: `dev-admin` / `dev-password-123`.
- the repository includes sample `.docx` files under `dataset/` for quick console smoke tests.
- shared PyCharm run configs under `.run/` cover dependency startup, migrate, seed, backend, worker, and a compound local async stack
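The dotenv loading described above can be sketched as a minimal parser. This is an illustrative stand-in, not the actual `Iris_translation/env.py` implementation; the real module may differ in parsing rules and precedence:

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Load KEY=VALUE pairs from a dotenv-style file into os.environ.

    setdefault means variables already exported in the shell win over
    anything written in the file.
    """
    try:
        with open(path, encoding="utf-8") as fh:
            for raw in fh:
                line = raw.strip()
                # Skip blanks, comments, and lines without an assignment.
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # a missing .env is fine; fall back to the real environment
```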
## Running The Async Worker Outside Docker
To exercise the Celery path with the recommended host-mode setup:
```shell
uv run celery -A Iris_translation worker --queues=job_control,docx_extract,retrieve_context,translate_batch,qa_verify,review_io,docx_reassemble,maintenance
```

The checked-in worker startup script used by Docker adds `--pool=solo --concurrency=1`.
## Running Maintenance
The repo now ships a maintenance command and a matching Celery task:
```shell
uv run python manage.py maintenance_tick --dry-run
uv run python manage.py maintenance_tick
```

The command runs the same logic as `tasks.workflow.maintenance_tick` and currently covers:
- expired artifact cleanup
- stale non-promoted candidate-memory cleanup
- review-coverage snapshot refresh
- integrity checks for missing artifact objects and completed jobs missing delivery artifacts
## Configuration Surface

### Django And Paths
| Variable | Current meaning | Default / example |
|---|---|---|
| `DJANGO_SECRET_KEY` | Django secret key | `change-me` in `.env.example` |
| `DJANGO_DEBUG` | Debug mode | `true` |
| `DJANGO_ALLOWED_HOSTS` | Allowed hosts list | `*` |
| `DJANGO_TIME_ZONE` | Django and Celery timezone | `UTC` |
| `IRIS_DB_ENGINE` | `postgresql` or `sqlite` | `postgresql` |
| `IRIS_DB_NAME` | PostgreSQL database name | `iris_translation` |
| `IRIS_DB_USER` | PostgreSQL user | `iris` |
| `IRIS_DB_PASSWORD` | PostgreSQL password | `iris` |
| `IRIS_DB_HOST` | PostgreSQL host | `127.0.0.1` |
| `IRIS_DB_PORT` | PostgreSQL port | `5433` |
| `IRIS_DB_CONN_MAX_AGE` | PostgreSQL connection reuse in seconds | `60` |
| `IRIS_DB_PATH` | SQLite database path relative to repo root | `var/db.sqlite3` |
| `IRIS_MEDIA_ROOT` | Local storage root for local storage mode | `var/media` |
| `IRIS_API_ADMIN_GROUP` | Django group name that receives admin API privileges | `iris_admin` |
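As a rough illustration of how these variables could map onto a Django `DATABASES["default"]` entry. The helper name `build_database_config` is hypothetical and the checked-in `settings.py` is authoritative; only the variable names and defaults come from the table above:

```python
def build_database_config(env: dict[str, str]) -> dict:
    """Translate IRIS_DB_* variables into a Django DATABASES['default'] dict."""
    if env.get("IRIS_DB_ENGINE", "postgresql") == "sqlite":
        # SQLite fallback path, relative to the repo root.
        return {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": env.get("IRIS_DB_PATH", "var/db.sqlite3"),
        }
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": env.get("IRIS_DB_NAME", "iris_translation"),
        "USER": env.get("IRIS_DB_USER", "iris"),
        "PASSWORD": env.get("IRIS_DB_PASSWORD", "iris"),
        "HOST": env.get("IRIS_DB_HOST", "127.0.0.1"),
        "PORT": env.get("IRIS_DB_PORT", "5433"),
        "CONN_MAX_AGE": int(env.get("IRIS_DB_CONN_MAX_AGE", "60")),
    }
```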
### API Access Control
The checked-in API access model is:
- `/api/v1/*` requires authentication by default
- `/health/live`, `/health/ready`, and signed artifact downloads remain public
- Django session auth and HTTP Basic auth are both supported
- `operator` access is granted to any active authenticated user
- `admin` access is granted to staff, superusers, and users in `IRIS_API_ADMIN_GROUP`
The Django admin site remains an internal-only support surface. The supported operator workflows are the session-authenticated /console/ UI and the documented JSON API.
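The operator/admin rules above can be sketched as plain predicates. The `User` dataclass and both function names are hypothetical stand-ins for the real Django user checks, kept dependency-free for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class User:
    is_active: bool = True
    is_staff: bool = False
    is_superuser: bool = False
    groups: set[str] = field(default_factory=set)


def has_operator_access(user: User) -> bool:
    # Any active authenticated user gets operator access.
    return user.is_active


def has_admin_access(user: User, admin_group: str = "iris_admin") -> bool:
    # Staff, superusers, and members of IRIS_API_ADMIN_GROUP get admin access.
    return user.is_active and (
        user.is_staff or user.is_superuser or admin_group in user.groups
    )
```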
### Storage
| Variable | Current meaning | Default / example |
|---|---|---|
| `IRIS_STORAGE_BACKEND` | `local` or `s3` | `s3` |
| `IRIS_S3_ENDPOINT_URL` | Custom S3-compatible endpoint | `http://127.0.0.1:9000` |
| `IRIS_S3_ACCESS_KEY_ID` | S3 access key | `minioadmin` |
| `IRIS_S3_SECRET_ACCESS_KEY` | S3 secret | `minioadmin` |
| `IRIS_S3_BUCKET_NAME` | Artifact bucket name | `iris-artifacts` |
| `IRIS_S3_REGION` | S3 region | `us-east-1` |
| `IRIS_S3_USE_PATH_STYLE` | Path-style addressing toggle | `true` |
When `IRIS_STORAGE_BACKEND=s3`, `settings.py` switches Django's default storage to `storages.backends.s3boto3.S3Boto3Storage`.
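A minimal sketch of that switch, assuming a helper that maps the variable to a storage class path (the real `settings.py` may express this differently; the `local` branch's `FileSystemStorage` fallback is an assumption):

```python
def resolve_default_storage(env: dict[str, str]) -> str:
    """Pick Django's default storage class from IRIS_STORAGE_BACKEND."""
    if env.get("IRIS_STORAGE_BACKEND", "s3") == "s3":
        # S3-compatible mode: MinIO locally, any S3 endpoint in general.
        return "storages.backends.s3boto3.S3Boto3Storage"
    # Local mode: files live under IRIS_MEDIA_ROOT on disk.
    return "django.core.files.storage.FileSystemStorage"
```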
### Translation Provider Runtime

#### Global provider selection
| Variable | Current meaning | Default / example |
|---|---|---|
| `IRIS_TRANSLATION_PROVIDER` | Global provider mode | `mock` |
| `IRIS_TRANSLATION_ENDPOINT` | OpenAI-compatible endpoint override | empty |
| `IRIS_TRANSLATION_MODEL` | OpenAI-compatible model name | empty |
| `IRIS_TRANSLATION_API_KEY` | Bearer token for provider | empty |
| `IRIS_TRANSLATION_TIMEOUT_SECONDS` | Request timeout | `300` in `.env.example` |
| `IRIS_TRANSLATION_TEMPERATURE` | Chat completion temperature | `0.0` |
| `IRIS_TRANSLATION_SYSTEM_PROMPT` | Optional system prompt override | empty |
| `IRIS_LM_STUDIO_ENDPOINT` | LM Studio OpenAI-compatible endpoint | `http://127.0.0.1:1234/v1` |
| `IRIS_LM_STUDIO_MODEL` | LM Studio model name | `qwen3.5-27b@q4_k_m` |
The global runtime supports:
- `mock`
- `openai_compatible`
- `lm_studio`

Provider profiles in the database also support `mock`, `openai_compatible`, and `private_gateway`.
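Global provider selection can be sketched as a small resolver over the variables above. `resolve_provider` is a hypothetical helper, not the repo's actual wiring; it only encodes the variable names and defaults documented in the table:

```python
def resolve_provider(env: dict[str, str]) -> dict[str, str]:
    """Resolve the global translation provider, endpoint, and model."""
    provider = env.get("IRIS_TRANSLATION_PROVIDER", "mock")
    if provider == "lm_studio":
        return {
            "provider": provider,
            "endpoint": env.get("IRIS_LM_STUDIO_ENDPOINT", "http://127.0.0.1:1234/v1"),
            "model": env.get("IRIS_LM_STUDIO_MODEL", "qwen3.5-27b@q4_k_m"),
        }
    if provider == "openai_compatible":
        return {
            "provider": provider,
            "endpoint": env.get("IRIS_TRANSLATION_ENDPOINT", ""),
            "model": env.get("IRIS_TRANSLATION_MODEL", ""),
        }
    # mock mode needs no endpoint or model.
    return {"provider": "mock", "endpoint": "", "model": ""}
```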
#### Per-task model overrides and thinking flags
| Variable | Current meaning |
|---|---|
| `IRIS_MODEL_TRANSLATION` | Overrides the model used for translation tasks |
| `IRIS_MODEL_SUMMARIZATION` | Overrides the model used for summarization tasks |
| `IRIS_MODEL_ENTITY` | Overrides the model used for entity extraction tasks |
| `IRIS_ENABLE_THINKING` | Global reasoning/thinking default |
| `IRIS_ENABLE_THINKING_TRANSLATION` | Translation-specific thinking toggle |
| `IRIS_ENABLE_THINKING_SUMMARIZATION` | Summarization-specific thinking toggle |
| `IRIS_ENABLE_THINKING_ENTITY` | Entity-extraction-specific thinking toggle |
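A hedged sketch of how these overrides might resolve, assuming task-specific values take precedence over the globals (that precedence, the fallback to `IRIS_TRANSLATION_MODEL`, and both helper names are assumptions, not confirmed repo behavior):

```python
def model_for_task(env: dict[str, str], task: str) -> str:
    """Per-task model override with fallback to the global model.

    task is one of "translation", "summarization", "entity".
    """
    override = env.get(f"IRIS_MODEL_{task.upper()}", "")
    return override or env.get("IRIS_TRANSLATION_MODEL", "")


def thinking_enabled(env: dict[str, str], task: str) -> bool:
    """Task-specific thinking flag, falling back to IRIS_ENABLE_THINKING."""
    specific = env.get(f"IRIS_ENABLE_THINKING_{task.upper()}")
    value = specific if specific is not None else env.get("IRIS_ENABLE_THINKING", "false")
    return value.strip().lower() in {"1", "true", "yes"}
```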
### Workflow And Delivery Settings
| Variable | Current meaning | Default / example |
|---|---|---|
| `IRIS_ASYNC_WORKFLOW_ENABLED` | Enables Celery dispatch from intake/review flows | `true` |
| `IRIS_TRANSLATION_BATCH_SIZE` | Translation batch size | `10` in `.env.example` |
| `IRIS_ARTIFACT_DOWNLOAD_TTL_SECONDS` | Artifact link TTL | `300` |
| `IRIS_ARTIFACT_DOWNLOAD_SALT` | Django signing salt for local artifact download links | `iris-artifact-download` |
| `IRIS_CANDIDATE_MEMORY_RETENTION_DAYS` | Retention window for non-promoted candidate memory tied to terminal jobs | `30` |
| `IRIS_MAINTENANCE_INTEGRITY_SAMPLE_LIMIT` | Maximum integrity-issue examples returned in one maintenance report | `25` |
`settings.py` falls back to a batch size of 25, but `.env.example` overrides it to 10 when copied unchanged.
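The batching itself is straightforward slicing. A sketch of how `IRIS_TRANSLATION_BATCH_SIZE` could be applied to a segment list (the helper name is hypothetical):

```python
def chunk_segments(segments: list[str], batch_size: int = 10) -> list[list[str]]:
    """Split translation segments into batches of at most batch_size."""
    if batch_size < 1:
        raise ValueError("batch_size must be positive")
    # Slicing past the end of the list is safe, so the last batch is short.
    return [segments[i:i + batch_size] for i in range(0, len(segments), batch_size)]
```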
### OpenSearch
| Variable | Current meaning | Default / example |
|---|---|---|
| `IRIS_OPEN_SEARCH_ENABLED` | Enables OpenSearch-backed glossary retrieval and readiness checks | `true` |
| `IRIS_OPEN_SEARCH_URL` | OpenSearch base URL | `http://127.0.0.1:9200` |
| `IRIS_OPEN_SEARCH_INDEX` | Glossary index name | `iris-translation-glossary` |
| `IRIS_OPEN_SEARCH_TIMEOUT_SECONDS` | OpenSearch request timeout | `5` |
When enabled, only glossary retrieval moves to OpenSearch. Approved and candidate memory retrieval remain database-backed.
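A sketch of the kind of query body a glossary lookup might send to the `iris-translation-glossary` index. The `source_term` field name and the fuzziness setting are assumptions, not the repo's actual index mapping:

```python
def glossary_query(term: str, size: int = 5) -> dict:
    """Build an OpenSearch match-query body for glossary retrieval."""
    return {
        "size": size,
        # Fuzzy matching tolerates minor spelling variation in source terms.
        "query": {"match": {"source_term": {"query": term, "fuzziness": "AUTO"}}},
    }
```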
### Celery
| Variable | Current meaning | Default / example |
|---|---|---|
| `CELERY_BROKER_URL` | Celery broker URL | `redis://127.0.0.1:6380/0` |
| `CELERY_RESULT_BACKEND` | Result backend | `redis://127.0.0.1:6380/1` |
| `CELERY_TASK_ALWAYS_EAGER` | Run Celery tasks eagerly in-process | `false` |
| `CELERY_TASK_EAGER_PROPAGATES` | Re-raise eager task exceptions | `true` |
### Aspose
| Variable | Current meaning | Default / example |
|---|---|---|
| `ASPOSE_WORDS_ENABLED` | Enables Aspose readiness/runtime usage | `true` |
| `ASPOSE_WORDS_LICENSE_PATH` | License file path | `config/licenses/Aspose.Words.Python.NET.lic` |
| `DOTNET_SYSTEM_GLOBALIZATION_INVARIANT` | .NET globalization flag set during env setup | `1` |
## Docker Compose

The checked-in local Compose stack is defined in `compose.yml` and started through `docker/compose.sh`.
```shell
./docker/compose.sh up --build
./docker/compose.sh ps
./docker/compose.sh down
```
`docker/compose.sh` sets `COMPOSE_DISABLE_ENV_FILE=1` before calling `docker compose`, so Compose does not auto-load a local project-root `.env`.
### Compose services

- `web`
- `worker`
- `postgres`
- `redis`
- `opensearch`
- `opensearch-dashboards`
- `minio`
- `minio-init`
### Compose overrides
Compared with .env.example, the checked-in Compose stack overrides these runtime choices:
- points the web and worker containers at the bundled `postgres` service on port `5432`
- points the broker and result backend at the bundled `redis` service
- points S3-compatible storage at the bundled `minio` service
- points OpenSearch retrieval at the bundled `opensearch` service
### Local service URLs

- Web app: `http://127.0.0.1:8000`
- PostgreSQL: `127.0.0.1:5433`
- Redis: `127.0.0.1:6380`
- OpenSearch Dashboards: `http://127.0.0.1:5601`
- MinIO API: `http://127.0.0.1:9000`
- MinIO Console: `http://127.0.0.1:9001`
## Verification Commands
```shell
uv run ruff check .
uv run ruff format --check .
uv run python manage.py check
uv run python manage.py makemigrations --check
uv run python manage.py maintenance_tick --dry-run
uv run python manage.py test tests
```
## Forgejo Actions CI

The main repository now ships a Forgejo Actions workflow at `.forgejo/workflows/ci.yml`.
Current assumptions:
- the workflow targets a runner label named `iris-ci`
- that label should resolve to a Docker image built from `docker/ci-runner.Dockerfile`
- the runner image exists to keep the CI runtime aligned with the repo's Aspose requirement for an OpenSSL 1.1-compatible base
Example image build and push:
```shell
docker build -f docker/ci-runner.Dockerfile -t registry.example.com/iris-translation/forgejo-runner:latest .
docker push registry.example.com/iris-translation/forgejo-runner:latest
```
Register the runner label to that image, for example:
```
iris-ci:docker://registry.example.com/iris-translation/forgejo-runner:latest
```
The workflow runs these commands:
- `uv sync --frozen`
- `uv run ruff format --check .`
- `uv run ruff check .`
- `uv run python manage.py check`
- `uv run python manage.py makemigrations --check`
- `uv run python manage.py maintenance_tick --dry-run`
- `uv run python manage.py test tests`
Full test execution requires the Aspose license file. The workflow accepts either:
- a pre-provisioned or mounted file at `config/licenses/Aspose.Words.Python.NET.lic` in the checked-out workspace
- a Forgejo Actions secret named `ASPOSE_WORDS_LICENSE_BASE64`
To populate the secret from a Linux shell:
```shell
base64 -w 0 config/licenses/Aspose.Words.Python.NET.lic
```
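Inside the CI job, that secret can be decoded back into the documented license path before tests run. This is a sketch of such a step; only the license path comes from this document, and the exact workflow step shape is an assumption:

```shell
# Recreate the Aspose license file from the base64-encoded Actions secret.
mkdir -p config/licenses
printf '%s' "$ASPOSE_WORDS_LICENSE_BASE64" | base64 -d \
  > config/licenses/Aspose.Words.Python.NET.lic
```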
For the current runtime shape behind these configuration options, see Runtime Architecture. For queueing and job-stage behavior, see Jobs And Workflow.