alx-project-nexus

Project Nexus Documentation


Overview

This repository documents major learnings from the ProDev Backend Engineering program. It serves as a knowledge hub for backend technologies, concepts, challenges, and best practices covered during the program.

Database Schema (ERD)

The following Entity Relationship Diagram (ERD) shows the database schema for the Project Nexus e-commerce system:

[Image: Project Nexus E-commerce ERD]

Objectives

Key Technologies

Backend Concepts

Challenges and Solutions

Best Practices and Takeaways

Collaboration

This project encourages collaboration with fellow ProDev Backend Engineering learners and with ProDev Frontend Engineering learners who consume the backend APIs.

Communication and collaboration are supported through the #ProDevProjectNexus Discord channel.

Repository

GitHub Repository: alx-project-nexus

Live demo

The project is deployed and publicly available at:

https://alx-project-nexus-57m5.onrender.com/

Open the admin at https://alx-project-nexus-57m5.onrender.com/admin/ (use the admin credentials configured in the Render service environment variables).

Getting started (local dev)

This repo contains a Django project (nexus) and a catalog app implementing the product catalog APIs.
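For orientation, the catalog endpoints follow the usual Django REST Framework pattern; the sketch below is illustrative (model and field names are assumptions, not copied from the app):

# catalog/views.py (illustrative sketch -- the app's actual models,
# serializers, and routing may differ)
from rest_framework import serializers, viewsets

from catalog.models import Product  # assumed model


class ProductSerializer(serializers.ModelSerializer):
    class Meta:
        model = Product
        fields = ["id", "name", "description", "price"]


class ProductViewSet(viewsets.ReadOnlyModelViewSet):
    # Backs the product-list endpoint profiled later in this README.
    queryset = Product.objects.all().order_by("id")
    serializer_class = ProductSerializer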

To set up and run the project locally:

python -m venv venv; .\venv\Scripts\Activate.ps1
pip install -r requirements.txt
python manage.py migrate
python manage.py runserver

API docs (Swagger UI) will be available at http://127.0.0.1:8000/api/docs/.

Performance profiling

There is a small helper script at scripts/seed_and_profile.py to seed products and profile the product-list endpoint.

Run it after starting a dev server (Postgres is recommended for realistic results):

& .\venv\Scripts\Activate.ps1
# Seed via the Django shell (1000 products)
python manage.py shell -c "import scripts.seed_and_profile as s; s.seed(1000)"
# Or run the script directly, which will attempt to seed and then profile
python scripts/seed_and_profile.py --host http://localhost:8000 --count 1000

The script prints simple latency stats (avg/min/max) across multiple iterations.
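For reference, the core of such a helper can be as small as the following sketch (assumes the requests package and an assumed /api/products/ route; the real scripts/seed_and_profile.py may differ):

# Illustrative seed-and-profile loop (not the repo's actual script).
import statistics
import time

import requests  # assumed dependency


def profile(host="http://localhost:8000", iterations=20):
    # Hit the product-list endpoint repeatedly and collect wall-clock latency.
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        response = requests.get(f"{host}/api/products/", timeout=30)  # assumed route
        response.raise_for_status()
        timings.append(time.perf_counter() - start)
    print(f"avg={statistics.mean(timings):.4f}s "
          f"min={min(timings):.4f}s max={max(timings):.4f}s")


if __name__ == "__main__":
    profile()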

Seeding the database (local dev)

  1. Activate your virtualenv and install deps:

python -m venv venv; .\venv\Scripts\Activate.ps1
pip install -r requirements.txt

  2. Run the seed command to populate sample data:

python manage.py seed
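For reference, a custom management command like seed is typically laid out as below; this is a hedged sketch (assumed path catalog/management/commands/seed.py and assumed Category/Product models), not the repo's actual code:

# catalog/management/commands/seed.py (illustrative sketch)
from django.core.management.base import BaseCommand

from catalog.models import Category, Product  # assumed models


class Command(BaseCommand):
    help = "Populate the database with sample catalog data."

    def add_arguments(self, parser):
        parser.add_argument("--count", type=int, default=50)

    def handle(self, *args, **options):
        category, _ = Category.objects.get_or_create(name="Sample")
        for i in range(options["count"]):
            Product.objects.get_or_create(
                name=f"Sample product {i}",
                defaults={"category": category, "price": 9.99},
            )
        self.stdout.write(self.style.SUCCESS(f"Seeded {options['count']} products"))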

Docker Compose (Postgres + Django)

Start services with Docker Compose (requires Docker):

docker compose up --build

The Django app will run at http://127.0.0.1:8000 and Postgres at localhost:5432.

CI Secrets

This repository expects database and secret values to be provided via GitHub Actions secrets for CI jobs. Set the following in Settings → Secrets → Actions for the repository:

Rotation guidance

If any secret was accidentally committed, rotate it immediately:

  1. Generate a new secret value (DB password, API key, etc.).
  2. Update the service (rotate DB user/password in your Postgres host or managed DB).
  3. Update the corresponding GitHub Actions secret value.
  4. Re-run CI to ensure jobs succeed with the new secret.
  5. Optionally, remove the old value from git history using git filter-repo or BFG (coordinate with collaborators). Always rotate credentials even after a history rewrite.

Docker build notes

To rebuild and start only the web service:

docker compose build web
docker compose up web

Profiling with Docker Compose

After starting the stack with docker compose up --build, seed the database and run the profiling script from within the web container or from your host targeting the running server. Example (host):

# wait for migrations to finish, then on host
python scripts/seed_and_profile.py --host http://localhost:8000 --count 1000

Integration tests & CI

A minimal integration smoke test is included at tests/integration/test_smoke_db.py. It verifies the database connection and that a health endpoint responds.
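For illustration, such a smoke test can be as small as the sketch below (assumes pytest-django; the /health/ route is an assumption and the actual test file may differ):

# tests/integration/test_smoke_db.py (illustrative sketch)
import pytest
from django.db import connection


@pytest.mark.django_db
def test_database_connection():
    # A trivial round-trip proves the configured database is reachable.
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        assert cursor.fetchone() == (1,)


@pytest.mark.django_db
def test_health_endpoint_responds(client):
    # Assumed health-check route; adjust to the project's actual URL.
    response = client.get("/health/")
    assert response.status_code == 200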

Run the smoke tests locally (after migrations):

python manage.py migrate
python -m pytest tests/integration/test_smoke_db.py

To run the integration workflow on GitHub Actions, ensure the following repository secrets are set in Settings → Secrets → Actions:

You can trigger the workflow manually from the Actions tab (workflow_dispatch) or by pushing changes to the branch.

Media storage (optional: Amazon S3)

This project supports storing uploaded media (product images) either on local disk in development or on Amazon S3 in production via django-storages.

Quick setup (development - local media): no extra configuration is required; uploads are written to local disk under MEDIA_ROOT.

Quick setup (production - S3):

  1. Install dependencies:

pip install boto3 django-storages

  2. Set the following environment variables in your deployment environment:

  3. Confirm DEFAULT_FILE_STORAGE uses storages.backends.s3boto3.S3Boto3Storage when USE_S3=1 (this is handled by nexus/settings.py; a sketch of that switch appears below).
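For reference, the USE_S3 switch in nexus/settings.py typically looks like the following sketch (setting names are standard django-storages settings; the actual file may differ):

# nexus/settings.py (illustrative sketch of the storage switch)
import os
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

if os.environ.get("USE_S3") == "1":
    # Production: media served from S3 via django-storages.
    # AWS credentials come from the standard AWS_* environment variables.
    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_STORAGE_BUCKET_NAME = os.environ["AWS_STORAGE_BUCKET_NAME"]
    AWS_S3_REGION_NAME = os.environ.get("AWS_S3_REGION_NAME")
else:
    # Development: media written to local disk.
    MEDIA_URL = "/media/"
    MEDIA_ROOT = BASE_DIR / "media"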

Security and considerations

Example IAM policy (least privilege)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3AccessForMediaBucket",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-media-bucket",
        "arn:aws:s3:::your-media-bucket/*"
      ]
    }
  ]
}

Notes:

Deployment checklist for S3

Verify
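One hedged way to verify the bucket and credentials from the deployment environment is a boto3 round-trip like the sketch below (the key name is arbitrary; requires boto3 and the same AWS_* environment variables the app uses):

# Quick S3 sanity check (illustrative) -- exercises the Put/Get/Delete
# permissions granted by the IAM policy above.
import os

import boto3

bucket = os.environ["AWS_STORAGE_BUCKET_NAME"]
s3 = boto3.client("s3")

key = "media/_healthcheck.txt"
s3.put_object(Bucket=bucket, Key=key, Body=b"ok")
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
assert body == b"ok"
s3.delete_object(Bucket=bucket, Key=key)
print(f"S3 access to {bucket} verified")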

API documentation publishing

This repository generates and publishes OpenAPI documentation to GitHub Pages on pushes to main.

API docs (published)

View docs

Local preview

python manage.py spectacular --file openapi.json
npm install -g redoc-cli
npx redoc-cli bundle openapi.json -o openapi.html
# open openapi.html in your browser
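The spectacular command above comes from drf-spectacular; the live Swagger UI at /api/docs/ is typically wired up as in this sketch (the actual routes live in nexus/urls.py and may differ):

# nexus/urls.py (illustrative sketch of typical drf-spectacular wiring)
from django.urls import path
from drf_spectacular.views import SpectacularAPIView, SpectacularSwaggerView

urlpatterns = [
    # Raw OpenAPI schema, consumed by the Swagger UI view below.
    path("api/schema/", SpectacularAPIView.as_view(), name="schema"),
    path("api/docs/", SpectacularSwaggerView.as_view(url_name="schema"), name="swagger-ui"),
]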

Migrations included in this repository

Notes

PostgreSQL extensions and migration notes

The project includes a migration that creates the pg_trgm extension and adds trigram GIN indexes to accelerate substring/ILIKE searches on product name and description. A few important operational notes:

CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- verify extension
SELECT extname FROM pg_extension WHERE extname = 'pg_trgm';

-- verify index
\d+ catalog_product  -- look for catalog_product_trgm_idx and catalog_product_description_trgm_idx
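For reference, such a migration can be expressed with Django's built-in Postgres operations; the following is an illustrative sketch, not necessarily the repository's actual migration file:

# catalog/migrations/XXXX_trigram_indexes.py (illustrative sketch)
from django.contrib.postgres.operations import TrigramExtension
from django.db import migrations


class Migration(migrations.Migration):
    dependencies = [("catalog", "0001_initial")]  # assumed dependency

    operations = [
        # Runs CREATE EXTENSION IF NOT EXISTS pg_trgm (needs DB privilege).
        TrigramExtension(),
        migrations.RunSQL(
            sql="CREATE INDEX IF NOT EXISTS catalog_product_trgm_idx "
                "ON catalog_product USING gin (name gin_trgm_ops);",
            reverse_sql="DROP INDEX IF EXISTS catalog_product_trgm_idx;",
        ),
        migrations.RunSQL(
            sql="CREATE INDEX IF NOT EXISTS catalog_product_description_trgm_idx "
                "ON catalog_product USING gin (description gin_trgm_ops);",
            reverse_sql="DROP INDEX IF EXISTS catalog_product_description_trgm_idx;",
        ),
    ]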

Deployment runbook: ensure pg_trgm and run migrations

When deploying to a PostgreSQL database, ensure the pg_trgm extension is present (required by the project’s trigram GIN index migration) or run migrations with a user that has the privilege to create extensions.

Options:

1) Create the extension manually (recommended for environments with restricted DB users)

-- connect as a superuser or a user with the CREATE EXTENSION privilege
CREATE EXTENSION IF NOT EXISTS pg_trgm;

# or from a shell, using psql against a remote host
PGHOST=your-db-host PGPORT=5432 PGUSER=postgres PGPASSWORD=yourpw psql -d your_db -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"

# or against a running Postgres container (run on the host)
docker exec -i your_postgres_container psql -U postgres -d your_db -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"

2) Run migrations with an elevated DB user (simpler for automated deploys)

python manage.py migrate --noinput

Verification

SELECT extname FROM pg_extension WHERE extname = 'pg_trgm';
\d+ catalog_product
-- or
SELECT indexname FROM pg_indexes WHERE tablename = 'catalog_product';

Notes