This repository documents major learnings from the ProDev Backend Engineering program. It serves as a knowledge hub for backend technologies, concepts, challenges, and best practices covered during the program.
The following Entity Relationship Diagram (ERD) shows the database schema for the Project Nexus e-commerce system:
This project encourages collaboration among learners. Communication is supported through the #ProDevProjectNexus Discord channel.
GitHub Repository: alx-project-nexus
The project is deployed and publicly available at:
https://alx-project-nexus-57m5.onrender.com/
Open the admin at https://alx-project-nexus-57m5.onrender.com/admin/ (use the admin credentials configured in the Render service environment variables).
This repo contains a Django project (nexus) and a catalog app implementing the product catalog APIs.
There is a small helper script at scripts/seed_and_profile.py to seed products and profile the product-list endpoint. Run it after starting a dev server (Postgres recommended for realistic results):
& .\venv\Scripts\Activate.ps1
# Seed via Django shell (1000 products)
python manage.py shell -c "import scripts.seed_and_profile as s; s.seed(1000)"
# Or run the script which will attempt to seed then profile
python scripts/seed_and_profile.py --host http://localhost:8000 --count 1000
The script prints simple latency stats (avg/min/max) for multiple iterations.
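For orientation, the core of the profiling loop looks roughly like the sketch below; the real script may differ, and the /api/products/ path is an assumption:

```python
# sketch: time repeated GETs against the product-list endpoint and print avg/min/max
import time
import requests


def profile(host="http://localhost:8000", path="/api/products/", iterations=20):
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        response = requests.get(host + path, timeout=30)
        response.raise_for_status()
        latencies.append(time.perf_counter() - start)
    print(
        f"avg={sum(latencies) / len(latencies):.3f}s "
        f"min={min(latencies):.3f}s max={max(latencies):.3f}s"
    )


if __name__ == "__main__":
    profile()
```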
python -m venv venv; .\venv\Scripts\Activate.ps1
pip install -r requirements.txt
Configure the database connection by setting POSTGRES_HOST, POSTGRES_DB, POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_PORT in a .env file, then run:
python manage.py migrate
python manage.py runserver
API docs (Swagger UI) will be available at http://127.0.0.1:8000/api/docs/.
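For reference, the POSTGRES_* variables above typically end up in Django's DATABASES setting. A minimal sketch, assuming plain environment lookups (the real nexus/settings.py may use django-environ or another loader):

```python
# sketch: nexus/settings.py database configuration from POSTGRES_* env vars
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "nexus"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "localhost"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```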
Seeding the database (local dev)
python -m venv venv; .\venv\Scripts\Activate.ps1
pip install -r requirements.txt
python manage.py seed
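For reference, a seed command of this kind usually looks like the sketch below; the field names (price, description) and the catalog/management/commands/ location are assumptions, so check the app for the real implementation:

```python
# sketch: catalog/management/commands/seed.py
from django.core.management.base import BaseCommand

from catalog.models import Category, Product  # model names taken from this README


class Command(BaseCommand):
    help = "Seed the database with sample categories and products"

    def add_arguments(self, parser):
        parser.add_argument("--count", type=int, default=100)

    def handle(self, *args, **options):
        category, _ = Category.objects.get_or_create(name="Sample Category")
        for i in range(options["count"]):
            Product.objects.get_or_create(
                name=f"Sample Product {i}",
                defaults={
                    "description": "Seeded product",  # assumed field
                    "price": 9.99,                    # assumed field
                    "category": category,
                },
            )
        self.stdout.write(self.style.SUCCESS(f"Seeded {options['count']} products"))
```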
Docker Compose (Postgres + Django)
Start services with Docker Compose (requires Docker):
docker compose up --build
The Django app will run at http://127.0.0.1:8000 and Postgres at localhost:5432.
This repository expects database and secret values to be provided via GitHub Actions secrets for CI jobs. Set the following in Settings → Secrets → Actions for the repository:
POSTGRES_PASSWORD — password for the CI Postgres service
DJANGO_SECRET_KEY — Django secret key for CI (use a random 50+ character value)
If any secret was accidentally committed, rotate it immediately and scrub it from history with git filter-repo or BFG (coordinate with collaborators). Always rotate credentials even after a history rewrite.
Docker build notes
Dockerfile and .dockerignore are included to build the web image:
docker compose build web
docker compose up web
After starting the stack with docker compose up --build, seed the database and run the profiling script from within the web container or from your host targeting the running server. Example (host):
# wait for migrations to finish, then on host
python scripts/seed_and_profile.py --host http://localhost:8000 --count 1000
A minimal integration smoke test is included at tests/integration/test_smoke_db.py. It verifies the database connection and that a health endpoint responds.
Run the smoke tests locally (after migrations):
python manage.py migrate
python -m pytest tests/integration/test_smoke_db.py
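The test is roughly equivalent to the sketch below; the health endpoint path is an assumption, so check tests/integration/test_smoke_db.py for the real assertions:

```python
# sketch: verify DB connectivity and that a health endpoint responds
import pytest
from django.db import connection


@pytest.mark.django_db
def test_database_connection():
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        assert cursor.fetchone()[0] == 1


@pytest.mark.django_db
def test_health_endpoint(client):
    response = client.get("/health/")  # assumed path
    assert response.status_code == 200
```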
To run the integration workflow on GitHub Actions, ensure the following repository secrets are set in Settings → Secrets → Actions:
POSTGRES_PASSWORD
DJANGO_SECRET_KEY
You can trigger the workflow manually from the Actions tab (workflow_dispatch) or by pushing changes to the branch.
This project supports storing uploaded media (product images) either on local disk in development or on Amazon S3 in production via django-storages.
Quick setup (development - local media):
MEDIA_ROOT and MEDIA_URL are set (already configured in nexus/settings.py).
Uploaded files are stored locally under mediafiles/.
Quick setup (production - S3):
pip install boto3 django-storages
USE_S3=1 # enable S3-backed media
AWS_S3_BUCKET_NAME — the S3 bucket name for media
AWS_ACCESS_KEY_ID — IAM access key ID
AWS_SECRET_ACCESS_KEY — IAM secret access key
AWS_S3_REGION_NAME — AWS region (optional)
AWS_S3_CUSTOM_DOMAIN — optional custom domain for your bucket
DEFAULT_FILE_STORAGE uses storages.backends.s3boto3.S3Boto3Storage when USE_S3=1 (this is handled by nexus/settings.py).
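The toggle in nexus/settings.py can be pictured roughly like the sketch below; it is an illustration under the assumptions documented above, not a copy of the real file:

```python
# sketch: choose local media or S3-backed media depending on USE_S3
import os
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

USE_S3 = os.environ.get("USE_S3") == "1"

if USE_S3:
    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_STORAGE_BUCKET_NAME = os.environ.get("AWS_S3_BUCKET_NAME")
    AWS_S3_REGION_NAME = os.environ.get("AWS_S3_REGION_NAME")
    AWS_S3_CUSTOM_DOMAIN = os.environ.get("AWS_S3_CUSTOM_DOMAIN")
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are read from the environment by boto3
else:
    MEDIA_URL = "/media/"
    MEDIA_ROOT = BASE_DIR / "mediafiles"
```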
Security and considerations
Example IAM policy (least privilege)
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowS3AccessForMediaBucket",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::your-media-bucket",
"arn:aws:s3:::your-media-bucket/*"
]
}
]
}
Notes:
Replace your-media-bucket with the intended S3 bucket name.
Deployment checklist for S3
Set USE_S3=1 and provide the AWS_* env vars in the deployment environment.
If serving media through a CDN (for example CloudFront), set AWS_S3_CUSTOM_DOMAIN to the distribution domain.
Verify that the image field in API responses contains the expected URL.
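One way to spot-check this after a deploy is a small script against the list endpoint; the /api/products/ path here is an assumption, so adjust it to the real route:

```python
# sketch: confirm product image URLs point at the configured storage backend
import requests

resp = requests.get("https://alx-project-nexus-57m5.onrender.com/api/products/", timeout=30)
resp.raise_for_status()
data = resp.json()
items = data.get("results", data) if isinstance(data, dict) else data  # paginated or plain list
for item in items[:5]:
    print(item.get("image"))  # should be an S3 (or custom-domain) URL when USE_S3=1
```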
API documentation publishing
This repository generates and publishes OpenAPI documentation to GitHub Pages on pushes to main.
The workflow at /.github/workflows/publish-openapi.yml builds the OpenAPI JSON using drf-spectacular and bundles it with redoc-cli into a single index.html. By default it uses GITHUB_TOKEN to publish pages; to publish from a different account or with more permissions, set the PAGES_PAT secret.
API docs (published)
python manage.py spectacular --file openapi.json
# bundle with redoc-cli (requires Node/npm)
npx redoc-cli bundle openapi.json -o openapi.html
# open openapi.html in your browser
View docs
https://<your-github-username>.github.io/alx-project-nexus/
(replace with the repository owner if needed).
Local preview
python manage.py spectacular --file openapi.json
npm install -g redoc-cli
npx redoc-cli bundle openapi.json -o openapi.html
# open openapi.html in your browser
Migrations included in this repository
This repository includes migrations for the catalog app. Key migrations:
0001_initial.py — initial models for Product/Category.
0002_product_catalog_pro_categor_7c1c1f_idx_and_more.py — additional indexes and constraints.
0003_add_product_image.py — adds the image field to Product.
0004_add_name_index.py — additional name index.
0005_add_trigram_index.py — creates the pg_trgm extension and trigram GIN indexes (PostgreSQL only).
Notes
Run python manage.py migrate to apply these migrations.
The 0005_add_trigram_index.py migration is guarded so it will no-op on non-Postgres databases; for Postgres you may need permission to run CREATE EXTENSION IF NOT EXISTS pg_trgm (see the “PostgreSQL extensions and migration notes” section below).
PostgreSQL extensions and migration notes
The project includes a migration that creates the pg_trgm extension and adds trigram GIN indexes to accelerate substring/ILIKE searches on product name and description. A few important operational notes:
Privileges: Creating PostgreSQL extensions requires database privileges (typically a superuser or a user with CREATE EXTENSION rights). If your deployment uses a restricted DB user, the migration that attempts to CREATE EXTENSION IF NOT EXISTS pg_trgm may fail with a permissions error.
Managed Postgres (RDS / Cloud SQL / etc.): Many managed services allow CREATE EXTENSION for extensions like pg_trgm, but you may need to run the command as the master/superuser or enable the extension through the provider UI. Check your provider documentation.
Recommended options to ensure success: create the extension manually ahead of time (for example, run the statement below as a superuser), or run migrations with a database user that has CREATE EXTENSION privileges.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
If you cannot create extensions in your environment, the migration is safe to skip — the migration code is guarded to only attempt the extension/index creation on PostgreSQL. However, search performance will not benefit from trigram indexing in that case.
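Conceptually, the guard looks like the sketch below: the migration checks the database vendor and only issues the extension/index statements on PostgreSQL (the actual migration file may be structured differently):

```python
# sketch: vendor-guarded migration step; becomes a no-op on non-PostgreSQL databases
from django.db import migrations


def create_trigram_indexes(apps, schema_editor):
    if schema_editor.connection.vendor != "postgresql":
        return  # skip on SQLite, MySQL, etc.
    schema_editor.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
    schema_editor.execute(
        "CREATE INDEX IF NOT EXISTS catalog_product_trgm_idx "
        "ON catalog_product USING gin (name gin_trgm_ops);"
    )


class Migration(migrations.Migration):
    dependencies = [("catalog", "0004_add_name_index")]
    operations = [
        migrations.RunPython(create_trigram_indexes, migrations.RunPython.noop),
    ]
```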
Verification: After running migrations on a Postgres database, verify the extension exists and the indexes are present:
-- verify extension
SELECT extname FROM pg_extension WHERE extname = 'pg_trgm';
-- verify index
\d+ catalog_product -- look for catalog_product_trgm_idx and catalog_product_description_trgm_idx
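Once the extension and indexes are in place, case-insensitive substring searches issued through the ORM compile to ILIKE on PostgreSQL and can use the trigram indexes, for example:

```python
# sketch: substring search that the trigram GIN indexes can accelerate on Postgres
from catalog.models import Product

# compiles to: ... WHERE name ILIKE '%phone%'
matches = Product.objects.filter(name__icontains="phone")
print(matches.count())
```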
Deployment runbook: ensure pg_trgm and run migrations
When deploying to a PostgreSQL database, ensure the pg_trgm extension is present (required by the project’s trigram GIN index migration) or run migrations with a user that has the privilege to create extensions.
Options:
1) Create the extension manually (recommended for environments with restricted DB users)
-- connect as a superuser or a user with CREATE EXTENSION privilege
CREATE EXTENSION IF NOT EXISTS pg_trgm;
PGHOST=your-db-host PGPORT=5432 PGUSER=postgres PGPASSWORD=yourpw psql -d your_db -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"
# run this on the host while the postgres container is running
docker exec -i your_postgres_container psql -U postgres -d your_db -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"
2) Run migrations with an elevated DB user (simpler for automated deploys)
python manage.py migrate --noinput
Verification
SELECT extname FROM pg_extension WHERE extname = 'pg_trgm';
\d+ catalog_product
# or
SELECT indexname FROM pg_indexes WHERE tablename = 'catalog_product';
Notes
Managed providers may require you to run CREATE EXTENSION as the master user. Consult your provider docs.