fix: harden admin access, repair ORM joins, and add migration/tests
@@ -11,6 +11,10 @@ SECRET_KEY=your-secret-key-here-change-in-production
API_V1_STR=/api/v1
PROJECT_NAME=IRT Bank Soal
ENVIRONMENT=development
ENABLE_ADMIN=false
ADMIN_USERNAME=admin
ADMIN_PASSWORD=change-me
ADMIN_SESSION_EXPIRE_SECONDS=3600

# OpenRouter (AI Generation)
OPENROUTER_API_KEY=your-openrouter-api-key-here

339 DEFECT_GAP_AUDIT_REPORT.md Normal file
@@ -0,0 +1,339 @@
# Yellow Bank Soal - Defect and Gap Audit Report

Date: 2026-03-31
Auditor: Codex (GPT-5)
Scope: Static code trace, dependency-aware runtime checks, targeted test execution

## 1) Executive Summary

This audit identified critical reliability and security risks that should be addressed before production rollout:

- P0: ORM relationship configuration is broken for key entities (`Tryout` to `Item`/`Session`/`TryoutStats`), causing mapper setup failure.
- P0: Admin panel is mounted while authentication provider registration is disabled.
- P1: Multiple SQL aggregation queries use invalid `func.cast(..., type_=func.INTEGER)` patterns that will fail at runtime.
- P1: Normalization parameter retrieval uses incorrect scalar fetch semantics for multi-column queries.
- P1: Reporting logic contains at least one mathematically invalid metric computation (`avg_nn`).

Overall status: **Not production-ready** until P0 and P1 findings are remediated.

## 2) Methodology

- Repository structure and architecture trace across routers, services, models, schemas, and admin modules.
- Runtime checks after dependency install in a local virtual environment.
- Targeted validation of SQLAlchemy mapper setup and SQL expression construction.
- Existing test suite execution.

## 3) Verification Results

- Dependency install: successful (`.venv`, requirements installed).
- Test execution: `.venv/bin/pytest -q` -> `3 passed`.
- Mapper validation: failed (`NoForeignKeysError`) when forcing mapper configuration.
- SQLAlchemy expression validation: failed for `func.cast(..., type_=func.INTEGER)` usage.

Note: passing tests do not indicate system correctness due to weak assertion coverage (see finding P2-02).

## 4) Findings

## P0 Findings

### P0-01: Broken ORM relationship joins (mapper configuration failure)

Severity: P0 (Critical)
Category: Data model / Runtime stability

Description:

`Tryout` relationships to `Item`, `Session`, and `TryoutStats` are defined, but the corresponding child fields (`tryout_id`) are plain strings without foreign key constraints or explicit `primaryjoin` definitions, so SQLAlchemy cannot infer the join conditions for relationship mapping.

Evidence:

- `app/models/item.py:67` and `app/models/item.py:169`
- `app/models/session.py:79` and `app/models/session.py:160`
- `app/models/tryout_stats.py:52` and `app/models/tryout_stats.py:120`
- `app/models/tryout.py:162-170`

Runtime proof:

- `configure_mappers()` raises `sqlalchemy.exc.NoForeignKeysError` with a message indicating no FK between `items` and `tryouts`.

Impact:

- Relationship loading may fail whenever mappings are configured.
- Admin/data operations depending on relationship loading are unstable.
- High risk of runtime failures in production paths.

Recommendation:

- Add proper FK model constraints (or an explicit `primaryjoin` if a composite mapping is intentional).
- Prefer the canonical FK design:
  - `items.tryout_pk` -> `tryouts.id`
  - `sessions.tryout_pk` -> `tryouts.id`
  - `tryout_stats.tryout_pk` -> `tryouts.id`
- Keep the `tryout_id` business identifier as a separate indexed field if needed.

---

### P0-02: Admin panel mounted with auth provider not registered

Severity: P0 (Critical)
Category: Security / Access control

Description:

The admin app is mounted in the main app, but registration of the auth provider is commented out.

Evidence:

- `app/main.py:176` (admin mounted on `/admin`)
- `app/admin.py:607` (`admin_app.settings.auth_provider = AdminAuthProvider()` commented out)

Impact:

- Potential unauthorized access to administrative resources.
- High security exposure, depending on default behavior and deployment setup.

Recommendation:

- Enable and enforce the auth provider before exposing `/admin`.
- Add an environment-aware hard gate that disables admin in production until auth is verified.
- Add integration tests for admin route authentication and authorization.
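
The hard-gate idea can be sketched in plain Python (the helper name is hypothetical; in the app it would guard the `app.mount("/admin", ...)` call, and the key names mirror the `.env` keys):

```python
# Hypothetical gate: mount /admin only when explicitly enabled AND auth
# credentials are configured; both conditions must hold.
def admin_allowed(env: dict) -> bool:
    enabled = env.get("ENABLE_ADMIN", "false").lower() == "true"
    has_auth = bool(env.get("ADMIN_USERNAME")) and bool(env.get("ADMIN_PASSWORD"))
    return enabled and has_auth

print(admin_allowed({"ENABLE_ADMIN": "true",
                     "ADMIN_USERNAME": "admin",
                     "ADMIN_PASSWORD": "change-me"}))  # True
print(admin_allowed({"ENABLE_ADMIN": "true"}))   # False: no credentials
print(admin_allowed({"ENABLE_ADMIN": "false"}))  # False: disabled
```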

## P1 Findings

### P1-01: Invalid SQLAlchemy cast pattern in aggregate queries

Severity: P1 (High)
Category: Query correctness / Runtime stability

Description:

Several queries use `func.cast(..., type_=func.INTEGER)`. This is not valid SQLAlchemy typing usage and fails during expression construction.

Evidence:

- `app/services/ctt_scoring.py:193`
- `app/services/reporting.py:418`
- `app/services/reporting.py:681`
- `app/routers/tryouts.py:295`

Impact:

- Runtime failures in calibration, reporting, and CTT calculations.

Recommendation:

- Replace with the canonical cast:
  - `from sqlalchemy import cast, Integer`
  - `func.sum(cast(UserAnswer.is_correct, Integer))`
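
The corrected pattern can be exercised end to end against an in-memory SQLite table (a stand-in for `user_answers`; only the two relevant columns are modeled):

```python
# Sketch of the corrected cast, run against a minimal in-memory table.
from sqlalchemy import (
    Boolean, Column, Integer, MetaData, Table, cast, create_engine, func, select,
)

metadata = MetaData()
user_answers = Table(
    "user_answers", metadata,
    Column("id", Integer, primary_key=True),
    Column("is_correct", Boolean, nullable=False),
)

engine = create_engine("sqlite+pysqlite:///:memory:")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(user_answers.insert(), [
        {"is_correct": True}, {"is_correct": True}, {"is_correct": False},
    ])
    # Canonical form: cast(column, Integer), not func.cast(..., type_=func.INTEGER)
    total = conn.execute(
        select(func.sum(cast(user_answers.c.is_correct, Integer)))
    ).scalar_one()
print(total)  # 2 of the 3 rows are correct -> 2
```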

---

### P1-02: Incorrect multi-column fetch in normalization service

Severity: P1 (High)
Category: Business logic / Runtime stability

Description:

Queries selecting `(Tryout.static_rataan, Tryout.static_sb)` use `scalar_one_or_none()` and then attempt tuple unpacking. `scalar_one_or_none()` returns a single scalar (the first column), not both columns.

Evidence:

- `app/services/normalization.py:311`, `:318`
- `app/services/normalization.py:355`, `:377`

Impact:

- Type/logic errors during normalization parameter retrieval.
- Can break score normalization flows.

Recommendation:

- Use `one_or_none()` for tuple rows:
  - `row = result.one_or_none()`
  - `rataan, sb = row`
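
The contrast can be demonstrated against an in-memory SQLite table (a simplified stand-in for `tryouts` with only the two selected columns):

```python
# scalar_one_or_none() yields only the first column of the row;
# one_or_none() yields the whole row, which can then be unpacked.
from sqlalchemy import Column, Float, Integer, MetaData, Table, create_engine, select

metadata = MetaData()
tryouts = Table(
    "tryouts", metadata,
    Column("id", Integer, primary_key=True),
    Column("static_rataan", Float),
    Column("static_sb", Float),
)

engine = create_engine("sqlite+pysqlite:///:memory:")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(tryouts.insert(), [{"static_rataan": 500.0, "static_sb": 100.0}])
    stmt = select(tryouts.c.static_rataan, tryouts.c.static_sb)
    # Bug: returns only the first column; tuple unpacking of this then fails.
    first_col = conn.execute(stmt).scalar_one_or_none()
    # Fix: fetch the whole row, then unpack.
    row = conn.execute(stmt).one_or_none()
    rataan, sb = row
print(first_col, rataan, sb)  # 500.0 500.0 100.0
```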

---

### P1-03: Invalid `avg_nn` computation in tryout comparison report

Severity: P1 (High)
Category: Reporting correctness

Description:

`avg_nn` is currently calculated as `stats.rataan + 500`, which does not match the normalization formula and is mathematically incorrect.

Evidence:

- `app/services/reporting.py:713`

Impact:

- Misleading management and educational performance reports.
- Incorrect decisions driven by bad analytics output.

Recommendation:

- Compute the average NN directly from session-level NN values.
- If those are unavailable, explicitly return `None` and annotate the report as incomplete.
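
The recommended computation can be sketched with an illustrative helper (session-level NN values, with `None` for sessions that were never normalized):

```python
# Average NN over session-level values; None when no session carries an NN.
from statistics import mean

def avg_nn(session_nns):
    values = [nn for nn in session_nns if nn is not None]
    return mean(values) if values else None

print(avg_nn([480, 520, None, 500]))  # (480 + 520 + 500) / 3 = 500
print(avg_nn([]))                     # None -> report marked incomplete
```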

---

### P1-04: Timestamp defaults use string literal `"NOW()"` instead of a SQL function

Severity: P1 (High)
Category: Data integrity / Schema correctness

Description:

Model fields use `server_default="NOW()"`, which compiles to the string literal default `'NOW()'`, not a database function call.

Evidence:

- Present in all models, e.g. `app/models/website.py:51`, `:56`

Impact:

- Incorrect or invalid default timestamps depending on DB behavior.
- Inconsistent created/updated tracking.

Recommendation:

- Use SQLAlchemy expressions:
  - `server_default=func.now()`
  - `onupdate=func.now()` or a DB trigger strategy.
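
The difference shows up directly in the compiled DDL (generic dialect; table names are illustrative): a plain string `server_default` is rendered as a quoted literal, while `func.now()` is rendered as the SQL function.

```python
# Compare the DDL produced by the string default vs. the func.now() default.
from sqlalchemy import Column, DateTime, Integer, MetaData, Table, func
from sqlalchemy.schema import CreateTable

metadata = MetaData()
good = Table(
    "good", metadata,
    Column("id", Integer, primary_key=True),
    Column("created_at", DateTime(timezone=True), server_default=func.now()),
)
bad = Table(
    "bad", metadata,
    Column("id", Integer, primary_key=True),
    Column("created_at", DateTime(timezone=True), server_default="NOW()"),
)

print(str(CreateTable(good)))  # DEFAULT now()   -- function call
print(str(CreateTable(bad)))   # DEFAULT 'NOW()' -- quoted string literal
```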

## P2 Findings

### P2-01: Multi-tenant consistency gap in session creation endpoint

Severity: P2 (Medium)
Category: Authorization / Data isolation

Description:

Most routes enforce tenant context via the `X-Website-ID` header, but session creation accepts `website_id` in the request body without a header-based tenant check.

Evidence:

- `app/routers/sessions.py:341-390`

Impact:

- Increased risk of cross-tenant write mistakes.

Recommendation:

- Enforce `X-Website-ID` across all write endpoints.
- Reject the request when the header and body tenant values disagree.
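
The mismatch check can be sketched as a plain function (the helper name is hypothetical; in FastAPI it would live in a dependency shared by all write endpoints):

```python
# Resolve the effective tenant id from header and body; reject mismatches.
def resolve_website_id(header_value, body_value):
    if header_value is None:
        raise ValueError("X-Website-ID header required on write endpoints")
    if body_value is not None and int(body_value) != int(header_value):
        raise ValueError("tenant mismatch between header and body")
    return int(header_value)

print(resolve_website_id("7", 7))     # 7: header and body agree
print(resolve_website_id("7", None))  # 7: header alone is authoritative
```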

---

### P2-02: Test suite has weak assertions

Severity: P2 (Medium)
Category: Test quality

Description:

Current tests are print-oriented and mostly non-assertive, allowing false confidence.

Evidence:

- `tests/test_normalization.py` uses print output and contains no robust asserts in its core test blocks.

Impact:

- Regressions can pass CI undetected.

Recommendation:

- Rewrite the tests with deterministic assertions.
- Add coverage for:
  - mapper configuration
  - report calculations
  - normalization path selection
  - admin auth enforcement
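
As an illustration of the rewrite, a print-oriented check becomes a deterministic assertion. The formula below (`NN = 500 + 100 * (nm - rataan) / sb`, a standard z-score scaling) is an assumed example, not necessarily the project's exact normalization:

```python
# Assumed normalization formula, used only to illustrate assertion style.
def normalize_nn(nm: float, rataan: float, sb: float) -> float:
    return 500 + 100 * (nm - rataan) / sb

def test_normalize_nn_centered():
    # Instead of print(normalize_nn(...)), assert exact expected values.
    assert normalize_nn(600, 600, 100) == 500  # at the mean -> 500
    assert normalize_nn(700, 600, 100) == 600  # one SD above -> 600

test_normalize_nn_centered()
print("ok")
```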

---

### P2-03: Scheduled reports are in-memory only

Severity: P2 (Medium)
Category: Operational reliability

Description:

The scheduled-report registry is process-local (a `_scheduled_reports` dict), not persisted.

Evidence:

- `app/services/reporting.py:1344`

Impact:

- Schedules disappear on restart.
- Inconsistent behavior in multi-process deployments.

Recommendation:

- Move scheduling metadata to persistent storage (DB/Redis plus a worker scheduler).
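
A minimal sketch of the persistence idea (assumed schema; an in-memory SQLite connection stands in for the application database): a small table keyed by report name survives process restarts, unlike a module-level dict.

```python
# Persist schedule metadata in a table instead of a process-local dict.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the app database
conn.execute(
    "CREATE TABLE scheduled_reports ("
    " name TEXT PRIMARY KEY, cron TEXT NOT NULL, recipient TEXT NOT NULL)"
)

def upsert_schedule(name, cron, recipient):
    # Idempotent upsert: re-registering a schedule updates it in place.
    conn.execute(
        "INSERT INTO scheduled_reports(name, cron, recipient) VALUES (?, ?, ?) "
        "ON CONFLICT(name) DO UPDATE SET cron=excluded.cron, "
        "recipient=excluded.recipient",
        (name, cron, recipient),
    )

upsert_schedule("weekly_summary", "0 6 * * 1", "ops@example.com")
upsert_schedule("weekly_summary", "0 7 * * 1", "ops@example.com")  # update wins
row = conn.execute("SELECT cron FROM scheduled_reports WHERE name=?",
                   ("weekly_summary",)).fetchone()
print(row[0])  # 0 7 * * 1
```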

---

### P2-04: Migration gap

Severity: P2 (Medium)
Category: Delivery process

Description:

Alembic setup exists, but no migration version files are present.

Impact:

- Schema evolution lacks reproducibility and deployment confidence.

Recommendation:

- Generate and commit a baseline migration and adopt an incremental migration workflow.

## 5) Risk Register Snapshot

- Security risk: high (admin auth gap).
- Runtime stability risk: high (ORM mapping + invalid query casts).
- Data correctness risk: high (reporting and timestamp defaults).
- Delivery risk: medium (test quality and migration process).

## 6) Remediation Priority Plan

Phase 1 (Immediate, blocks release):

1. Fix admin auth enforcement and gate `/admin`.
2. Repair the ORM join/FK model design and validate mapper configuration.
3. Replace invalid cast expressions with valid SQLAlchemy casts.
4. Fix the normalization tuple-fetch bug.

Phase 2 (Short-term):

1. Correct reporting formulas (`avg_nn` and related derivations).
2. Correct timestamp defaults across all models.
3. Enforce tenant header consistency on all write paths.

Phase 3 (Hardening):

1. Introduce robust assertion-based tests with a CI gate.
2. Persist scheduled report metadata.
3. Establish migration discipline (`alembic/versions` under source control).

## 7) Acceptance Criteria for Closure

All of the following must be true:

- Mapper configuration passes (`configure_mappers()` raises no errors).
- All P0 and P1 findings are resolved and validated by tests.
- Admin endpoints require verified auth in the target environment.
- Query and report correctness are covered by automated tests.
- Migration scripts exist and apply cleanly from an empty DB to latest.

162 TESTING_WALKTHROUGH.md Normal file
@@ -0,0 +1,162 @@
# Yellow Bank Soal - Testing Walkthrough Guide

Date: 2026-03-31

This guide walks through local verification after the defect-fix batch:

- model mapping/FK fixes
- admin runtime gating and auth wiring
- query and normalization fixes
- migration baseline setup

## 1) Prerequisites

- Python 3.10+
- PostgreSQL (for integration/API tests)
- Redis (required when `ENABLE_ADMIN=true`)

## 2) Environment Setup

From the project root:

```bash
python3 -m venv .venv
.venv/bin/pip install -r requirements.txt
```

Create `.env` from `.env.example` and set at minimum:

```env
DATABASE_URL=postgresql+asyncpg://postgres:postgres@localhost:5432/irt_bank_soal
REDIS_URL=redis://localhost:6379/0
ENVIRONMENT=development

# Keep false unless explicitly testing admin:
ENABLE_ADMIN=false

# Required only when ENABLE_ADMIN=true
ADMIN_USERNAME=admin
ADMIN_PASSWORD=change-me
ADMIN_SESSION_EXPIRE_SECONDS=3600
```

## 3) Fast Checks (No DB Required)

### 3.1 Compile check

```bash
.venv/bin/python -m compileall -q app tests alembic
```

Expected: no errors.

### 3.2 Model mapping smoke test

```bash
.venv/bin/python - <<'PY'
from sqlalchemy.orm import configure_mappers
import app.models  # noqa: F401
configure_mappers()
print("mappers_ok")
PY
```

Expected output:

```text
mappers_ok
```

### 3.3 Unit tests

```bash
.venv/bin/pytest -q
```

Expected: all tests pass.

## 4) Migration Checks

### 4.1 Confirm the migration chain

```bash
.venv/bin/alembic history
```

Expected head:

```text
<base> -> 20260331_000001 (head), initial schema
```

### 4.2 Offline SQL generation (safe dry run)

```bash
.venv/bin/alembic upgrade head --sql > /tmp/alembic_upgrade.sql
head -n 30 /tmp/alembic_upgrade.sql
```

Expected: a SQL script containing `CREATE TABLE` for `websites`, `tryouts`, `users`, `items`, `sessions`, `tryout_stats`, and `user_answers`.

### 4.3 Apply the migration to the DB (online)

```bash
.venv/bin/alembic upgrade head
```

Expected: the upgrade completes without error.

## 5) API Smoke Test

Start the app:

```bash
.venv/bin/uvicorn app.main:app --reload --port 8000
```

Then check:

```bash
curl -s http://127.0.0.1:8000/health | jq
```

Expected:

- `status` is `healthy`, or `degraded` if the DB is unavailable
- the API responds with JSON (no startup crash)

## 6) Admin Auth Test (Optional)

Only needed for explicit admin verification.

1. Ensure Redis is running.
2. Set:

   ```env
   ENABLE_ADMIN=true
   ADMIN_USERNAME=admin
   ADMIN_PASSWORD=change-me
   ```

3. Start the server and open `http://127.0.0.1:8000/admin/login`.
4. Validate:
   - invalid credentials -> login page with an error
   - valid credentials -> redirect to the admin dashboard
   - logout -> access token removed and redirect to login

## 7) Regression Targets Checklist

- [ ] ORM mapper configuration succeeds (`mappers_ok`)
- [ ] normalization tests are assertion-based and passing
- [ ] cast-related query paths run without SQLAlchemy cast errors
- [ ] `avg_nn` in the tryout comparison is derived from session NN aggregates
- [ ] admin endpoints are disabled when `ENABLE_ADMIN=false`
- [ ] admin login works only with configured credentials when enabled
- [ ] Alembic migration history and SQL generation are valid

## 8) Troubleshooting

- `ModuleNotFoundError`: use `.venv/bin/python` and `.venv/bin/pytest`.
- Admin startup errors: verify `ENABLE_ADMIN`, `ADMIN_USERNAME`, and `ADMIN_PASSWORD`.
- Redis errors on admin login: ensure `REDIS_URL` is reachable.
- Migration connection errors: verify `DATABASE_URL` and DB service availability.

280 alembic/versions/20260331_000001_initial_schema.py Normal file
@@ -0,0 +1,280 @@
"""initial schema

Revision ID: 20260331_000001
Revises:
Create Date: 2026-03-31 12:30:00
"""

from typing import Sequence, Union

from alembic import context, op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "20260331_000001"
down_revision: Union[str, None] = None
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def _table_exists(name: str) -> bool:
    if context.is_offline_mode():
        return False
    return sa.inspect(op.get_bind()).has_table(name)

def upgrade() -> None:
    if not _table_exists("websites"):
        op.create_table(
            "websites",
            sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
            sa.Column("site_url", sa.String(length=512), nullable=False),
            sa.Column("site_name", sa.String(length=255), nullable=False),
            sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.PrimaryKeyConstraint("id"),
        )
        op.create_index("ix_websites_site_url", "websites", ["site_url"], unique=True)

    if not _table_exists("tryouts"):
        op.create_table(
            "tryouts",
            sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
            sa.Column("website_id", sa.Integer(), nullable=False),
            sa.Column("tryout_id", sa.String(length=255), nullable=False),
            sa.Column("name", sa.String(length=255), nullable=False),
            sa.Column("description", sa.String(length=1000), nullable=True),
            sa.Column("scoring_mode", sa.String(length=50), nullable=False),
            sa.Column("selection_mode", sa.String(length=50), nullable=False),
            sa.Column("normalization_mode", sa.String(length=50), nullable=False),
            sa.Column("min_sample_for_dynamic", sa.Integer(), nullable=False),
            sa.Column("static_rataan", sa.Float(), nullable=False),
            sa.Column("static_sb", sa.Float(), nullable=False),
            sa.Column("ai_generation_enabled", sa.Boolean(), nullable=False),
            sa.Column("hybrid_transition_slot", sa.Integer(), nullable=True),
            sa.Column("min_calibration_sample", sa.Integer(), nullable=False),
            sa.Column("theta_estimation_method", sa.String(length=50), nullable=False),
            sa.Column("fallback_to_ctt_on_error", sa.Boolean(), nullable=False),
            sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.ForeignKeyConstraint(["website_id"], ["websites.id"], ondelete="CASCADE", onupdate="CASCADE"),
            sa.PrimaryKeyConstraint("id"),
            sa.UniqueConstraint("website_id", "tryout_id", name="uq_tryouts_website_id_tryout_id"),
            sa.CheckConstraint("min_sample_for_dynamic > 0", name="ck_min_sample_positive"),
            sa.CheckConstraint("static_rataan > 0", name="ck_static_rataan_positive"),
            sa.CheckConstraint("static_sb > 0", name="ck_static_sb_positive"),
            sa.CheckConstraint("min_calibration_sample > 0", name="ck_min_calibration_positive"),
        )
        op.create_index("ix_tryouts_website_id", "tryouts", ["website_id"], unique=False)
        op.create_index("ix_tryouts_tryout_id", "tryouts", ["tryout_id"], unique=False)

    if not _table_exists("users"):
        op.create_table(
            "users",
            sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
            sa.Column("wp_user_id", sa.String(length=255), nullable=False),
            sa.Column("website_id", sa.Integer(), nullable=False),
            sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.ForeignKeyConstraint(["website_id"], ["websites.id"], ondelete="CASCADE", onupdate="CASCADE"),
            sa.PrimaryKeyConstraint("id"),
            sa.UniqueConstraint("wp_user_id", "website_id", name="uq_users_wp_user_id_website_id"),
        )
        op.create_index("ix_users_wp_user_id", "users", ["wp_user_id"], unique=False)
        op.create_index("ix_users_website_id", "users", ["website_id"], unique=False)

    if not _table_exists("items"):
        op.create_table(
            "items",
            sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
            sa.Column("tryout_id", sa.String(length=255), nullable=False),
            sa.Column("website_id", sa.Integer(), nullable=False),
            sa.Column("slot", sa.Integer(), nullable=False),
            sa.Column("level", sa.String(length=50), nullable=False),
            sa.Column("stem", sa.Text(), nullable=False),
            sa.Column("options", sa.JSON(), nullable=False),
            sa.Column("correct_answer", sa.String(length=10), nullable=False),
            sa.Column("explanation", sa.Text(), nullable=True),
            sa.Column("ctt_p", sa.Float(), nullable=True),
            sa.Column("ctt_bobot", sa.Float(), nullable=True),
            sa.Column("ctt_category", sa.String(length=50), nullable=True),
            sa.Column("irt_b", sa.Float(), nullable=True),
            sa.Column("irt_se", sa.Float(), nullable=True),
            sa.Column("calibrated", sa.Boolean(), nullable=False),
            sa.Column("calibration_sample_size", sa.Integer(), nullable=False),
            sa.Column("generated_by", sa.String(length=50), nullable=False),
            sa.Column("ai_model", sa.String(length=255), nullable=True),
            sa.Column("basis_item_id", sa.Integer(), nullable=True),
            sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.ForeignKeyConstraint(["website_id"], ["websites.id"], ondelete="CASCADE", onupdate="CASCADE"),
            sa.ForeignKeyConstraint(["basis_item_id"], ["items.id"], ondelete="SET NULL", onupdate="CASCADE"),
            sa.ForeignKeyConstraint(
                ["website_id", "tryout_id"],
                ["tryouts.website_id", "tryouts.tryout_id"],
                name="fk_items_tryout",
                ondelete="CASCADE",
                onupdate="CASCADE",
            ),
            sa.PrimaryKeyConstraint("id"),
            sa.CheckConstraint("irt_b IS NULL OR (irt_b >= -3 AND irt_b <= 3)", name="ck_irt_b_range"),
            sa.CheckConstraint("ctt_p IS NULL OR (ctt_p >= 0 AND ctt_p <= 1)", name="ck_ctt_p_range"),
            sa.CheckConstraint("ctt_bobot IS NULL OR (ctt_bobot >= 0 AND ctt_bobot <= 1)", name="ck_ctt_bobot_range"),
            sa.CheckConstraint("slot > 0", name="ck_slot_positive"),
        )
        op.create_index("ix_items_tryout_id", "items", ["tryout_id"], unique=False)
        op.create_index("ix_items_website_id", "items", ["website_id"], unique=False)
        op.create_index("ix_items_basis_item_id", "items", ["basis_item_id"], unique=False)
        op.create_index("ix_items_calibrated", "items", ["calibrated"], unique=False)
        op.create_index(
            "ix_items_tryout_id_website_id_slot",
            "items",
            ["tryout_id", "website_id", "slot", "level"],
            unique=True,
        )
    if not _table_exists("sessions"):
        op.create_table(
            "sessions",
            sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
            sa.Column("session_id", sa.String(length=255), nullable=False),
            sa.Column("wp_user_id", sa.String(length=255), nullable=False),
            sa.Column("website_id", sa.Integer(), nullable=False),
            sa.Column("tryout_id", sa.String(length=255), nullable=False),
            sa.Column("start_time", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.Column("end_time", sa.DateTime(timezone=True), nullable=True),
            sa.Column("is_completed", sa.Boolean(), nullable=False),
            sa.Column("scoring_mode_used", sa.String(length=50), nullable=False),
            sa.Column("total_benar", sa.Integer(), nullable=False),
            sa.Column("total_bobot_earned", sa.Float(), nullable=False),
            sa.Column("NM", sa.Integer(), quote=True, nullable=True),
            sa.Column("NN", sa.Integer(), quote=True, nullable=True),
            sa.Column("theta", sa.Float(), nullable=True),
            sa.Column("theta_se", sa.Float(), nullable=True),
            sa.Column("rataan_used", sa.Float(), nullable=True),
            sa.Column("sb_used", sa.Float(), nullable=True),
            sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.ForeignKeyConstraint(["website_id"], ["websites.id"], ondelete="CASCADE", onupdate="CASCADE"),
            sa.ForeignKeyConstraint(
                ["website_id", "tryout_id"],
                ["tryouts.website_id", "tryouts.tryout_id"],
                name="fk_sessions_tryout",
                ondelete="CASCADE",
                onupdate="CASCADE",
            ),
            sa.ForeignKeyConstraint(
                ["wp_user_id", "website_id"],
                ["users.wp_user_id", "users.website_id"],
                name="fk_sessions_user",
                ondelete="CASCADE",
                onupdate="CASCADE",
            ),
            sa.PrimaryKeyConstraint("id"),
            sa.UniqueConstraint("session_id"),
            sa.CheckConstraint('"NM" IS NULL OR ("NM" >= 0 AND "NM" <= 1000)', name="ck_nm_range"),
            sa.CheckConstraint('"NN" IS NULL OR ("NN" >= 0 AND "NN" <= 1000)', name="ck_nn_range"),
            sa.CheckConstraint("theta IS NULL OR (theta >= -3 AND theta <= 3)", name="ck_theta_range"),
            sa.CheckConstraint("total_benar >= 0", name="ck_total_benar_non_negative"),
            sa.CheckConstraint("total_bobot_earned >= 0", name="ck_total_bobot_non_negative"),
        )
        op.create_index("ix_sessions_session_id", "sessions", ["session_id"], unique=True)
        op.create_index("ix_sessions_wp_user_id", "sessions", ["wp_user_id"], unique=False)
        op.create_index("ix_sessions_website_id", "sessions", ["website_id"], unique=False)
        op.create_index("ix_sessions_tryout_id", "sessions", ["tryout_id"], unique=False)
        op.create_index("ix_sessions_is_completed", "sessions", ["is_completed"], unique=False)

    if not _table_exists("tryout_stats"):
        op.create_table(
            "tryout_stats",
            sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
            sa.Column("website_id", sa.Integer(), nullable=False),
            sa.Column("tryout_id", sa.String(length=255), nullable=False),
            sa.Column("participant_count", sa.Integer(), nullable=False),
            sa.Column("total_nm_sum", sa.Float(), nullable=False),
            sa.Column("total_nm_sq_sum", sa.Float(), nullable=False),
            sa.Column("rataan", sa.Float(), nullable=True),
            sa.Column("sb", sa.Float(), nullable=True),
            sa.Column("min_nm", sa.Integer(), nullable=True),
            sa.Column("max_nm", sa.Integer(), nullable=True),
            sa.Column("last_calculated", sa.DateTime(timezone=True), nullable=True),
            sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.ForeignKeyConstraint(["website_id"], ["websites.id"], ondelete="CASCADE", onupdate="CASCADE"),
            sa.ForeignKeyConstraint(
                ["website_id", "tryout_id"],
                ["tryouts.website_id", "tryouts.tryout_id"],
                name="fk_tryout_stats_tryout",
                ondelete="CASCADE",
                onupdate="CASCADE",
            ),
            sa.PrimaryKeyConstraint("id"),
            sa.CheckConstraint("participant_count >= 0", name="ck_participant_count_non_negative"),
            sa.CheckConstraint("min_nm IS NULL OR (min_nm >= 0 AND min_nm <= 1000)", name="ck_min_nm_range"),
            sa.CheckConstraint("max_nm IS NULL OR (max_nm >= 0 AND max_nm <= 1000)", name="ck_max_nm_range"),
            sa.CheckConstraint(
                "min_nm IS NULL OR max_nm IS NULL OR min_nm <= max_nm",
                name="ck_min_max_nm_order",
            ),
        )
        op.create_index("ix_tryout_stats_website_id", "tryout_stats", ["website_id"], unique=False)
        op.create_index("ix_tryout_stats_tryout_id", "tryout_stats", ["tryout_id"], unique=False)
        op.create_index(
            "ix_tryout_stats_website_id_tryout_id",
            "tryout_stats",
            ["website_id", "tryout_id"],
            unique=True,
        )

    if not _table_exists("user_answers"):
        op.create_table(
            "user_answers",
            sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
            sa.Column("session_id", sa.String(length=255), nullable=False),
            sa.Column("wp_user_id", sa.String(length=255), nullable=False),
            sa.Column("website_id", sa.Integer(), nullable=False),
            sa.Column("tryout_id", sa.String(length=255), nullable=False),
            sa.Column("item_id", sa.Integer(), nullable=False),
            sa.Column("response", sa.String(length=10), nullable=False),
            sa.Column("is_correct", sa.Boolean(), nullable=False),
            sa.Column("time_spent", sa.Integer(), nullable=False),
            sa.Column("scoring_mode_used", sa.String(length=50), nullable=False),
            sa.Column("bobot_earned", sa.Float(), nullable=False),
            sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=False),
            sa.ForeignKeyConstraint(["session_id"], ["sessions.session_id"], ondelete="CASCADE", onupdate="CASCADE"),
            sa.ForeignKeyConstraint(["website_id"], ["websites.id"], ondelete="CASCADE", onupdate="CASCADE"),
            sa.ForeignKeyConstraint(["item_id"], ["items.id"], ondelete="CASCADE", onupdate="CASCADE"),
            sa.PrimaryKeyConstraint("id"),
            sa.CheckConstraint("time_spent >= 0", name="ck_time_spent_non_negative"),
            sa.CheckConstraint("bobot_earned >= 0", name="ck_bobot_earned_non_negative"),
        )
        op.create_index("ix_user_answers_session_id", "user_answers", ["session_id"], unique=False)
        op.create_index("ix_user_answers_wp_user_id", "user_answers", ["wp_user_id"], unique=False)
        op.create_index("ix_user_answers_website_id", "user_answers", ["website_id"], unique=False)
        op.create_index("ix_user_answers_tryout_id", "user_answers", ["tryout_id"], unique=False)
        op.create_index("ix_user_answers_item_id", "user_answers", ["item_id"], unique=False)
        op.create_index(
            "ix_user_answers_session_id_item_id",
            "user_answers",
            ["session_id", "item_id"],
            unique=True,
        )


def downgrade() -> None:
    if _table_exists("user_answers"):
        op.drop_table("user_answers")
    if _table_exists("tryout_stats"):
        op.drop_table("tryout_stats")
    if _table_exists("sessions"):
        op.drop_table("sessions")
    if _table_exists("items"):
        op.drop_table("items")
    if _table_exists("users"):
        op.drop_table("users")
    if _table_exists("tryouts"):
        op.drop_table("tryouts")
    if _table_exists("websites"):
        op.drop_table("websites")
|
||||
270
app/admin.py
@@ -5,18 +5,29 @@ Provides admin panel for managing tryouts, items, sessions, users, and tryout st
Includes custom actions for calibration, AI generation toggle, and normalization reset.
"""

import secrets
import uuid
from dataclasses import dataclass
from typing import Any, Dict, Optional

from fastapi import Request
import aioredis
from fastapi import Depends, Form, HTTPException, Request
from fastapi_admin import constants
from fastapi_admin.app import app as admin_app
from fastapi_admin.depends import get_current_admin, get_resources
from fastapi_admin.providers import Provider
from fastapi_admin.resources import (
Field,
Link,
Model,
)
from fastapi_admin.template import templates
from fastapi_admin.widgets import displays, inputs
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.responses import RedirectResponse
from starlette.status import HTTP_303_SEE_OTHER, HTTP_401_UNAUTHORIZED

from app.core.config import get_settings
from app.database import get_db
@@ -29,77 +40,175 @@ settings = get_settings()
# Authentication Provider
# =============================================================================

class AdminAuthProvider:
"""
Authentication provider for FastAPI Admin.
@dataclass
class AdminPrincipal:
"""Minimal admin user object expected by fastapi-admin templates."""

Supports two modes:
1. WordPress JWT token integration (production)
2. Basic auth for testing (development)
pk: str
username: str
avatar: str = ""


class EnvCredentialProvider(Provider):
"""
FastAPI-Admin provider backed by env credentials and Redis session tokens.

Compatible with fastapi-admin 1.0.x provider API without requiring
Tortoise admin models.
"""

async def login(
name = "env_credential_provider"
access_token = "access_token"

def __init__(
self,
username: str,
password: str,
) -> Optional[str]:
"""
Authenticate user and return token.
login_path: str = "/login",
logout_path: str = "/logout",
login_title: str = "Admin Login",
login_logo_url: str | None = None,
expire_seconds: int = 3600,
template: str = "providers/login/login.html",
) -> None:
self.username = username
self.password = password
self.login_path = login_path
self.logout_path = logout_path
self.login_title = login_title
self.login_logo_url = login_logo_url
self.expire_seconds = expire_seconds
self.template = template

Args:
username: Username
password: Password
async def register(self, app: "FastAPIAdmin") -> None:
await super().register(app)
app.get(self.login_path)(self.login_view)
app.post(self.login_path)(self.login)
app.get(self.logout_path)(self.logout)
app.get("/password")(self.password_view)
app.post("/password")(self.password)
app.add_middleware(BaseHTTPMiddleware, dispatch=self.authenticate)

Returns:
Access token if authentication successful, None otherwise
"""
# Development mode: basic auth
if settings.ENVIRONMENT == "development":
# Allow admin/admin or admin/password for testing
if (username == "admin" and password in ["admin", "password"]):
return f"dev_token_{username}"
async def login_view(self, request: Request):
return templates.TemplateResponse(
self.template,
context={
"request": request,
"login_logo_url": self.login_logo_url,
"login_title": self.login_title,
},
)

# Production mode: WordPress JWT token validation
# For now, return None - implement WordPress integration when needed
return None
async def login(
self,
request: Request,
username: str = Form(...),
password: str = Form(...),
remember_me: Optional[str] = Form(None),
):
if not (
secrets.compare_digest(username, self.username)
and secrets.compare_digest(password, self.password)
):
return templates.TemplateResponse(
self.template,
status_code=HTTP_401_UNAUTHORIZED,
context={
"request": request,
"error": "Invalid username or password",
"login_logo_url": self.login_logo_url,
"login_title": self.login_title,
},
)

async def logout(self, request: Request) -> bool:
"""
Logout user.
response = RedirectResponse(url=request.app.admin_path, status_code=HTTP_303_SEE_OTHER)
expire = self.expire_seconds
if remember_me == "on":
expire = max(self.expire_seconds, 3600 * 24 * 30)
response.set_cookie("remember_me", "on")
else:
response.delete_cookie("remember_me")

Args:
request: FastAPI request
token = uuid.uuid4().hex
response.set_cookie(
self.access_token,
token,
expires=expire,
path=request.app.admin_path,
httponly=True,
)
await request.app.redis.set(constants.LOGIN_USER.format(token=token), self.username, ex=expire)
return response

Returns:
True if logout successful
"""
return True
async def authenticate(self, request: Request, call_next: RequestResponseEndpoint):
token = request.cookies.get(self.access_token)
path = request.scope["path"]
admin = None

async def get_current_user(self, request: Request) -> Optional[dict]:
"""
Get current authenticated user.
if token:
key = constants.LOGIN_USER.format(token=token)
username = await request.app.redis.get(key)
if username:
admin = AdminPrincipal(pk=str(username), username=str(username))

Args:
request: FastAPI request
request.state.admin = admin

Returns:
User data if authenticated, None otherwise
"""
token = request.cookies.get("admin_token") or request.headers.get("Authorization")
if path.endswith(self.login_path) and admin:
return RedirectResponse(url=request.app.admin_path, status_code=HTTP_303_SEE_OTHER)

if not token:
return None
return await call_next(request)

# Development mode: validate dev token
if settings.ENVIRONMENT == "development" and token.startswith("dev_token_"):
username = token.replace("dev_token_", "")
return {
"id": 1,
"username": username,
"is_superuser": True,
}
async def logout(self, request: Request):
response = RedirectResponse(
url=request.app.admin_path + self.login_path,
status_code=HTTP_303_SEE_OTHER,
)
token = request.cookies.get(self.access_token)
if token:
await request.app.redis.delete(constants.LOGIN_USER.format(token=token))
response.delete_cookie(self.access_token, path=request.app.admin_path)
return response

return None
async def password_view(self, request: Request, resources=Depends(get_resources)):
return templates.TemplateResponse(
"providers/login/password.html",
context={"request": request, "resources": resources},
)

async def password(
self,
request: Request,
old_password: str = Form(...),
new_password: str = Form(...),
re_new_password: str = Form(...),
admin: AdminPrincipal = Depends(get_current_admin),
resources=Depends(get_resources),
):
_ = admin
if not secrets.compare_digest(old_password, self.password):
return templates.TemplateResponse(
"providers/login/password.html",
context={
"request": request,
"resources": resources,
"error": "Old password is incorrect",
},
)
if new_password != re_new_password:
return templates.TemplateResponse(
"providers/login/password.html",
context={
"request": request,
"resources": resources,
"error": "New passwords do not match",
},
)

# Password is env-configured and immutable at runtime.
raise HTTPException(
status_code=400,
detail="Password rotation via UI is disabled. Update ADMIN_PASSWORD in environment.",
)


# =============================================================================
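The login handler above checks both credential fields with `secrets.compare_digest` before rejecting, so response time does not reveal how much of the submitted credential matched. A stdlib-only illustration (the function name here is ours, not the provider's):

```python
import secrets


def credentials_match(username: str, password: str, expected_user: str, expected_pw: str) -> bool:
    # compare_digest runs in time independent of where the inputs first
    # differ, blunting timing side-channels against the login form.
    user_ok = secrets.compare_digest(username, expected_user)
    pw_ok = secrets.compare_digest(password, expected_pw)
    return user_ok and pw_ok


print(credentials_match("admin", "change-me", "admin", "change-me"))  # True
print(credentials_match("admin", "change-mf", "admin", "change-me"))  # False
```

Evaluating both digests before combining them also avoids short-circuiting on the username check.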
@@ -604,7 +713,8 @@ def create_admin_app() -> Any:
# admin_app.settings.site_description = "Admin Panel for Adaptive Question Bank System"

# Register authentication provider
# admin_app.settings.auth_provider = AdminAuthProvider()
# NOTE: fastapi-admin 1.0.4 requires provider registration via app.configure(...).
# Keep provider implementation here for future integration during startup configure.

# Register model resources
admin_app.register(TryoutResource)
@@ -621,5 +731,55 @@ def create_admin_app() -> Any:
return admin_app


_admin_configured = False
_admin_redis = None


async def configure_admin_app() -> None:
"""Configure fastapi-admin runtime (redis + auth provider)."""
global _admin_configured, _admin_redis

if _admin_configured:
return

if not settings.ADMIN_USERNAME or not settings.ADMIN_PASSWORD:
raise RuntimeError(
"ENABLE_ADMIN=true requires ADMIN_USERNAME and ADMIN_PASSWORD to be set."
)

_admin_redis = aioredis.from_url(
settings.REDIS_URL,
encoding="utf-8",
decode_responses=True,
)

provider = EnvCredentialProvider(
username=settings.ADMIN_USERNAME,
password=settings.ADMIN_PASSWORD,
login_title="IRT Bank Soal Admin",
expire_seconds=settings.ADMIN_SESSION_EXPIRE_SECONDS,
)

await admin_app.configure(
redis=_admin_redis,
admin_path="/admin",
providers=[provider],
)
_admin_configured = True


async def shutdown_admin_app() -> None:
"""Close admin redis client cleanly."""
global _admin_redis

if _admin_redis is None:
return

try:
await _admin_redis.close()
finally:
_admin_redis = None


# Export admin app for mounting in main.py
admin = create_admin_app()
@@ -35,6 +35,22 @@ class Settings(BaseSettings):
ENVIRONMENT: Literal["development", "staging", "production"] = Field(
default="development", description="Environment name"
)
ENABLE_ADMIN: bool = Field(
default=False,
description="Enable admin UI and admin-only API routes",
)
ADMIN_USERNAME: str = Field(
default="",
description="Admin panel username",
)
ADMIN_PASSWORD: str = Field(
default="",
description="Admin panel password (plain env value)",
)
ADMIN_SESSION_EXPIRE_SECONDS: int = Field(
default=3600,
description="Admin session lifetime in seconds",
)

# OpenRouter (AI Generation)
OPENROUTER_API_KEY: str = Field(
21
app/main.py
@@ -16,7 +16,6 @@ from typing import AsyncGenerator
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.admin import admin as admin_app
from app.core.config import get_settings
from app.database import close_db, init_db
from app.routers import (
@@ -41,10 +40,18 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
"""
# Startup: Initialize database
await init_db()
if settings.ENABLE_ADMIN:
from app.admin import configure_admin_app

await configure_admin_app()

yield

# Shutdown: Close database connections
if settings.ENABLE_ADMIN:
from app.admin import shutdown_admin_app

await shutdown_admin_app()
await close_db()


@@ -162,20 +169,22 @@ app.include_router(
wordpress_router,
prefix=f"{settings.API_V1_STR}",
)
app.include_router(
ai_router,
prefix=f"{settings.API_V1_STR}",
)
app.include_router(
reports_router,
prefix=f"{settings.API_V1_STR}",
)

if settings.ENABLE_ADMIN:
from app.admin import admin as admin_app

app.include_router(
ai_router,
prefix=f"{settings.API_V1_STR}",
)

# Mount FastAPI Admin panel
app.mount("/admin", admin_app)


# Include admin API router for custom actions
app.include_router(
admin_router,
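The lifespan hunks above order admin setup after `init_db()` and admin teardown before `close_db()`, so the admin panel never outlives the database it reads from. A stdlib-only sketch of that ordering, with the real coroutines replaced by event markers (all names here are illustrative):

```python
import asyncio
from contextlib import asynccontextmanager

ENABLE_ADMIN = True  # stands in for settings.ENABLE_ADMIN
events: list[str] = []


@asynccontextmanager
async def lifespan(app):
    events.append("init_db")              # await init_db()
    if ENABLE_ADMIN:
        events.append("configure_admin")  # await configure_admin_app()
    yield
    if ENABLE_ADMIN:
        events.append("shutdown_admin")   # await shutdown_admin_app()
    events.append("close_db")             # await close_db()


async def main():
    async with lifespan(None):
        events.append("serving")


asyncio.run(main())
print(events)
```

With `ENABLE_ADMIN = False`, only `init_db`, `serving`, and `close_db` remain, matching the gated imports in `main.py`.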
@@ -14,11 +14,13 @@ from sqlalchemy import (
DateTime,
Float,
ForeignKey,
ForeignKeyConstraint,
Index,
Integer,
JSON,
String,
Text,
func,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship

@@ -156,13 +158,13 @@ class Item(Base):

# Timestamps
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default="NOW()"
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True),
nullable=False,
server_default="NOW()",
onupdate="NOW()",
server_default=func.now(),
onupdate=func.now(),
)

# Relationships
@@ -188,6 +190,13 @@ class Item(Base):

# Constraints and indexes
__table_args__ = (
ForeignKeyConstraint(
["website_id", "tryout_id"],
["tryouts.website_id", "tryouts.tryout_id"],
name="fk_items_tryout",
ondelete="CASCADE",
onupdate="CASCADE",
),
Index(
"ix_items_tryout_id_website_id_slot",
"tryout_id",

@@ -13,9 +13,11 @@ from sqlalchemy import (
DateTime,
Float,
ForeignKey,
ForeignKeyConstraint,
Index,
Integer,
String,
func,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship

@@ -82,7 +84,7 @@ class Session(Base):

# Timestamps
start_time: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default="NOW()"
DateTime(timezone=True), nullable=False, server_default=func.now()
)
end_time: Mapped[Union[datetime, None]] = mapped_column(
DateTime(timezone=True), nullable=True, comment="Session end timestamp"
@@ -144,21 +146,27 @@ class Session(Base):

# Timestamps
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default="NOW()"
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True),
nullable=False,
server_default="NOW()",
onupdate="NOW()",
server_default=func.now(),
onupdate=func.now(),
)

# Relationships
user: Mapped["User"] = relationship(
"User", back_populates="sessions", lazy="selectin"
"User",
back_populates="sessions",
lazy="selectin",
overlaps="tryout,sessions",
)
tryout: Mapped["Tryout"] = relationship(
"Tryout", back_populates="sessions", lazy="selectin"
"Tryout",
back_populates="sessions",
lazy="selectin",
overlaps="user",
)
user_answers: Mapped[list["UserAnswer"]] = relationship(
"UserAnswer", back_populates="session", lazy="selectin", cascade="all, delete-orphan"
@@ -166,6 +174,20 @@ class Session(Base):

# Constraints and indexes
__table_args__ = (
ForeignKeyConstraint(
["website_id", "tryout_id"],
["tryouts.website_id", "tryouts.tryout_id"],
name="fk_sessions_tryout",
ondelete="CASCADE",
onupdate="CASCADE",
),
ForeignKeyConstraint(
["wp_user_id", "website_id"],
["users.wp_user_id", "users.website_id"],
name="fk_sessions_user",
ondelete="CASCADE",
onupdate="CASCADE",
),
Index("ix_sessions_wp_user_id", "wp_user_id"),
Index("ix_sessions_website_id", "website_id"),
Index("ix_sessions_tryout_id", "tryout_id"),

@@ -7,7 +7,17 @@ Represents tryout exams with configurable scoring, selection, and normalization
from datetime import datetime
from typing import Literal, Union

from sqlalchemy import Boolean, CheckConstraint, DateTime, Float, ForeignKey, Index, Integer, String
from sqlalchemy import (
Boolean,
CheckConstraint,
DateTime,
Float,
ForeignKey,
Integer,
String,
UniqueConstraint,
func,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship

from app.database import Base
@@ -146,13 +156,13 @@ class Tryout(Base):

# Timestamps
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default="NOW()"
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True),
nullable=False,
server_default="NOW()",
onupdate="NOW()",
server_default=func.now(),
onupdate=func.now(),
)

# Relationships
@@ -163,7 +173,11 @@ class Tryout(Base):
"Item", back_populates="tryout", lazy="selectin", cascade="all, delete-orphan"
)
sessions: Mapped[list["Session"]] = relationship(
"Session", back_populates="tryout", lazy="selectin", cascade="all, delete-orphan"
"Session",
back_populates="tryout",
lazy="selectin",
cascade="all, delete-orphan",
overlaps="user",
)
stats: Mapped["TryoutStats"] = relationship(
"TryoutStats", back_populates="tryout", lazy="selectin", uselist=False
@@ -171,8 +185,10 @@ class Tryout(Base):

# Constraints and indexes
__table_args__ = (
Index(
"ix_tryouts_website_id_tryout_id", "website_id", "tryout_id", unique=True
UniqueConstraint(
"website_id",
"tryout_id",
name="uq_tryouts_website_id_tryout_id",
),
CheckConstraint("min_sample_for_dynamic > 0", "ck_min_sample_positive"),
CheckConstraint("static_rataan > 0", "ck_static_rataan_positive"),

@@ -7,7 +7,17 @@ Maintains running statistics for dynamic normalization and reporting.
from datetime import datetime
from typing import Union

from sqlalchemy import CheckConstraint, DateTime, Float, ForeignKey, Index, Integer, String
from sqlalchemy import (
CheckConstraint,
DateTime,
Float,
ForeignKey,
ForeignKeyConstraint,
Index,
Integer,
String,
func,
)
from sqlalchemy.orm import Mapped, mapped_column, relationship

from app.database import Base
@@ -107,13 +117,13 @@ class TryoutStats(Base):
comment="Timestamp of last statistics update",
)
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default="NOW()"
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True),
nullable=False,
server_default="NOW()",
onupdate="NOW()",
server_default=func.now(),
onupdate=func.now(),
)

# Relationships
@@ -123,6 +133,13 @@ class TryoutStats(Base):

# Constraints and indexes
__table_args__ = (
ForeignKeyConstraint(
["website_id", "tryout_id"],
["tryouts.website_id", "tryouts.tryout_id"],
name="fk_tryout_stats_tryout",
ondelete="CASCADE",
onupdate="CASCADE",
),
Index(
"ix_tryout_stats_website_id_tryout_id",
"website_id",

@@ -6,7 +6,7 @@ Represents users from WordPress that can take tryouts.

from datetime import datetime

from sqlalchemy import DateTime, ForeignKey, Index, String
from sqlalchemy import DateTime, ForeignKey, Index, String, UniqueConstraint, func
from sqlalchemy.orm import Mapped, mapped_column, relationship

from app.database import Base
@@ -31,7 +31,7 @@ class User(Base):
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)

# WordPress user ID (unique within website context)
wp_user_id: Mapped[int] = mapped_column(
wp_user_id: Mapped[str] = mapped_column(
String(255), nullable=False, index=True, comment="WordPress user ID"
)

@@ -44,13 +44,13 @@ class User(Base):

# Timestamps
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default="NOW()"
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True),
nullable=False,
server_default="NOW()",
onupdate="NOW()",
server_default=func.now(),
onupdate=func.now(),
)

# Relationships
@@ -58,12 +58,20 @@ class User(Base):
"Website", back_populates="users", lazy="selectin"
)
sessions: Mapped[list["Session"]] = relationship(
"Session", back_populates="user", lazy="selectin", cascade="all, delete-orphan"
"Session",
back_populates="user",
lazy="selectin",
cascade="all, delete-orphan",
overlaps="sessions,tryout",
)

# Indexes
__table_args__ = (
Index("ix_users_wp_user_id_website_id", "wp_user_id", "website_id", unique=True),
UniqueConstraint(
"wp_user_id",
"website_id",
name="uq_users_wp_user_id_website_id",
),
Index("ix_users_website_id", "website_id"),
)


@@ -7,7 +7,7 @@ Represents a student's response to a single question with scoring metadata.
from datetime import datetime
from typing import Literal, Union

from sqlalchemy import Boolean, CheckConstraint, DateTime, Float, ForeignKey, Index, Integer, String
from sqlalchemy import Boolean, CheckConstraint, DateTime, Float, ForeignKey, Index, Integer, String, func
from sqlalchemy.orm import Mapped, mapped_column, relationship

from app.database import Base
@@ -94,13 +94,13 @@ class UserAnswer(Base):

# Timestamps
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default="NOW()"
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True),
nullable=False,
server_default="NOW()",
onupdate="NOW()",
server_default=func.now(),
onupdate=func.now(),
)

# Relationships

@@ -6,7 +6,7 @@ Represents WordPress websites that use the IRT Bank Soal system.

from datetime import datetime

from sqlalchemy import DateTime, String
from sqlalchemy import DateTime, String, func
from sqlalchemy.orm import Mapped, mapped_column, relationship

from app.database import Base
@@ -48,13 +48,13 @@ class Website(Base):

# Timestamps
created_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True), nullable=False, server_default="NOW()"
DateTime(timezone=True), nullable=False, server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True),
nullable=False,
server_default="NOW()",
onupdate="NOW()",
server_default=func.now(),
onupdate=func.now(),
)

# Relationships
|
||||
async def create_session(
|
||||
request: SessionCreateRequest,
|
||||
db: AsyncSession = Depends(get_db),
|
||||
website_id: int = Depends(get_website_id_from_header),
|
||||
) -> SessionResponse:
|
||||
"""
|
||||
Create a new session.
|
||||
@@ -355,10 +356,19 @@ async def create_session(
|
||||
Raises:
|
||||
HTTPException: If tryout not found or session already exists
|
||||
"""
|
||||
if request.website_id != website_id:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_400_BAD_REQUEST,
|
||||
detail=(
|
||||
"Website mismatch between payload and X-Website-ID header: "
|
||||
f"{request.website_id} != {website_id}"
|
||||
),
|
||||
)
|
||||
|
||||
# Verify tryout exists
|
||||
tryout_result = await db.execute(
|
||||
select(Tryout).where(
|
||||
Tryout.website_id == request.website_id,
|
||||
Tryout.website_id == website_id,
|
||||
Tryout.tryout_id == request.tryout_id,
|
||||
)
|
||||
)
|
||||
@@ -367,7 +377,7 @@ async def create_session(
|
||||
if tryout is None:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_404_NOT_FOUND,
|
||||
detail=f"Tryout {request.tryout_id} not found for website {request.website_id}",
|
||||
detail=f"Tryout {request.tryout_id} not found for website {website_id}",
|
||||
)
|
||||
|
||||
# Check if session already exists
|
||||
@@ -386,7 +396,7 @@ async def create_session(
|
||||
session = Session(
|
||||
session_id=request.session_id,
|
||||
wp_user_id=request.wp_user_id,
|
||||
website_id=request.website_id,
|
||||
website_id=website_id,
|
||||
tryout_id=request.tryout_id,
|
||||
scoring_mode_used=request.scoring_mode,
|
||||
start_time=datetime.now(timezone.utc),
|
||||
|
||||
@@ -10,7 +10,7 @@ Endpoints:
from typing import List, Optional

from fastapi import APIRouter, Depends, HTTPException, Header, status
from sqlalchemy import select, func
from sqlalchemy import Integer, cast, func, select
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import selectinload

@@ -292,7 +292,7 @@ async def get_calibration_status(
stats_result = await db.execute(
select(
func.count().label("total_items"),
func.sum(func.cast(Item.calibrated, type_=func.INTEGER)).label("calibrated_items"),
func.sum(cast(Item.calibrated, Integer)).label("calibrated_items"),
func.avg(Item.calibration_sample_size).label("avg_sample_size"),
).where(
Item.website_id == website_id,
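The replacement of `func.cast(Item.calibrated, type_=func.INTEGER)` above is needed because `func` is a generic SQL-function factory: `func.INTEGER` produces a function-call expression rather than a type object, and `type_=` only annotates a function's declared return type instead of emitting a `CAST`. The top-level `cast()` with a real type compiles correctly, which can be checked without a database (the table shape below is illustrative):

```python
from sqlalchemy import Boolean, Column, Integer, MetaData, Table, cast, func, select

# A stand-in for the Item model's boolean `calibrated` column.
items = Table("items", MetaData(), Column("calibrated", Boolean))

stmt = select(func.sum(cast(items.c.calibrated, Integer)).label("calibrated_items"))
sql = str(stmt)
print(sql)  # the compiled SQL contains "sum(CAST(items.calibrated AS INTEGER))"
```

Summing the cast booleans counts the `True` rows, which is what `calibrated_items` represents.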
@@ -14,7 +14,7 @@ import math
from datetime import datetime, timezone
from typing import Optional

from sqlalchemy import func, select
from sqlalchemy import Integer, cast, func, select
from sqlalchemy.ext.asyncio import AsyncSession

from app.models.item import Item
@@ -190,7 +190,7 @@ async def calculate_ctt_p_for_item(
result = await db.execute(
select(
func.count().label("total"),
func.sum(func.cast(UserAnswer.is_correct, type_=func.INTEGER)).label("correct"),
func.sum(cast(UserAnswer.is_correct, Integer)).label("correct"),
).where(UserAnswer.item_id == item_id)
)
row = result.first()
@@ -308,7 +308,7 @@ async def get_normalization_params(
Tryout.tryout_id == tryout_id,
)
)
row = result.scalar_one_or_none()
row = result.one_or_none()

if row is None:
raise ValueError(
@@ -352,7 +352,7 @@ async def get_normalization_params(
Tryout.tryout_id == tryout_id,
)
)
row = result.scalar_one_or_none()
row = result.one_or_none()
if row is None:
raise ValueError(
f"Tryout {tryout_id} not found for website {website_id}"
@@ -369,7 +369,7 @@ async def get_normalization_params(
Tryout.tryout_id == tryout_id,
)
)
row = result.scalar_one_or_none()
row = result.one_or_none()
if row is None:
raise ValueError(
f"Tryout {tryout_id} not found for website {website_id}"
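The switch to `one_or_none()` above matters because these normalization queries select several columns at once: `Result.scalar_one_or_none()` returns only the first column of the matched row and silently discards the rest, while `Result.one_or_none()` yields the full `Row`. A small Core demonstration (the table and values are illustrative):

```python
from sqlalchemy import Column, Float, Integer, MetaData, Table, create_engine, insert, select

metadata = MetaData()
tryouts = Table(
    "tryouts",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("static_rataan", Float),
    Column("static_sb", Float),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.connect() as conn:
    conn.execute(insert(tryouts).values(id=1, static_rataan=500.0, static_sb=100.0))
    result = conn.execute(select(tryouts.c.static_rataan, tryouts.c.static_sb))
    # one_or_none() keeps the whole Row; scalar_one_or_none() would have
    # returned just 500.0 and dropped static_sb entirely.
    row = result.one_or_none()
    print(tuple(row))  # (500.0, 100.0)
```

Both variants still return `None` when no row matches, so the existing `if row is None` guards keep working.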
@@ -18,7 +18,7 @@ from dataclasses import dataclass, field
|
||||
import logging
|
||||
|
||||
import pandas as pd
|
||||
from sqlalchemy import select, func, and_, or_
|
||||
from sqlalchemy import Integer, and_, cast, func, or_, select
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from sqlalchemy.orm import selectinload
|
||||
|
||||
@@ -415,7 +415,7 @@ async def generate_item_analysis_report(
|
||||
resp_result = await db.execute(
|
||||
select(
|
||||
func.count().label("total"),
|
||||
func.sum(func.cast(UserAnswer.is_correct, type_=func.INTEGER)).label("correct")
|
||||
func.sum(cast(UserAnswer.is_correct, Integer)).label("correct")
|
||||
).where(UserAnswer.item_id == item.id)
|
||||
)
|
||||
resp_stats = resp_result.first()
|
||||
@@ -678,7 +678,7 @@ async def generate_tryout_comparison_report(
|
||||
cal_result = await db.execute(
|
||||
select(
|
||||
func.count().label("total"),
|
||||
func.sum(func.cast(Item.calibrated, type_=func.INTEGER)).label("calibrated")
|
||||
func.sum(cast(Item.calibrated, Integer)).label("calibrated")
|
||||
).where(
|
||||
Item.tryout_id == tryout_id,
|
||||
Item.website_id == website_id,
|
||||
@@ -704,15 +704,56 @@ async def generate_tryout_comparison_report(
     if tryout:
         date_str = tryout.created_at.strftime("%Y-%m-%d")
 
+        session_result = await db.execute(
+            select(
+                func.count(Session.id).label("participant_count"),
+                func.avg(Session.NM).label("avg_nm"),
+                func.avg(Session.NN).label("avg_nn"),
+                func.avg(Session.theta).label("avg_theta"),
+                func.stddev_pop(Session.NM).label("std_nm"),
+            ).where(
+                Session.tryout_id == tryout_id,
+                Session.website_id == website_id,
+                Session.is_completed.is_(True),
+            )
+        )
+        session_stats = session_result.first()
+
+        participant_count = (
+            int(session_stats.participant_count)
+            if session_stats and session_stats.participant_count
+            else (stats.participant_count if stats else 0)
+        )
+        avg_nm = (
+            round(float(session_stats.avg_nm), 2)
+            if session_stats and session_stats.avg_nm is not None
+            else (round(float(stats.rataan), 2) if stats and stats.rataan is not None else None)
+        )
+        avg_nn = (
+            round(float(session_stats.avg_nn), 2)
+            if session_stats and session_stats.avg_nn is not None
+            else None
+        )
+        avg_theta = (
+            round(float(session_stats.avg_theta), 4)
+            if session_stats and session_stats.avg_theta is not None
+            else None
+        )
+        std_nm = (
+            round(float(session_stats.std_nm), 2)
+            if session_stats and session_stats.std_nm is not None
+            else (round(float(stats.sb), 2) if stats and stats.sb is not None else None)
+        )
+
         record = TryoutComparisonRecord(
             tryout_id=tryout_id,
             date=date_str,
             subject=subject,
-            participant_count=stats.participant_count if stats else 0,
-            avg_nm=round(stats.rataan, 2) if stats and stats.rataan else None,
-            avg_nn=round(stats.rataan + 500, 2) if stats and stats.rataan else None,
-            avg_theta=None,  # Would need to calculate from sessions
-            std_nm=round(stats.sb, 2) if stats and stats.sb else None,
+            participant_count=participant_count,
+            avg_nm=avg_nm,
+            avg_nn=avg_nn,
+            avg_theta=avg_theta,
+            std_nm=std_nm,
            calibration_percentage=round(cal_percentage, 2),
         )
         comparison_records.append(record)
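For context on the `cast` fix above: `func.cast(..., type_=func.INTEGER)` is not a valid SQLAlchemy API (`func` only fabricates generic SQL function calls, and `func.INTEGER` is not a type), whereas `sqlalchemy.cast(col, Integer)` compiles to a proper `CAST`. A minimal, database-free sketch using a hypothetical stand-in for the `UserAnswer` table:

```python
# Sketch: build and compile the corrected aggregate without a database.
# The table here is a hypothetical stand-in, not the app's real model.
from sqlalchemy import Boolean, Column, Integer, MetaData, Table, cast, func, select

metadata = MetaData()
user_answer = Table(
    "user_answer",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("is_correct", Boolean),
)

stmt = select(
    func.count().label("total"),
    # cast(col, Integer) lets SUM() count True values as 1 across backends.
    func.sum(cast(user_answer.c.is_correct, Integer)).label("correct"),
)

# Compiling proves the expression tree is well-formed SQL.
compiled = str(stmt)
```

Compiling to a string is a cheap smoke test for expression-construction bugs like this one, since they otherwise surface only when the query first executes.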
12
tests/test_model_mappings.py
Normal file
@@ -0,0 +1,12 @@
+from sqlalchemy.orm import configure_mappers
+
+
+def test_sqlalchemy_mappers_configure_without_join_errors():
+    """
+    Ensure relationship joins are fully resolvable.
+
+    This catches missing FK/primaryjoin regressions early.
+    """
+    import app.models  # noqa: F401
+
+    configure_mappers()
@@ -1,275 +1,77 @@
 #!/usr/bin/env python3
 """
 Test script for normalization calculations.
 
 This script tests the normalization functions to ensure they work correctly
 without requiring database connections.
 """
 
-import sys
+import math
 import os
+import sys
 
-# Add the project root to the path
-sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
+import pytest
+
+# Ensure project root is importable when tests run in isolated environments.
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
 
 from app.services.normalization import apply_normalization
 
 
-def test_apply_normalization():
-    """Test the apply_normalization function."""
-    print("Testing apply_normalization function...")
-    print("=" * 60)
-
-    # Test case 1: Normal normalization (NM=500, rataan=500, sb=100)
-    nm1 = 500
-    rataan1 = 500
-    sb1 = 100
-    nn1 = apply_normalization(nm1, rataan1, sb1)
-    expected1 = 500
-    print(f"Test 1: NM={nm1}, rataan={rataan1}, sb={sb1}")
-    print(f"  Expected NN: {expected1}")
-    print(f"  Actual NN: {nn1}")
-    print(f"  Status: {'PASS' if nn1 == expected1 else 'FAIL'}")
-    print()
-
-    # Test case 2: High score (NM=600, rataan=500, sb=100)
-    nm2 = 600
-    rataan2 = 500
-    sb2 = 100
-    nn2 = apply_normalization(nm2, rataan2, sb2)
-    expected2 = 600
-    print(f"Test 2: NM={nm2}, rataan={rataan2}, sb={sb2}")
-    print(f"  Expected NN: {expected2}")
-    print(f"  Actual NN: {nn2}")
-    print(f"  Status: {'PASS' if nn2 == expected2 else 'FAIL'}")
-    print()
-
-    # Test case 3: Low score (NM=400, rataan=500, sb=100)
-    nm3 = 400
-    rataan3 = 500
-    sb3 = 100
-    nn3 = apply_normalization(nm3, rataan3, sb3)
-    expected3 = 400
-    print(f"Test 3: NM={nm3}, rataan={rataan3}, sb={sb3}")
-    print(f"  Expected NN: {expected3}")
-    print(f"  Actual NN: {nn3}")
-    print(f"  Status: {'PASS' if nn3 == expected3 else 'FAIL'}")
-    print()
-
-    # Test case 4: Edge case - maximum NM
-    nm4 = 1000
-    rataan4 = 500
-    sb4 = 100
-    nn4 = apply_normalization(nm4, rataan4, sb4)
-    expected4 = 1000
-    print(f"Test 4: NM={nm4}, rataan={rataan4}, sb={sb4}")
-    print(f"  Expected NN: {expected4}")
-    print(f"  Actual NN: {nn4}")
-    print(f"  Status: {'PASS' if nn4 == expected4 else 'FAIL'}")
-    print()
-
-    # Test case 5: Edge case - minimum NM
-    nm5 = 0
-    rataan5 = 500
-    sb5 = 100
-    nn5 = apply_normalization(nm5, rataan5, sb5)
-    expected5 = 0
-    print(f"Test 5: NM={nm5}, rataan={rataan5}, sb={sb5}")
-    print(f"  Expected NN: {expected5}")
-    print(f"  Actual NN: {nn5}")
-    print(f"  Status: {'PASS' if nn5 == expected5 else 'FAIL'}")
-    print()
-
-    # Test case 6: Error case - invalid NM (above max)
-    try:
-        nm6 = 1200  # Above valid range
-        rataan6 = 500
-        sb6 = 100
-        nn6 = apply_normalization(nm6, rataan6, sb6)
-        print(f"Test 6: NM={nm6}, rataan={rataan6}, sb={sb6} (should raise ValueError)")
-        print(f"  Status: FAIL - Should have raised ValueError")
-    except ValueError as e:
-        print(f"Test 6: NM={nm6}, rataan={rataan6}, sb={sb6} (should raise ValueError)")
-        print(f"  Error: {e}")
-        print(f"  Status: PASS - Correctly raised ValueError")
-    print()
-
-    # Test case 7: Error case - invalid NM (below min)
-    try:
-        nm7 = -100  # Below valid range
-        rataan7 = 500
-        sb7 = 100
-        nn7 = apply_normalization(nm7, rataan7, sb7)
-        print(f"Test 7: NM={nm7}, rataan={rataan7}, sb={sb7} (should raise ValueError)")
-        print(f"  Status: FAIL - Should have raised ValueError")
-    except ValueError as e:
-        print(f"Test 7: NM={nm7}, rataan={rataan7}, sb={sb7} (should raise ValueError)")
-        print(f"  Error: {e}")
-        print(f"  Status: PASS - Correctly raised ValueError")
-    print()
-
-    # Test case 8: Different rataan/sb (NM=500, rataan=600, sb=80)
-    nm8 = 500
-    rataan8 = 600
-    sb8 = 80
-    nn8 = apply_normalization(nm8, rataan8, sb8)
-    # z_score = (500 - 600) / 80 = -1.25
-    # nn = 500 + 100 * (-1.25) = 500 - 125 = 375
-    expected8 = 375
-    print(f"Test 8: NM={nm8}, rataan={rataan8}, sb={sb8}")
-    print(f"  Expected NN: {expected8}")
-    print(f"  Actual NN: {nn8}")
-    print(f"  Status: {'PASS' if nn8 == expected8 else 'FAIL'}")
-    print()
-
-    # Test case 9: Error case - invalid NM
-    try:
-        nm9 = 1500  # Above valid range
-        rataan9 = 500
-        sb9 = 100
-        nn9 = apply_normalization(nm9, rataan9, sb9)
-        print(f"Test 9: NM={nm9}, rataan={rataan9}, sb={sb9} (should raise ValueError)")
-        print(f"  Status: FAIL - Should have raised ValueError")
-    except ValueError as e:
-        print(f"Test 9: NM=1500, rataan=500, sb=100 (should raise ValueError)")
-        print(f"  Error: {e}")
-        print(f"  Status: PASS - Correctly raised ValueError")
-    print()
-
-    # Test case 10: Error case - invalid sb
-    try:
-        nm10 = 500
-        rataan10 = 500
-        sb10 = 0  # Invalid SD
-        nn10 = apply_normalization(nm10, rataan10, sb10)
-        expected10 = 500  # Should return default when sb <= 0
-        print(f"Test 10: NM={nm10}, rataan={rataan10}, sb={sb10} (should return default)")
-        print(f"  Expected NN: {expected10}")
-        print(f"  Actual NN: {nn10}")
-        print(f"  Status: {'PASS' if nn10 == expected10 else 'FAIL'}")
-    except Exception as e:
-        print(f"Test 10: NM=500, rataan=500, sb=0 (should return default)")
-        print(f"  Error: {e}")
-        print(f"  Status: FAIL - Should have returned default value")
-    print()
-
-    print("=" * 60)
-    print("All tests completed!")
-    print("=" * 60)
+@pytest.mark.parametrize(
+    ("nm", "rataan", "sb", "expected"),
+    [
+        (500, 500, 100, 500),
+        (600, 500, 100, 600),
+        (400, 500, 100, 400),
+        (1000, 500, 100, 1000),
+        (0, 500, 100, 0),
+        (500, 600, 80, 375),
+    ],
+)
+def test_apply_normalization_nominal_cases(nm: int, rataan: float, sb: float, expected: int):
+    assert apply_normalization(nm, rataan, sb) == expected
 
 
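The parametrized cases above are consistent with a T-score style transform, NN = 500 + 100 * (NM - rataan) / sb, with NM restricted to [0, 1000] and a default when sb <= 0. A hypothetical re-implementation satisfying the same cases (the real function lives in `app.services.normalization` and may differ in details):

```python
# Hypothetical sketch of the normalization the tests exercise; bounds and
# the sb <= 0 default are inferred from the parametrized test cases.
def apply_normalization_sketch(nm: float, rataan: float, sb: float) -> int:
    if not 0 <= nm <= 1000:
        raise ValueError("NM must be within [0, 1000]")
    if sb <= 0:
        # Degenerate distribution: fall back to the target mean.
        return 500
    z_score = (nm - rataan) / sb
    return round(500 + 100 * z_score)
```

For example, NM=500 with rataan=600 and sb=80 gives z = -1.25, hence NN = 500 - 125 = 375, matching the `(500, 600, 80, 375)` case.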
-def calculate_dynamic_mean_and_std(nm_values):
-    """
-    Calculate mean and standard deviation from a list of NM values.
-    This simulates what update_dynamic_normalization does.
-    """
-    n = len(nm_values)
-    if n == 0:
-        return None, None
-
-    # Calculate mean
-    mean = sum(nm_values) / n
-
-    # Calculate variance (population variance)
-    if n > 1:
-        variance = sum((x - mean) ** 2 for x in nm_values) / n
-        std = variance ** 0.5
-    else:
-        std = 0.0
-
-    return mean, std
+@pytest.mark.parametrize("nm", [-1, 1001, 1500, -100])
+def test_apply_normalization_rejects_invalid_nm(nm: int):
+    with pytest.raises(ValueError):
+        apply_normalization(nm, 500, 100)
 
 
-def test_dynamic_normalization_simulation():
-    """Test dynamic normalization with simulated participant scores."""
-    print("\nTesting dynamic normalization simulation...")
-    print("=" * 60)
+@pytest.mark.parametrize("sb", [0, -1, -100.0])
+def test_apply_normalization_returns_default_when_sd_non_positive(sb: float):
+    assert apply_normalization(500, 500, sb) == 500
 
-    # Simulate 10 participant NM scores
+
+def test_dynamic_normalization_distribution_behaves_as_expected():
     nm_scores = [450, 480, 500, 520, 550, 480, 510, 490, 530, 470]
-    print(f"Simulated NM scores: {nm_scores}")
-    print()
 
-    # Calculate mean and SD
-    mean, std = calculate_dynamic_mean_and_std(nm_scores)
-    print(f"Calculated mean (rataan): {mean:.2f}")
-    print(f"Calculated SD (sb): {std:.2f}")
-    print()
+    mean = sum(nm_scores) / len(nm_scores)
+    variance = sum((x - mean) ** 2 for x in nm_scores) / len(nm_scores)
+    std = math.sqrt(variance)
 
-    # Normalize each score
-    print("Normalized scores:")
-    for i, nm in enumerate(nm_scores):
-        nn = apply_normalization(nm, mean, std)
-        print(f"  Participant {i+1}: NM={nm:3d} -> NN={nn:3d}")
-    print()
-
     # Check if normalized distribution is close to mean=500, SD=100
     nn_scores = [apply_normalization(nm, mean, std) for nm in nm_scores]
-    nn_mean, nn_std = calculate_dynamic_mean_and_std(nn_scores)
+    nn_mean = sum(nn_scores) / len(nn_scores)
+    nn_variance = sum((x - nn_mean) ** 2 for x in nn_scores) / len(nn_scores)
+    nn_std = math.sqrt(nn_variance)
 
-    print(f"Normalized distribution:")
-    print(f"  Mean: {nn_mean:.2f} (target: 500 ± 5)")
-    print(f"  SD: {nn_std:.2f} (target: 100 ± 5)")
-    print(f"  Status: {'PASS' if abs(nn_mean - 500) <= 5 and abs(nn_std - 100) <= 5 else 'NEAR PASS'}")
-    print()
-
-    print("=" * 60)
+    # Rounding in apply_normalization introduces small drift; these bounds are tight.
+    assert abs(nn_mean - 500) <= 5
+    assert abs(nn_std - 100) <= 5
 
 
-def test_incremental_update():
-    """Test incremental update of dynamic normalization."""
-    print("\nTesting incremental update simulation...")
-    print("=" * 60)
+def test_incremental_population_stats_match_batch_stats():
+    scores = [500, 550, 450, 600, 400]
 
-    # Simulate adding scores incrementally
-    nm_scores = []
     participant_count = 0
     total_nm_sum = 0.0
     total_nm_sq_sum = 0.0
 
-    new_scores = [500, 550, 450, 600, 400]
-
-    for i, nm in enumerate(new_scores):
-        # Update running statistics
+    for score in scores:
         participant_count += 1
-        total_nm_sum += nm
-        total_nm_sq_sum += nm * nm
+        total_nm_sum += score
+        total_nm_sq_sum += score * score
 
-        # Calculate mean and SD
-        mean = total_nm_sum / participant_count
-        if participant_count > 1:
-            variance = (total_nm_sq_sum / participant_count) - (mean ** 2)
-            std = variance ** 0.5
-        else:
-            std = 0.0
+    incremental_mean = total_nm_sum / participant_count
+    incremental_variance = (total_nm_sq_sum / participant_count) - (incremental_mean**2)
+    incremental_std = math.sqrt(max(0.0, incremental_variance))
 
-        nm_scores.append(nm)
+    batch_mean = sum(scores) / len(scores)
+    batch_variance = sum((x - batch_mean) ** 2 for x in scores) / len(scores)
+    batch_std = math.sqrt(batch_variance)
 
-        print(f"After adding participant {i+1}:")
-        print(f"  NM: {nm}")
-        print(f"  Participant count: {participant_count}")
-        print(f"  Mean (rataan): {mean:.2f}")
-        print(f"  SD (sb): {std:.2f}")
-        print()
-
-    # Final calculation
-    final_mean, final_std = calculate_dynamic_mean_and_std(nm_scores)
-    print(f"Final statistics:")
-    print(f"  All scores: {nm_scores}")
-    print(f"  Mean: {final_mean:.2f}")
-    print(f"  SD: {final_std:.2f}")
-    print()
-
-    print("=" * 60)
-
-
-if __name__ == "__main__":
-    print("Normalization Calculation Tests")
-    print("=" * 60)
-    print()
-
-    test_apply_normalization()
-    test_dynamic_normalization_simulation()
-    test_incremental_update()
-
-    print("\nAll test simulations completed successfully!")
+    assert incremental_mean == pytest.approx(batch_mean, rel=0, abs=1e-10)
+    assert incremental_std == pytest.approx(batch_std, rel=0, abs=1e-10)
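The rewritten incremental test rests on the identity Var(x) = E[x^2] - (E[x])^2 for population variance: running sums of x and x^2 recover exactly the batch statistics. A standalone sketch of that equivalence, independent of the test file:

```python
# Sketch: running-sums (incremental) population stats match the batch
# definition for the same scores. Pure stdlib, no app code assumed.
import math


def incremental_stats(scores):
    count, total, total_sq = 0, 0.0, 0.0
    for s in scores:
        count += 1
        total += s
        total_sq += s * s
    mean = total / count
    # Clamp at zero: floating-point cancellation can dip slightly negative.
    variance = max(0.0, total_sq / count - mean * mean)
    return mean, math.sqrt(variance)


def batch_stats(scores):
    mean = sum(scores) / len(scores)
    variance = sum((x - mean) ** 2 for x in scores) / len(scores)
    return mean, math.sqrt(variance)


scores = [500, 550, 450, 600, 400]
inc = incremental_stats(scores)
bat = batch_stats(scores)
```

Note the `max(0.0, ...)` guard: the sums-of-squares form is algebraically exact but numerically prone to cancellation, which is why the new test compares it against the batch form with a tight tolerance.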