01
Introduction
Why I Built This
As someone who consumes many different types of media, I found it difficult to track everything across multiple tracker sites—especially since none of them support all media types in one place. I also enjoy leaving reviews and personal notes on the media I consume. Because of this, I decided to build MyChronicle to solve that problem.
My previous workflow: tracking media reviews in Google Docs
What It Does
MyChronicle pulls data from third-party APIs and allows users to search across all media types in one place. Users can maintain personal lists, leave quick notes, and track all their media consumption seamlessly, regardless of the media type.
How It Works
- FastAPI backend handles data requests from multiple third-party APIs
- MongoDB stores user data, authentication, and reviews
- React frontend provides responsive interface
02
Tech Highlights
Third-Party APIs
To fetch metadata from the third-party APIs efficiently, the backend issues asynchronous requests and validates each response against Pydantic models.
Backend
- The backend is built with FastAPI, which handles API requests and CORS configuration.
- Pydantic is used for data modeling due to its seamless compatibility with FastAPI.
- Since the final data schema was not fully defined upfront, I chose MongoDB for its flexibility, scalability, and ease of evolving data models.
- JWT-based authentication ensures secure user access.
- Structured logging is implemented using structlog.
- FastAPI's dependency injection system manages shared dependencies cleanly and efficiently.
Frontend
- The frontend is built with React + Vite for a modern and fast development experience.
- TanStack Query manages server state and prevents unnecessary or repetitive network requests.
- Styling is implemented with Tailwind CSS for its strong synergy with React and utility-first workflow.
Deployment
- The application is Dockerized and deployed on Google Cloud Run, a serverless platform with auto-scaling.
- A CI/CD pipeline built with GitHub Actions automates testing, building, and deployment.
- Workload Identity Federation enables keyless authentication, eliminating the need for service account keys.
- The backend was migrated from a self-managed Google Cloud VM to Cloud Run, gaining zero-downtime deployments and automatic scaling based on traffic.
- The frontend is deployed on Cloudflare.
- The application is served at linazze.com.
03
Tech Stack
- Backend: FastAPI, Pydantic, MongoDB
- Frontend: React + Vite, TanStack Query, Tailwind CSS
- DevOps: Docker, GitHub Actions, Google Cloud Run, Cloudflare
Architecture
04
Challenges
Avoiding rate limits from third-party APIs
To avoid hitting rate limits, the featured media page fetches data with pagination and bulk requests, cutting the number of calls made to each third-party API.
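The batching idea can be sketched as follows; `fake_bulk_fetch` stands in for a real third-party bulk endpoint, and the batch size is an assumed value:

```python
import asyncio

async def fake_bulk_fetch(ids: list[str]) -> dict[str, str]:
    # Stand-in for one bulk request to a third-party API.
    await asyncio.sleep(0)  # placeholder for network latency
    return {i: f"metadata for {i}" for i in ids}

async def fetch_all(ids: list[str], batch_size: int = 20) -> dict[str, str]:
    # One API call per batch instead of one per item: 45 items at
    # batch_size=20 costs 3 requests rather than 45.
    results: dict[str, str] = {}
    for start in range(0, len(ids), batch_size):
        results.update(await fake_bulk_fetch(ids[start:start + batch_size]))
    return results

items = asyncio.run(fetch_all([f"id{i}" for i in range(45)]))
```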
Handling multiple metadata schemas
Supporting different media types required extra care, since each type has its own fields and response structure. I addressed this by defining a separate response model for each media type and normalizing the metadata variations consistently.
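One way to express per-type response models is a Pydantic discriminated union keyed on the media type. The models and fields below are illustrative assumptions, not the project's actual schemas (Pydantic v2 assumed):

```python
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field, TypeAdapter

class Movie(BaseModel):
    media_type: Literal["movie"]
    title: str
    runtime_minutes: int

class Book(BaseModel):
    media_type: Literal["book"]
    title: str
    page_count: int

# The `media_type` field selects which model validates a given payload.
AnyMedia = TypeAdapter(
    Annotated[Union[Movie, Book], Field(discriminator="media_type")]
)

parsed = AnyMedia.validate_python(
    {"media_type": "book", "title": "Dune", "page_count": 412}
)
```

The discriminator gives clear, per-type validation errors instead of one opaque union failure when a payload doesn't match any schema.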
05
Infrastructure & Deployment
Google Cloud Setup
- Deployed in us-central1 region (free tier eligible)
- Created Artifact Registry repository to store Docker images
- Enabled required APIs: Cloud Run, Artifact Registry, and IAM Credentials
Workload Identity Federation
Instead of storing JSON keys, I implemented Workload Identity Federation for keyless authentication:
- Created a service account with roles: Artifact Registry writer, Cloud Run admin, Service Account User
- Linked GitHub repository to the service account for direct authentication
- Eliminates the need to store service account keys or API credentials in repository secrets
GitHub Actions CI/CD Pipeline
Automated deployment workflow on every push to main:
- Runs pytest tests (deployment blocked if tests fail)
- Authenticates to Google Cloud via Workload Identity Federation
- Builds Docker image tagged with git commit SHA
- Pushes image to Artifact Registry
- Deploys to Cloud Run with all environment variables
- Cleans up old images, retaining only the latest 3 to stay under free tier limits
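A hedged sketch of what such a workflow file might look like; the action versions are real, but the job layout, variable names, and image path are placeholders, not the repository's actual workflow:

```yaml
name: deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write   # required for Workload Identity Federation
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt && pytest   # gate: failed tests block deploy
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ vars.WIF_PROVIDER }}  # placeholder
          service_account: ${{ vars.DEPLOY_SA }}                # placeholder
      - run: |
          docker build -t "$IMAGE:$GITHUB_SHA" .
          docker push "$IMAGE:$GITHUB_SHA"
        env:
          IMAGE: us-central1-docker.pkg.dev/PROJECT/repo/app    # placeholder path
      - run: gcloud run deploy app --image "$IMAGE:$GITHUB_SHA" --region us-central1
        env:
          IMAGE: us-central1-docker.pkg.dev/PROJECT/repo/app
```

Tagging the image with the commit SHA, as above, makes every deployment traceable back to the exact code that produced it.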
Cloud Run Configuration
- --min-instances=0 — scales to zero when idle (keeps it free)
- --max-instances=2 — caps scaling to prevent unexpected costs
- 512Mi memory, 1 CPU — minimal resources appropriate for the workload
- --allow-unauthenticated — public API access
Preventing Cold Starts
Set up UptimeRobot to ping the /health endpoint every 5 minutes, preventing cold starts during active hours. At that cadence the pings total roughly 8,640 requests per month (12 per hour × 24 × 30), well under Cloud Run's 2M-request monthly free tier.
Cost Optimization
- Added .dockerignore to reduce Docker image size
- Image cleanup keeps only 3 versions, staying under 0.5GB Artifact Registry free limit
- Entire application runs on Google Cloud's free tier
06
Demo & Live Site
See MyChronicle in action! Watch the demo below to explore the platform's features, or visit the live website to try it yourself.