Internal Deployment Platform on Cloud Run
Internal PaaS on GCP: git-push Cloud Run deploys with IAP and Secret Manager, Vercel-style UX without team-owned GKE.
At a large enterprise, shipping a small web app through the standard path meant GKE: CI/CD wiring, Helm charts, and a heavy Backstage-driven provisioning flow. Teams could stand up a proof of concept in days, then wait weeks before a URL existed outside a laptop. That delay hit engineers and non-engineers alike. Anyone who needed a simple internal app hit the same operational wall.
Problem
The default path to production ran through GKE and a multi-step pipeline most teams did not want to own for a brochure site or an internal tool. CI/CD was not a single toggle; it was repository scaffolding, chart maintenance, and coordination across the platform org. Backstage helped with discovery, but it did not remove the YAML and approval tax.
Velocity was asymmetric. Building was fast; deploying was slow. The gap showed up in demos, hack weeks, and cross-functional pilots where the blocker was never the code. It was getting a live URL with authentication in front so only the right users could access the app.
Solution
We built a Vercel-shaped experience on Google Cloud: connect a GitHub repository in a UI, push to a branch, and let the platform build and deploy. Webhooks triggered Cloud Build on every push. Cloud Build detected the stack where it could; if a Dockerfile existed we used it, otherwise we fell back to buildpacks so teams without container expertise could still ship.
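That detection order can be sketched as a small helper. The function name and return values here are illustrative, not the platform's actual code:

```python
from pathlib import Path

def detect_build_strategy(repo_root: str) -> str:
    """Pick a build path for a freshly cloned repo (illustrative sketch).

    Prefer an explicit Dockerfile when the team provides one; otherwise
    fall back to Cloud Native Buildpacks so repos without any container
    expertise still produce a deployable image.
    """
    root = Path(repo_root)
    if (root / "Dockerfile").is_file():
        return "docker"      # run a `docker build` step in Cloud Build
    return "buildpacks"      # run a buildpacks builder step instead
```

In practice this choice just selects which Cloud Build step list the trigger executes; the rest of the pipeline is identical either way.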
Secrets lived in Google Secret Manager and were injected at build time and at runtime so credentials never sat in repo config. Runtimes were Cloud Run services fronted by a load balancer. We attached friendly hostnames using Serverless NEGs so each deployment had a predictable URL. Cloud Identity-Aware Proxy sat in front by default so every app required sign-in until someone explicitly changed that posture.
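The runtime half of that injection maps cleanly onto gcloud's `--set-secrets` flag, which exposes Secret Manager values as environment variables on a Cloud Run service. The helper below is a hypothetical sketch of rendering that flag from a control-plane record; the flag syntax is real gcloud, the function and schema are not the platform's actual code:

```python
def set_secrets_flag(secrets: dict[str, str], version: str = "latest") -> str:
    """Render gcloud's --set-secrets flag for a Cloud Run deploy.

    `secrets` maps environment variable name -> Secret Manager secret id,
    e.g. {"DATABASE_URL": "orders-db-url"} (names are illustrative).
    Sorting keeps the rendered flag stable across deploys.
    """
    pairs = ",".join(
        f"{env}={secret}:{version}" for env, secret in sorted(secrets.items())
    )
    return f"--set-secrets={pairs}"
```

Because the mapping lives in the control plane rather than the repo, rotating a credential never touches application config.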
Hard Parts
URL maps and Terraform
Friendly URLs on a shared load balancer meant updating URL maps per project. That does not scale as a manual step. We used Firestore as the source of truth for projects and environments, then generated Terraform (and applied it) so URL maps stayed aligned with what the control plane thought existed.
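As a rough sketch of the generation step, assuming each control-plane record carries a project `name` and a friendly `host` (an illustrative schema, not the real Firestore model), the renderer emits `host_rule` and `path_matcher` blocks for a shared `google_compute_url_map`:

```python
def render_url_map_rules(projects: list[dict]) -> str:
    """Render Terraform host_rule/path_matcher blocks from control-plane
    records (illustrative schema: each record has 'name' and 'host').

    Each project gets one host rule pointing at a path matcher whose
    default backend is that project's serverless-NEG backend service.
    Sorting keeps the generated file diff-stable between runs.
    """
    blocks = []
    for p in sorted(projects, key=lambda p: p["name"]):
        blocks.append(
            f'host_rule {{\n'
            f'  hosts        = ["{p["host"]}"]\n'
            f'  path_matcher = "{p["name"]}"\n'
            f'}}\n'
            f'path_matcher {{\n'
            f'  name            = "{p["name"]}"\n'
            f'  default_service = google_compute_backend_service.{p["name"]}.id\n'
            f'}}'
        )
    return "\n".join(blocks)
```

Generating the file from one source of truth and applying it mechanically is what kept the URL map from drifting away from what the control plane believed existed.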
Tenancy
Everything started in one GCP project. That simplified bootstrapping but blurred everything else: noisy neighbors, unclear ownership, opaque billing, and painful deletes. We introduced pool projects to isolate workloads and sketched a path to bring-your-own GCP project for teams that outgrew the pool.
- Single shared project: fast to bootstrap; noisy neighbors, fuzzy ownership, and painful deletes.
- Pool projects: workloads isolated per pool; clearer billing and operational boundaries.
- Bring-your-own project: a sketched path for teams that outgrew the pool and needed full isolation.
Logs
Developers expected build logs and runtime logs in one place. Build output lived in Cloud Build; runtime lived under Cloud Run. We shipped a Terraform module teams could run in their own project to forward logs through Pub/Sub. The platform subscribed, merged streams, and pushed updates to the UI with server-sent events so the console felt like a single tail.
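The merge itself is simple if each forwarded stream arrives in order. A sketch, assuming an entry schema of `ts` and `line` (illustrative, not the platform's actual Pub/Sub payload):

```python
import heapq
import json

def merge_log_streams(*streams):
    """Merge already-ordered log streams (iterables of dicts carrying a
    'ts' timestamp) into one timeline, the way the console tails build
    and runtime output together."""
    yield from heapq.merge(*streams, key=lambda entry: entry["ts"])

def to_sse(event: dict) -> str:
    """Frame one merged entry as a server-sent event: a 'data:' line
    followed by the blank line that terminates the event."""
    return f"data: {json.dumps(event)}\n\n"
```

Server-sent events kept the transport trivial: one long-lived HTTP response per tail, no websocket machinery, and the browser's built-in `EventSource` reconnect behavior for free.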
Outcome
The product goal was time-to-first-deploy measured in minutes instead of weeks for typical internal apps. Here is what that looked like in practice:
- Time to deploy: under 5 minutes, end to end from push to a live URL, including for people without an engineering background. The previous path had meant days.
- Adoption: 100+ users on the platform in under a month.
What I would do differently
Note: Cloud Run Domain Mapping would have removed most of the load-balancer and URL-map machinery. It was not available in Canada when we built this, so we paid that complexity up front instead of deferring it to a regional GA.
We looked at Cloudflare Workers for Platforms after the fact. It could have simplified some edge routing, but it would also have fractured the zero-config story. Non-technical users were a core audience; anything that required another control plane or DNS dance would have undercut the product.
Keeping a single GCP-shaped path mattered more than shaving routing complexity for power users. A second edge control plane would have shifted cost from infra YAML to product education and support.
Cloudflare Workers are not a drop-in for arbitrary repos. You shape a project for Workers: wrangler.jsonc or wrangler.toml, entry bindings, and the rest of the Workers packaging model. Our non-technical users would not have known how to do that, and we did not want a support line for "make my export deploy." That mattered especially for apps spun up in Google AI Studio and similar tools, where the artifact is a web bundle, not a Worker-shaped codebase.
What I would do the same
I would keep the git-centric workflow and the buildpack-or-Docker split. Letting teams arrive with or without a Dockerfile preserved accessibility without blocking power users.
I would keep IAP as the default posture for internal deployments. Security should be the path of least resistance, not a separate project.
I would keep separating the control plane data model (Firestore) from how we rendered infrastructure, even if the Terraform layer was heavier than I would choose today.
What I would do next
Given organizational room to create GCP projects freely, I would automate project creation and API enablement (Cloud Run, Artifact Registry, Cloud Build) so Terraform was not the bottleneck for every new tenant.
I would add a first-class GKE path for the teams whose production standard remained Kubernetes for external, customer-facing workloads. That audience was always going to be larger than the Cloud Run sweet spot, and meeting them where they deploy would widen adoption without diluting the simple path.
- GCP
- Cloud Run
- Platform engineering
- Internal tools