
Internal Deployment Platform on Cloud Run

Internal PaaS on GCP: git-push Cloud Run deploys with managed secrets and identity-aware access, Vercel-style UX without team-owned GKE.

At a large enterprise, shipping a small web app through the standard path meant GKE: CI/CD wiring, Helm charts, and a heavy Backstage-driven provisioning flow. Teams could stand up a proof of concept in days, then wait weeks before a URL existed outside a laptop. The delay hit engineers and non-engineers alike: anyone who needed a simple internal app ran into the same operational wall.

Problem

The default path to production ran through GKE and a multi-step pipeline most teams did not want to own for a brochure site or an internal tool. CI/CD was not a single toggle; it was repository scaffolding, chart maintenance, and coordination across the platform org. Backstage helped with discovery, but it did not remove the YAML and approval tax.

Velocity was asymmetric. Building was fast; deploying was slow. The gap showed up in demos, hack weeks, and cross-functional pilots where the blocker was never the code. It was getting a live URL with authentication in front so only the right users could access the app.

Solution

We built a Vercel-shaped experience on Google Cloud: connect a repository in a UI, push to a branch, and let the platform build and deploy. Pushes triggered managed builds that detected the stack where they could; when a Dockerfile existed we used it, and otherwise we fell back to buildpacks so teams without container expertise could still ship.
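The build-selection logic can be sketched as a small decision function. This is a minimal illustration, not the platform's actual code; the function names and the set of stack markers are assumptions.

```python
def choose_build_strategy(repo_files: set) -> str:
    """Pick how to build a pushed repo: honor a root Dockerfile if the
    team brought one, otherwise fall back to buildpacks so repos without
    container expertise still produce an image. (Illustrative sketch.)"""
    if "Dockerfile" in repo_files:
        return "docker"
    return "buildpack"


def detect_stack(repo_files: set) -> str:
    """Best-effort stack detection from well-known marker files, the kind
    of signal a buildpack build would key off. Marker list is illustrative."""
    markers = {
        "package.json": "nodejs",
        "requirements.txt": "python",
        "go.mod": "go",
        "pom.xml": "java",
    }
    for marker, stack in markers.items():
        if marker in repo_files:
            return stack
    return "unknown"
```

The key design point is that the Dockerfile check comes first: power users keep full control of their image, and detection only runs for everyone else.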

Secrets lived in the cloud provider's secret store and were injected at build time and at runtime so credentials never sat in repo config. Workloads ran on Cloud Run behind a shared HTTPS entry path with automated hostname wiring, so each deployment had a predictable URL without hand-maintained DNS for every app. Identity-aware access sat in front by default so every app required sign-in until someone explicitly changed that posture.
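The "predictable URL" part of this is easy to show concretely: each app/environment pair maps deterministically to a hostname under a shared domain. A minimal sketch, assuming a hypothetical base domain and naming scheme (the real convention was internal):

```python
import re


def deployment_hostname(app: str, env: str,
                        base_domain: str = "apps.example.internal") -> str:
    """Derive a predictable hostname for a deployment, e.g.
    payroll-tool-staging.apps.example.internal. Inputs are slugged into
    valid DNS labels; base_domain is a placeholder, not the real one."""
    def slug(value: str) -> str:
        # Lowercase, collapse anything non-alphanumeric to hyphens,
        # and respect the 63-character DNS label limit.
        cleaned = re.sub(r"[^a-z0-9-]+", "-", value.lower()).strip("-")
        return cleaned[:63]

    return f"{slug(app)}-{slug(env)}.{base_domain}"
```

Because the mapping is a pure function of control-plane data, the load balancer wiring described below could be regenerated at any time instead of hand-maintained.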

Hard Parts

URL maps and Terraform

Friendly URLs on a shared load balancer meant updating URL maps per project. That does not scale as a manual step. We used Firestore as the source of truth for projects and environments, then generated Terraform (and applied it) so URL maps stayed aligned with what the control plane thought existed.
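The generation step is the interesting part: control-plane records in, Terraform text out. Here is a minimal sketch of rendering URL-map host rules from records shaped like the Firestore documents; the field names, resource names, and HCL shape are illustrative, not the production module.

```python
def render_url_map_rules(projects: list) -> str:
    """Render host_rule / path_matcher fragments for a
    google_compute_url_map resource from control-plane records like
    {"name": "wiki", "hostname": "wiki.apps.example.internal"}.
    Sorting keeps the generated file stable across runs, so Terraform
    plans stay diff-friendly. (Illustrative field and resource names.)"""
    blocks = []
    for project in sorted(projects, key=lambda p: p["hostname"]):
        name, hostname = project["name"], project["hostname"]
        blocks.append(
            f'host_rule {{\n'
            f'  hosts        = ["{hostname}"]\n'
            f'  path_matcher = "{name}"\n'
            f'}}\n\n'
            f'path_matcher {{\n'
            f'  name            = "{name}"\n'
            f'  default_service = google_compute_backend_service.{name}.id\n'
            f'}}'
        )
    return "\n\n".join(blocks)
```

Applying the rendered config then becomes a mechanical step, and drift between the URL map and the control plane's view of the world shows up as a Terraform diff rather than a 404.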

Tenancy

We started with a single shared footprint to move fast. That simplified bootstrapping and created the usual downsides: noisy neighbors, fuzzy ownership, and painful teardown. As adoption grew, we evaluated several tenancy models before settling on a progression that balanced isolation, cost, and operational clarity, including a path for teams that eventually needed their own cloud boundary.

Stage 1
Shared footprint

Fast to bootstrap; noisy neighbors, fuzzy ownership, and harder clean teardown.

Stage 2
Pooled isolation

Workloads grouped into isolated pools; clearer billing and operational boundaries.

Stage 3
Dedicated boundary

A path for teams that outgrew the pool and needed their own cloud account boundary.

Logs

Developers expected build logs and runtime logs in one place. Build output lived in Cloud Build; runtime lived under Cloud Run. We shipped a Terraform module teams could run in their own project to forward logs through Pub/Sub. The platform subscribed, merged streams, and pushed updates to the UI with server-sent events so the console felt like a single tail.
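The merge itself reduces to interleaving two already-ordered streams by timestamp. A minimal sketch, assuming entries arrive as (timestamp, source, line) tuples; the tuple shape and source labels are assumptions, not the platform's actual wire format.

```python
import heapq
from operator import itemgetter


def merge_log_streams(build_logs, runtime_logs):
    """Merge two time-ordered log streams (build output and runtime
    output) into one chronological tail, the way the console collapsed
    Cloud Build and Cloud Run logs into a single feed. Each entry is a
    (timestamp, source, line) tuple; heapq.merge streams lazily, so this
    works on live iterators, not just lists."""
    yield from heapq.merge(build_logs, runtime_logs, key=itemgetter(0))
```

Each merged entry can then be serialized as a server-sent event, so the browser sees one tail regardless of which service produced the line.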

Outcome

The product goal was time-to-first-deploy measured in minutes instead of weeks for typical internal apps. Here is what that looked like in practice:

Time to deploy

Minutes end-to-end

From push to a live URL, including for people without an engineering background. The previous path had meant days or weeks.

Adoption

Broad early adoption

People across roles started using the platform soon after launch, without a long internal rollout campaign.

What I would do differently

Cloud Run Domain Mapping would have removed most of the load-balancer and URL-map machinery. It was not available in Canada when we built this, so we paid that complexity up front instead of deferring it to a regional GA.

We considered additional edge and routing options after the core path worked. Some would have simplified parts of the topology, but they would also have split the product across control planes or packaging models. Non-technical users were a core audience; keeping one cohesive, repo-push deploy story mattered more than optimizing edge mechanics for power users.

What I would do the same

I would keep the git-centric workflow and the buildpack-or-Docker split. Letting teams arrive with or without a Dockerfile preserved accessibility without blocking power users.

I would keep IAP as the default posture for internal deployments. Security should be the path of least resistance, not a separate project.

I would keep separating the control plane data model (Firestore) from how we rendered infrastructure, even if the Terraform layer was heavier than I would choose today.

What I would do next

Given an unbounded GCP project budget in the org sense, I would automate project creation and API enablement (Cloud Run, Artifact Registry, Cloud Build) so Terraform was not the bottleneck for every new tenant.

I would add a first-class GKE path for the teams whose production standard remained Kubernetes for external, customer-facing workloads. That audience was always going to be larger than the Cloud Run sweet spot, and meeting them where they deploy would widen adoption without diluting the simple path.


  • GCP
  • Cloud Run
  • Platform engineering
  • Internal tools
Made with ❤️ in 🇨🇦 · Copyright © 2026 Valentin Prugnaud