CI/CD for Unity WebGL: The Pipeline That Saved My Sanity

A CI/CD pipeline for Unity WebGL uses GitHub Actions and GameCI to automate builds, run EditMode/PlayMode tests before the build stage, deploy to S3 + CloudFront per environment, and enable instant rollbacks via commit SHA-tagged artifacts. The key difference from standard web CI/CD is that Unity WebGL builds take 60+ minutes, so the pipeline must front-load every possible check to avoid wasting that time on doomed builds.
Unity WebGL builds are slow. Not "grab a coffee" slow. More like "go to lunch, come back, and check if it's done" slow. A clean build on a moderately complex project takes over an hour. Sometimes closer to two.
Without automation, that means someone on the team is manually triggering builds, babysitting them, uploading artifacts to a server, pinging QA on Slack with a link, and praying nothing broke since the last deploy. Multiply that by multiple environments (dev, demo, production) and you've got a full-time job that produces zero features.
I've shipped several WebGL projects over the years: multiplayer platforms, virtual showrooms, architectural walkthroughs. The specifics change, but the pipeline bones are the same. Here's the CI/CD setup I've settled on, organized by the problems it solves.
Why CI/CD Is Different for Unity WebGL
If you're coming from a typical web project, Unity WebGL will surprise you. The usual CI/CD assumptions don't hold.
Build times are brutal. A Next.js app builds in seconds. A Unity WebGL project builds in 60+ minutes. You can't pre-commit hook your way out of that. The feedback loop is fundamentally different, and your pipeline needs to account for it.
Binary assets dominate the repo. Textures, meshes, audio files, animation clips. Git doesn't diff binaries well, and your build cache strategy needs to be smarter than "cache node_modules." The Library folder (Unity's intermediate build cache) can be tens of gigabytes, and whether you cache it correctly is the difference between a 90-minute build and a 40-minute build.
There's no native browser test story. You can't Cypress a Unity WebGL build. There's no DOM to query, no network requests to intercept. The WebGL canvas is a black box to browser testing tools. Testing needs a completely different mindset, one that leans heavily on engine-side unit and integration tests rather than end-to-end browser tests.
These constraints mean you can't just copy a web CI/CD template and swap in a Unity build step. The pipeline needs to be designed around long build times, expensive failures, and a testing model that lives inside the engine rather than outside it.
The Pipeline Architecture
I'm going to walk through this by problem solved, not by config file. The platform is GitHub Actions with GameCI for the Unity build steps, but the concepts apply regardless of your CI provider.
| Stage | What it does | Why it matters |
|---|---|---|
| Test | EditMode/PlayMode tests via GameCI | Catches regressions in minutes, not hours |
| Build | GameCI unity-builder with Library caching | Consistent WebGL compilation with license handling |
| Deploy | S3 + CloudFront, per-environment paths | QA gets a link on the PR automatically |
| Notify | PR comments + Slack alerts on failure | No silent failures, no "where's the build?" |
| Rollback | Commit SHA-tagged artifacts, alias swap | Revert in under a minute, no rebuild needed |
Every merge should produce a testable build
The first rule: builds trigger on PR merges to protected branches, not on every push. Unity builds are too expensive to waste on work-in-progress commits. A developer might push ten times while iterating on a feature. Running a 60+ minute build on each push is burning compute for no reason.
When a PR merges to main, the pipeline kicks off:
- GameCI's unity-builder handles the WebGL compilation, including Unity license activation
- The Library folder is cached, keyed on a hash of the project's assets, packages, and settings files. If none of those changed, the cache hits and shaves significant time off the build
- The output artifact is tagged with the commit SHA, plus a `latest` alias that always points to the most recent successful build
```yaml
# Cache restore must run before the build step, or the Library cache never helps
- uses: actions/cache@v5
  with:
    path: Library
    key: library-webgl-${{ hashFiles('Assets/**', 'Packages/**', 'ProjectSettings/**') }}

- uses: game-ci/unity-builder@v4
  env:
    UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}  # GameCI activates the license from this secret
  with:
    targetPlatform: WebGL
    buildMethod: BuildScript.PerformBuild
```
The cache key is the important part. Keying on the hash of assets, packages, and project settings means the cache invalidates when something that actually affects the build changes, not on every commit.
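The SHA tagging and `latest` alias are a pair of copy steps in the deploy job. A minimal sketch, assuming an S3 layout with a `builds/<sha>/` prefix per artifact (the bucket name and paths are placeholders, not the actual setup):

```yaml
# Hypothetical deploy steps; bucket name and paths are illustrative
- name: Upload SHA-tagged artifact
  run: aws s3 sync build/WebGL "s3://my-game-builds/builds/${{ github.sha }}/" --delete

- name: Update the latest alias
  # Server-side copy within S3 is fast; "latest" always mirrors the newest good build
  run: aws s3 sync "s3://my-game-builds/builds/${{ github.sha }}/" "s3://my-game-builds/latest/" --delete
```

Because every SHA keeps its own prefix, old artifacts stay addressable forever, which is what makes the rollback story later in this post possible.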
QA shouldn't have to ask "where's the build?"
A build that exists but nobody can find is barely better than no build at all. After compilation, the pipeline automatically deploys to S3 behind CloudFront and posts a comment on the PR with direct links.
Not just one link. Three: the normal build, a debug build with console overlays enabled, and a standalone mode for testing the WebGL player outside the application shell. QA opens the PR, clicks the link, and starts testing. No Slack messages, no "which server is it on," no hunting through artifact buckets.
This might sound like a small thing. It isn't. Before automation, the deploy-and-notify cycle was a recurring time sink. Someone builds locally, uploads to S3 manually, grabs the CloudFront URL, posts it in Slack, and hopes QA sees it before the message gets buried. Then QA finds a bug and reports it in a separate channel. Now there's context scattered across Slack threads, S3 buckets, and whatever project tracker you're using.
With automated PR comments, QA tests and comments on the same PR that produced the build. Feedback stays attached to the code that generated it. When the PR eventually merges, the full QA conversation goes with it. Six months later, when someone asks "why did we change the loading behavior?", the answer is right there in the PR history.
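The PR comment step itself can be a few lines using the `gh` CLI that ships on GitHub-hosted runners. A sketch, assuming the SHA-prefixed CDN layout from above (the URLs and query flags are illustrative):

```yaml
- name: Comment build links on the PR
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    gh pr comment "${{ github.event.pull_request.number }}" --body "$(cat <<EOF
    Build ready for \`${{ github.sha }}\`:
    - [Normal](https://cdn.example.com/builds/${{ github.sha }}/index.html)
    - [Debug](https://cdn.example.com/builds/${{ github.sha }}/index.html?debug=1)
    - [Standalone](https://cdn.example.com/builds/${{ github.sha }}/standalone.html)
    EOF
    )"
```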
One pipeline, multiple environments
The same workflow handles production, dev, and demo deployments. The difference is parameterization, not separate pipelines. Maintaining separate workflow files per environment is a maintenance trap. Every improvement has to be replicated across all of them, and they inevitably drift.
A custom build method in Unity receives environment flags that control feature toggles, API endpoints, debug overlays, and analytics configuration. The pipeline passes these flags based on which branch triggered the build. A merge to main produces a production build. A merge to release-dev produces a dev build with debug tools enabled. A merge to release-demo produces a client-facing demo with analytics and watermarks.
Each environment deploys to its own path on the CDN. Production goes to the root. Dev and demo go to their own prefixed paths. Nothing overwrites anything else. You can always access any environment's current build without worrying about one deploy stomping another. This also means you can have production, dev, and demo all live simultaneously, which is surprisingly useful when a client wants to compare behavior across environments.
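One way to express the branch-to-prefix mapping is a small shell helper in the deploy step. The branch names match the ones above; the exact prefixes and the fallback case are illustrative:

```shell
#!/usr/bin/env sh
# Map the triggering branch to its CDN deploy prefix.
deploy_prefix() {
  case "$1" in
    main)         echo "" ;;            # production deploys to the CDN root
    release-dev)  echo "dev/" ;;        # dev build with debug tools enabled
    release-demo) echo "demo/" ;;       # client-facing demo
    *)            echo "preview/$1/" ;; # hypothetical fallback for other branches
  esac
}

deploy_prefix "release-dev"   # prints "dev/"
```

Keeping the mapping in one function means adding a fourth environment is a one-line change, not a new workflow file.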
Tests gate the merge, not the deploy
Here's where the long build time really shapes the pipeline design. If tests only run after the build, you've potentially waited over an hour to discover that a null reference in a manager script broke everything.
Instead, EditMode and PlayMode tests run as the first stage, before the build even starts. GameCI's test runner handles this. EditMode tests cover pure logic: utility functions, data validation, serialization. PlayMode tests cover runtime behavior: scene loading, component initialization, API contract verification.
The test stage finishes in minutes. If it fails, the pipeline stops immediately. No 60+ minute build gets queued for code that can't even pass its unit tests. The feedback loop stays short where it matters most.
This ordering is critical and easy to get wrong. The natural instinct is to put tests after the build ("test the thing you built"). But for Unity WebGL, the build is so expensive that you want to front-load every possible check. Any test that can run without a compiled build should run before the build starts.
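In GitHub Actions terms, the ordering is a `needs:` dependency: the build job never queues until the test job passes. A sketch of the job skeleton, with secret names following GameCI's documented convention:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: game-ci/unity-test-runner@v4
        env:
          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
        with:
          testMode: all   # runs both the EditMode and PlayMode suites

  build:
    needs: test           # the 60+ minute build never starts if tests fail
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: game-ci/unity-builder@v4
        env:
          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
        with:
          targetPlatform: WebGL
```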
When things break, you know immediately
Failure notifications go to Slack and the PR itself. The error logs are attached, not just a "build failed" message. When someone sees the notification, they can diagnose without opening the Actions tab.
Rollback is simple because every artifact is tagged with its commit SHA. If the latest deploy has a problem, you point the latest alias back to the previous SHA. No rebuild required. The old artifact is still sitting in storage, ready to serve. A full rollback takes under a minute.
This matters more for Unity WebGL than for most web projects. If your Next.js deploy has a bug, you can rebuild and redeploy in minutes. If your Unity WebGL deploy has a bug, a clean rebuild might take over an hour. Having the previous working artifact ready to serve instantly is the difference between a one-minute incident and a two-hour one.
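The alias swap fits in a tiny manually-triggered workflow of its own. This sketch assumes the SHA-prefixed S3 layout described earlier; the bucket name and distribution secret are placeholders:

```yaml
name: Rollback
on:
  workflow_dispatch:
    inputs:
      sha:
        description: "Commit SHA of the known-good build"
        required: true

jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - name: Point latest at the previous artifact
        run: aws s3 sync "s3://my-game-builds/builds/${{ inputs.sha }}/" "s3://my-game-builds/latest/" --delete
      - name: Invalidate the CDN cache
        run: aws cloudfront create-invalidation --distribution-id "$CF_DIST_ID" --paths "/*"
        env:
          CF_DIST_ID: ${{ secrets.CF_DIST_ID }}
```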
What I'd add next time
No pipeline is finished. Here's what I'd improve in the next iteration, framed as recommendations if you're building yours from scratch:
Pre-build validation. A lightweight step that checks for broken scene references, missing script references, and shader compilation errors before committing to the full build. Catching a missing asset reference in 30 seconds instead of discovering it 90 minutes into a build is worth the effort.
Build size tracking. WebGL bundle size creeps up over time. An automated check that compares the current build size against the previous one and flags regressions above a threshold would catch the "someone imported a 50MB texture" problem before it ships.
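The size check itself can be a few lines of shell. The 10% default threshold here is an arbitrary example, and the byte counts would come from something like `du -sb build/WebGL | cut -f1`:

```shell
#!/usr/bin/env sh
# Succeed when the new build grew no more than max_pct percent over the previous one.
size_regression_ok() {
  old_bytes=$1; new_bytes=$2; max_pct=${3:-10}
  growth=$(( (new_bytes - old_bytes) * 100 / old_bytes ))  # integer percent, rounded toward zero
  [ "$growth" -le "$max_pct" ]
}

size_regression_ok 50000000 52000000 && echo "size OK"   # 4% growth passes
```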
Action version auditing. Pinned action versions are a security and reliability best practice, but they also go stale. A periodic check that flags outdated action versions would keep the pipeline current.
Timeout configuration review. If your build timeout is set to 10 hours "just in case," that's a symptom. It means a stuck build burns compute for 10 hours before anyone notices. Timeouts should be set to slightly above your actual build time, with alerts when builds approach the limit.
Branching Strategy
The pipeline architecture assumes a branching model. Here's the one I use.
Trunk-based with environment branches. main is the trunk. It's always deployable, always protected, PRs only. Feature branches (feature/*) are short-lived: days, not weeks. They branch off main and PR back. Environment branches (release-dev, release-demo) are deploy targets, not development branches. You merge main into them to trigger environment-specific pipelines.
Why not GitFlow? Because long-lived branches and Unity's serialization format don't mix. Unity scenes and prefabs serialize as YAML, and three-way merges on those files are unreliable at best. The longer a branch lives, the more scene and prefab changes accumulate, and the more likely your merge produces a corrupt file that Unity silently loads with missing references.
Short branches minimize that pain. A feature branch that lives for two days has two days of scene drift to resolve. One that lives for three weeks has three weeks. The math is straightforward.
*Three-way merging a 4,000-line scene YAML. What could go wrong.*
PR rules enforce the discipline: require at least one review, require tests to pass, squash merge to keep the history clean. Squash merging is especially important because Unity developers tend to commit frequently during iteration ("WIP scene layout", "testing materials", "undo that"). Squashing collapses that into a single meaningful commit.
The Pipeline Is the Product
For projects with 60+ minute build times, the CI/CD pipeline isn't infrastructure overhead. It's what makes the team functional. Without it, developers are stuck in a manual loop of build, upload, notify, test, repeat. With it, a merge to main automatically produces a tested, deployed, QA-accessible build with zero human intervention.
The bones of this pipeline have stayed consistent across every WebGL project I've worked on. The specifics change (different environments, different test suites, different deployment targets), but the structure holds: test fast, build once, deploy automatically, fail loudly.
And it keeps evolving. The pipeline I'm running today is better than the first version I set up. The next one will be better still. That's the nature of it. You learn what breaks, you add a check. You find a bottleneck, you optimize it. The pipeline grows with the project.
Frequently Asked Questions
How long do Unity WebGL builds take?
A clean Unity WebGL build on a moderately complex project takes 60 to 120 minutes on a CI runner. With proper Library folder caching (keyed on asset, package, and settings hashes), subsequent builds can drop to 30 to 50 minutes depending on what changed.
Can you use Cypress or Playwright to test Unity WebGL?
No. The Unity WebGL canvas is opaque to browser testing tools. There's no DOM to query inside the canvas, so tools like Cypress and Playwright can't interact with or assert on the 3D content. Testing for Unity WebGL relies on Unity's built-in Test Framework: EditMode tests for pure logic and PlayMode tests for runtime behavior, both run before the build stage in CI.
What branching strategy works best for Unity projects?
Trunk-based development with short-lived feature branches (days, not weeks). Unity scenes and prefabs serialize as YAML, and three-way merges on those files are unreliable. Long-lived branches accumulate more scene drift and produce more corrupt merges. Short branches minimize that risk.
Should I use GitFlow for a Unity project?
Generally no. GitFlow's long-lived develop and release branches conflict with Unity's YAML serialization format. The longer branches live, the more likely you'll hit unresolvable merge conflicts in scene and prefab files. Trunk-based development with environment branches for deployment targets is a better fit.
How do you handle rollbacks for Unity WebGL deployments?
Tag every build artifact with its commit SHA and maintain a latest alias pointing to the current production build. To roll back, point the alias to the previous SHA. The old artifact is already in storage, so rollback takes under a minute with no rebuild required.
If you're building a Unity WebGL product and hitting these bottlenecks, I've helped teams set up pipelines like this from scratch. Feel free to reach out.

Senior Unity Engineer & Technical Artist building real-time 3D experiences. Shaders, multiplayer systems, environment art, and WebGL optimization.