Routing Rules
Route incoming requests to different local upstreams based on path prefixes or HTTP headers. This lets you split traffic across multiple services behind a single tunnel URL.
Overview
By default, all traffic through a Jetty tunnel goes to a single local target (e.g. 127.0.0.1:3000). Routing rules let you override this on a per-request basis. For example, you might send /api requests to a backend on port 8080 while everything else stays on port 3000.
Rules are evaluated top to bottom. The first matching rule wins. If no rule matches, traffic falls through to the tunnel's primary upstream.
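The first-match-wins behavior can be sketched in a few lines of Python. This is an illustration of the documented semantics, not Jetty's actual code; the rule shape used here is invented for clarity:

```python
# Sketch: rules evaluated top to bottom, first match wins,
# falling through to the tunnel's primary upstream otherwise.

PRIMARY = ("127.0.0.1", 3000)  # the tunnel's primary upstream

rules = [
    {"prefix": "/api", "upstream": ("127.0.0.1", 8080)},
    {"prefix": "/webhooks", "upstream": ("127.0.0.1", 9000)},
]

def pick_upstream(path):
    for rule in rules:  # top to bottom
        if path.startswith(rule["prefix"]):
            return rule["upstream"]  # first matching rule wins
    return PRIMARY  # no rule matched: fall through

print(pick_upstream("/api/users"))  # ('127.0.0.1', 8080)
print(pick_upstream("/about"))      # ('127.0.0.1', 3000)
```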
Plan requirements
Routing rules are available on Captain and Fleet plans. On the Dinghy (free) plan, the dashboard shows a read-only view of any existing rules with an upgrade prompt.
How matching works
Path prefix
The request path must start with the prefix you specify. A rule with prefix /api matches /api, /api/users, /api/v2/orders, etc.
Header
The request must contain a specific header with an exact value. Header names are matched case-insensitively. Only headers from an allowlist can be used for matching (the allowlist is shown in the dropdown when creating a rule).
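As a rough sketch, the two match types behave like this (mirroring the documented semantics only; Jetty's internal matching code is not shown here):

```python
# Path prefixes are a simple starts-with check.
def matches_path_prefix(path, prefix):
    # "/api" matches "/api", "/api/users", "/api/v2/orders", ...
    return path.startswith(prefix)

# Header names compare case-insensitively; values must match exactly.
def matches_header(headers, name, expected):
    for key, value in headers.items():
        if key.lower() == name.lower():
            return value == expected
    return False

assert matches_path_prefix("/api/v2/orders", "/api")
assert matches_header({"x-debug": "true"}, "X-Debug", "true")   # name is case-insensitive
assert not matches_header({"X-Debug": "TRUE"}, "X-Debug", "true")  # value is exact
```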
Creating rules
- Open Bridge and navigate to the Tunnels tab.
- Click Monitor on the tunnel you want to configure.
- Scroll to the Routing Rules (Advanced) section.
- Click Add Rule.
- Choose a match type:
  - Path Prefix -- enter a path like /api or /webhooks.
  - Header -- select a header name and enter the expected value.
- Set the Local Host and Local Port where matching requests should be forwarded.
- Click Save Rules.
CLI routing flags
You can also create routing rules directly from the command line when starting a tunnel.
Inline route flags
Pass one or more --route flags to map path prefixes to local ports:
```
jetty share 3000 --route=/api=8080 --route=/admin=3001
```
Each --route value is a path=port pair. Requests matching the path prefix are forwarded to 127.0.0.1 on the specified port. Unmatched requests go to the primary upstream (port 3000 in this example).
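A --route value is a simple path=port pair, which could be parsed as shown below. The helper name is illustrative and is not part of the Jetty CLI:

```python
def parse_route(flag_value):
    # Split on the last "=": the value is always path=port,
    # and the port is the final component.
    path, port = flag_value.rsplit("=", 1)
    return path, int(port)

print(parse_route("/api=8080"))    # ('/api', 8080)
print(parse_route("/admin=3001"))  # ('/admin', 3001)
```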
File-based route configuration
For more complex setups, store routes in a JSON file and reference it with --routes-file:
```
jetty share 3000 --routes-file=.jetty/routes.json
```
Auto-loading: If a .jetty/routes.json file exists in the current project directory, the CLI loads it automatically. You do not need to pass --routes-file in that case. Explicit --routes-file and --route flags take precedence over the auto-loaded file.
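The resolution order might look like the following sketch. One assumption is made here that the docs leave open: explicit routes are treated as replacing the auto-loaded file entirely rather than merging with it.

```python
import json
import os

def resolve_routes(cli_routes, cli_routes_file):
    # Explicit --route / --routes-file flags take precedence.
    if cli_routes:
        return cli_routes
    if cli_routes_file:
        with open(cli_routes_file) as f:
            return json.load(f)["routes"]
    # Otherwise fall back to auto-loading .jetty/routes.json if present.
    if os.path.exists(".jetty/routes.json"):
        with open(".jetty/routes.json") as f:
            return json.load(f)["routes"]
    return []
```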
Example .jetty/routes.json:
```json
{
  "routes": [
    {
      "match": {"type": "path_prefix", "value": "/api"},
      "upstream": {"host": "127.0.0.1", "port": 8080}
    },
    {
      "match": {"type": "path_prefix", "value": "/admin"},
      "upstream": {"host": "127.0.0.1", "port": 3001}
    },
    {
      "match": {"type": "header", "name": "X-Debug", "value": "true"},
      "upstream": {"host": "127.0.0.1", "port": 9090}
    }
  ]
}
```
Rules defined via CLI flags or file are stored server-side and persist across reconnects, just like rules created in the dashboard.
Weighted routing (A/B testing)
Routing rules support an optional weight field for percentage-based traffic splitting. This is useful for A/B testing, canary deployments, or gradual rollouts during development.
When multiple rules match the same criteria, traffic is distributed according to the configured weights. Weights are relative -- they do not need to add up to 100, but using values that sum to 100 makes the percentages easier to reason about.
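A weighted pick among rules that match the same request can be sketched with Python's random.choices, which takes relative weights directly. This illustrates the documented distribution behavior only, not Jetty's implementation:

```python
import random

def pick_weighted(matching_rules):
    # Weights are relative -- random.choices normalizes them, so they
    # need not sum to 100. A missing weight defaults to 100 here,
    # matching the documented default.
    weights = [rule.get("weight", 100) for rule in matching_rules]
    return random.choices(matching_rules, weights=weights, k=1)[0]

rules = [
    {"port": 8080, "weight": 90},
    {"port": 8081, "weight": 10},
]

random.seed(42)  # deterministic demo
counts = {8080: 0, 8081: 0}
for _ in range(10_000):
    counts[pick_weighted(rules)["port"]] += 1
# counts[8080] lands near 9,000 and counts[8081] near 1,000
```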
Example: 90/10 traffic split
Send 90% of /api traffic to port 8080 and 10% to port 8081:
```json
{
  "routes": [
    {
      "match": {"type": "path_prefix", "value": "/api"},
      "upstream": {"host": "127.0.0.1", "port": 8080},
      "weight": 90
    },
    {
      "match": {"type": "path_prefix", "value": "/api"},
      "upstream": {"host": "127.0.0.1", "port": 8081},
      "weight": 10
    }
  ]
}
```
If no weight is set on a rule, it defaults to 100 (meaning it receives all traffic unless another weighted rule competes). In the dashboard, weights appear as editable fields on each rule.
Rule ordering
Rules are matched in order from top to bottom. Use the arrow buttons to reorder rules. The first rule that matches a request is used; remaining rules are skipped.
Example
| Priority | Match | Upstream | Note |
|---|---|---|---|
| 1 | /api | localhost:8080 | Backend API |
| 2 | /webhooks | localhost:9000 | Webhook handler |
| 3 | (none -- fallback) | localhost:3000 | Frontend (primary) |
A POST /api/users request goes to port 8080. A POST /webhooks/stripe request goes to port 9000. A GET /about request falls through to the primary upstream on port 3000.
Enabling and disabling rules
Each rule has an Enabled checkbox. Disabled rules are stored but not applied to incoming traffic. This is useful when you want to temporarily stop routing without deleting the rule.
Limits
There is a maximum number of routing rules per tunnel (shown in the UI when you reach the cap). If you need more, contact support or consider splitting into separate tunnels.
Permissions
You need the Developer role or higher on the team that owns the tunnel to create, edit, or delete routing rules. Members with lower roles see a read-only view.
Tips
- Use routing rules to run a microservices stack behind a single public URL during development.
- Combine with reserved subdomains for stable webhook URLs that route to the right service.
- Routing rules are stored server-side. They persist across CLI sessions and reconnects.