Flavored Tokens
Since this seems to be scaling up, I figured now would be a good time to introduce some new functionality into Cloud Cuts. What is a product must, inevitably, become a platform.
And so I am proud to announce that, now and forever, we will begin supporting the Cloud Cuts API.
If you've used Cloud Cuts before, you'll notice a new section entitled API Key under your dashboard. Last time we talked about tokens, I covered a red team strategy for botnets and scrapers, so I think it's only appropriate to now talk about a blue team strategy: scopes.
The Problem with All-or-Nothing Access
If you've ever worked with API keys, you've probably encountered the classic security antipattern: a single key that grants access to everything. In preparation for two more experimental protocols that we will be rolling out soon, we figured it was time to start locking security down. If your CI/CD pipeline only needs to read account data, a typical key could nonetheless delete your billing information, rotate credentials, and nuke your entire organization. Sleep well.
This is the authentication equivalent of giving your house keys to someone who just needs to water your plants. Sure, it works. But it's not exactly what we'd call "defense in depth."
Enter: Scoped API Keys
The Cloud Cuts API implements a scope-based permission model that lets you mint keys with precisely the access they need. Let's break down how this works.
The Scope Hierarchy
Our scope system follows a simple resource:action pattern:
aws:read → Read AWS account data
aws:write → Manage AWS connections
aws:spending → View AWS spending data
aws:* → Full access to AWS endpoints
contracts:read → View contracts
contracts:write → Create and sign contracts
contracts:* → Full access to contract endpoints
billing:read → View billing information
billing:write → Manage payment methods
billing:* → Full access to billing endpoints
all → Full access to everything (use sparingly)
The wildcard operator (*) provides a convenient way to grant full access within a resource category without explicitly listing every permission. This becomes useful as we add new endpoints—a key with aws:* will automatically gain access to future AWS-related functionality.
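To make the pattern concrete, here's what the scope lists on a couple of keys might look like; the names and exact combinations are illustrative, not prescriptions:

# Illustrative only: a read-only monitoring key gets exactly what it needs.
monitoring_key_scopes = ["aws:read", "aws:spending", "contracts:read"]

# A broader automation key can lean on the wildcard: "aws:*" covers aws:read,
# aws:write, aws:spending, and any aws:<action> scopes added in the future.
automation_key_scopes = ["aws:*", "contracts:read"]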
How Scope Verification Works
When a request hits a protected endpoint, the authentication middleware runs through a short chain of checks:
from typing import Optional

from fastapi import Header, HTTPException

def require_scope(required_scope: str):
    async def scope_checker(
        authorization: Optional[str] = Header(None),
        x_api_key: Optional[str] = Header(None)
    ):
        # get_current_user (defined elsewhere) resolves either auth method to a user dict
        user = await get_current_user(authorization, x_api_key)

        # JWT tokens have full access
        if user.get('auth_type') == 'jwt':
            return user

        scopes = user.get('scopes', [])

        # 'all' scope grants everything
        if 'all' in scopes:
            return user

        # Check for exact scope match
        if required_scope in scopes:
            return user

        # Check for wildcard scope (aws:* satisfies aws:read, aws:write, ...)
        if ':' in required_scope:
            scope_prefix = required_scope.split(':')[0]
            if f"{scope_prefix}:*" in scopes:
                return user

        raise HTTPException(403, f"API key does not have required scope: {required_scope}")

    return scope_checker
The logic follows a clear precedence:
- JWT tokens bypass scope checks entirely. When you're authenticated via the web dashboard, you have full access. Scopes are specifically a constraint on API keys.
- The all scope is a master key. If you genuinely need unrestricted access (and you've thought carefully about it), this is your escape hatch.
- Exact matches take priority. If you request aws:read and the key has aws:read, you're in.
- Wildcard expansion happens last. If exact matching fails, we check if the key has {resource}:* permissions.
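In practice, an endpoint opts into this check through FastAPI's dependency system. Here's a minimal sketch of how a route might declare its required scope; the route path and response are illustrative, not actual Cloud Cuts endpoints:

from fastapi import APIRouter, Depends

router = APIRouter()

@router.get("/aws/accounts")  # hypothetical read-only endpoint
async def list_aws_accounts(user=Depends(require_scope("aws:read"))):
    # Reaching this point means the caller is a JWT session or an API key
    # carrying aws:read, aws:*, or all.
    return {"accounts": []}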
Key Generation and Storage
API keys follow the format cc_live_{random_bytes}, where the random portion is generated using Python's secrets module:
import hashlib
import secrets

def generate_api_key() -> tuple[str, str, str]:
    # Cryptographically strong random portion, URL-safe for easy copy/paste
    random_part = secrets.token_urlsafe(32)
    full_key = f"cc_live_{random_part}"
    prefix = full_key[:16]  # For identification in the dashboard
    key_hash = hashlib.sha256(full_key.encode()).hexdigest()
    return full_key, prefix, key_hash
We store three distinct values:
| Value | Purpose | Stored? |
|---|---|---|
| full_key | Shown to user exactly once | ❌ Never |
| prefix | Allows users to identify keys | ✅ Plaintext |
| key_hash | Used for verification | ✅ SHA-256 |
The full key is never persisted. When a user creates a key, they see it once in the response, with a stern warning to save it immediately. After that, it's gone forever. This is intentional: if our database is compromised, attackers get hashes, not keys.
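Verification then works backwards from the hash: hash whatever the caller presents and look it up. Here's a minimal sketch of what a lookup like verify_api_key (used in the next section) could look like against the same Supabase api_keys table; the returned dict shape and auth_type value are assumptions, not the exact implementation:

async def verify_api_key(api_key: str):
    # Hash the presented key exactly as we did at creation time.
    key_hash = hashlib.sha256(api_key.encode()).hexdigest()

    result = supabase.table('api_keys').select('*').eq('key_hash', key_hash).execute()
    if not result.data:
        return None  # Unknown key: reject

    key_record = result.data[0]
    # (The real path also checks expires_at and updates usage stats, shown below.)
    return {
        'auth_type': 'api_key',  # anything other than 'jwt' keeps scope checks active
        'scopes': key_record.get('scopes', []),
        'key_record': key_record,
    }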
Authentication Priority
The system supports dual authentication methods with clear priority:
async def get_current_user_optional(
    authorization: Optional[str] = Header(None),
    x_api_key: Optional[str] = Header(None)
):
    # API key takes priority
    if x_api_key:
        user = await verify_api_key(x_api_key)
        if user:
            return user

    # Fall back to JWT
    if authorization and authorization.startswith('Bearer '):
        ...  # JWT verification
API keys are checked first via the X-API-Key header. If that's absent or invalid, we fall back to the standard Authorization: Bearer {token} flow. This lets programmatic clients use keys while the web dashboard continues using session tokens.
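From a client's perspective, this just means sending the key in the X-API-Key header. Here's a minimal sketch using the requests library; the base URL, path, and environment variable name are placeholders rather than documented Cloud Cuts endpoints:

import os

import requests

API_BASE = "https://api.example.com"  # placeholder; substitute the real base URL

response = requests.get(
    f"{API_BASE}/aws/spending",  # hypothetical read-only endpoint
    headers={"X-API-Key": os.environ["CLOUDCUTS_API_KEY"]},  # keep keys out of source
    timeout=10,
)
response.raise_for_status()
print(response.json())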
Usage Tracking
Every API key hit updates usage statistics:
supabase.table('api_keys').update({
    'last_used_at': datetime.now().isoformat(),
    'usage_count': key_record.get('usage_count', 0) + 1
}).eq('id', key_record['id']).execute()
This is wrapped in a try/except and intentionally fire-and-forget. We don't want usage tracking failures to break actual API calls. The data feeds into the dashboard so you can see which keys are active, which are dormant, and which might be candidates for rotation or deletion.
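Concretely, the guard is just a broad try/except around the update above; a sketch of that wrapping (whether failures are silently dropped or logged is an implementation detail):

try:
    supabase.table('api_keys').update({
        'last_used_at': datetime.now().isoformat(),
        'usage_count': key_record.get('usage_count', 0) + 1
    }).eq('id', key_record['id']).execute()
except Exception:
    # Fire-and-forget: a failed stats write must never break the actual API call.
    pass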
Key Lifecycle Operations
Beyond creation, the API supports the full key lifecycle:
Rotation generates a fresh key while preserving all metadata (name, description, scopes, expiration). The old key is invalidated immediately:
@router.post("/{key_id}/rotate")
async def rotate_api_key(key_id: str, user=Depends(get_current_user)):
    # ... ownership verification ...
    full_key, prefix, key_hash = generate_api_key()
    supabase.table('api_keys').update({
        'key_prefix': prefix,
        'key_hash': key_hash,
        'usage_count': 0,
        'last_used_at': None
    }).eq('id', key_id).execute()
Expiration is optional but recommended for keys with broad access. The verification logic checks expiration on every request:
if key_record.get('expires_at'):
    expires_at = datetime.fromisoformat(key_record['expires_at'])
    if datetime.now(expires_at.tzinfo) > expires_at:
        return None  # Key rejected
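On the creation side, an expiry is just an expires_at timestamp stored with the rest of the key's metadata. Here's a sketch of how that record might be assembled, assuming a hypothetical expires_in_days parameter and helper name:

from datetime import datetime, timedelta, timezone

def build_key_record(name: str, scopes: list[str], expires_in_days: int | None = None):
    # Hypothetical helper: builds the api_keys row plus the full key that is
    # shown to the user exactly once and never stored.
    full_key, prefix, key_hash = generate_api_key()

    expires_at = None
    if expires_in_days is not None:
        expires_at = (datetime.now(timezone.utc) + timedelta(days=expires_in_days)).isoformat()

    record = {
        'name': name,
        'scopes': scopes,
        'key_prefix': prefix,
        'key_hash': key_hash,
        'expires_at': expires_at,  # checked on every request, as shown above
        'usage_count': 0,
    }
    return full_key, record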
Practical Scope Assignments
Here are some common patterns:
| Use Case | Recommended Scopes |
|---|---|
| Read-only dashboard | aws:read, contracts:read |
| CI/CD deployment | aws:read, aws:write |
| Billing automation | billing:* |
| Full programmatic access | all (with short expiration) |
The principle is simple: grant the minimum access required for the task. A monitoring script doesn't need write access. A deployment pipeline doesn't need billing access. And nothing should have all unless absolutely necessary.
What's Next
This scope system provides the foundation for more granular controls down the road. As we roll out new features and add programmatic extensibility for all AWS resources, future iterations will include:
- Resource-level scopes (aws:read:account-123 for specific accounts)
- Rate limiting per scope (different limits for read vs. write operations)
- Scope inheritance (organization-level keys that cascade to child resources)
For now, the current model strikes a balance between security and usability. Create your first key, assign it the scopes it needs, and nothing more.
Your plants will get watered. Your house will stay locked.
And Cloud Cuts will still get you the best savings on your cloud bill, securely, if you want to permanently improve your business margins.