
DevSecOps — Practical Security

Secrets hygiene, image scanning, dependency awareness, K8s RBAC, least privilege, and SBOM


Secrets Hygiene

Misplaced secrets are among the most common causes of security incidents. The rule is simple: secrets never enter your codebase, ever.

Where Secrets Go Wrong

# WRONG — secret in code
DATABASE_URL="postgres://admin:password123@db.example.com/prod"

# WRONG — secret in Dockerfile
ENV DATABASE_URL=postgres://admin:password123@db.example.com/prod

# WRONG — secret in docker-compose.yml committed to git
environment:
  - DATABASE_URL=postgres://admin:password123@db.example.com/prod

# WRONG — secret in CI logs
echo "Running with key: $API_KEY" # logs are often public!

What to Do Instead

# 1. Environment injection at runtime
docker run -e DATABASE_URL="$DATABASE_URL" myimage
# 2. Secrets manager reference
aws secretsmanager get-secret-value --secret-id prod/myapp/database-url
# 3. K8s Secret (from external secrets operator — not manual yaml with secret value)
kubectl get secret myapp-secrets -o jsonpath='{.data.database-url}' | base64 -d
# 4. Vault dynamic credentials (most secure — short-lived, auto-rotated)
vault read database/creds/myapp-role
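Options 1 and 2 combine naturally: fetch the secret at deploy time, inject it at runtime. A sketch, reusing the secret id and image name from above (adjust both to your setup):

```shell
# Fetch the secret value from AWS Secrets Manager; it stays in memory,
# never written to a file or committed anywhere.
DATABASE_URL=$(aws secretsmanager get-secret-value \
  --secret-id prod/myapp/database-url \
  --query SecretString --output text)

# Inject it into the container environment at runtime
docker run -e DATABASE_URL="$DATABASE_URL" myimage
```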

Detect Leaked Secrets

# Scan git history for secrets
trufflehog git https://github.com/myorg/myapp --only-verified
# Pre-commit hook to prevent committing secrets
pip install detect-secrets
detect-secrets scan > .secrets.baseline
detect-secrets audit .secrets.baseline
# Add to .pre-commit-config.yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets

If a Secret Is Leaked

  1. Immediately rotate/revoke the secret (assume it's compromised)
  2. Check access logs for unauthorized use
  3. Remove from git history (this is hard — better to rotate first)
  4. Add to .gitignore and secrets baseline
# Remove sensitive file from git history (complex — use BFG or git-filter-repo)
git filter-repo --path secrets.txt --invert-paths
# Requires a force push — coordinate with your team

Image Scanning

Nearly every Docker image you build contains software with known vulnerabilities. Scan before you push.

Trivy (most common)

# Install
brew install trivy # macOS
apt install trivy # Ubuntu
docker pull aquasec/trivy # Docker
# Scan a local image
trivy image myapp:latest
# Scan with specific severity (fail on critical/high)
trivy image --severity CRITICAL,HIGH --exit-code 1 myapp:latest
# Scan a remote image
trivy image nginx:1.25
# Scan in CI (GitHub Actions)
- uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:${{ github.sha }}
    severity: 'CRITICAL,HIGH'
    exit-code: '1'
    ignore-unfixed: true # ignore vulns with no fix available
    format: 'sarif'
    output: 'trivy-results.sarif'

# Upload to GitHub Security tab
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: trivy-results.sarif

What to Do With Scan Results

CVE Severity   Action
Critical       Fix immediately — update base image or dependency
High           Fix within the sprint
Medium         Add to backlog, track
Low            Accept risk or add to ignore list with justification
# Update base image (most common fix)
FROM node:20-alpine # → check for a newer patch version
FROM node:20.11-alpine # pin exact version after verifying it's clean
# Check which package has the vulnerability
trivy image --format json myapp:latest | jq '.Results[].Vulnerabilities[] | select(.Severity == "CRITICAL")'

Registry Scanning

# AWS ECR — enable scan on push (automatic)
aws ecr put-image-scanning-configuration \
  --repository-name myapp \
  --image-scanning-configuration scanOnPush=true

# Get scan results
aws ecr describe-image-scan-findings \
  --repository-name myapp \
  --image-id imageTag=latest

Dependency Awareness

Your application code is only a small fraction of what you ship. You also ship every npm package, pip package, and system library you depend on.

# Node.js
npm audit # check for known vulnerabilities
npm audit fix # auto-fix where possible
npm audit --audit-level=high # fail only on high/critical
# Python
pip install safety
safety check # scan installed packages
safety check -r requirements.txt
# In CI — keep it in the pipeline
- name: Dependency audit
  run: npm audit --audit-level=high

Keeping Dependencies Updated

# Check for outdated packages
npm outdated
pip list --outdated
# Automated dependency updates
# Dependabot (GitHub) — opens PRs for outdated dependencies
# Renovate — similar, more configurable

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    ignore:
      - dependency-name: "eslint" # ignore specific package

Kubernetes RBAC Basics

RBAC (Role-Based Access Control) controls who can do what in your cluster.

Key Resources

ServiceAccount → bound to → RoleBinding/ClusterRoleBinding → references → Role/ClusterRole
Resource             Scope              Use case
Role                 Single namespace   Pod reader in production namespace
ClusterRole          All namespaces     Node reader, persistent volume admin
RoleBinding          Single namespace   Binds a Role to a subject in a namespace
ClusterRoleBinding   All namespaces     Binds a ClusterRole globally

Creating Roles

# Role — only in the production namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]
---
# RoleBinding — grants the pod-reader Role to its subjects
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
  - kind: ServiceAccount
    name: myapp
    namespace: production
  - kind: User
    name: alice@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

ServiceAccount for Apps

# ServiceAccount — identity for your app
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/myapp-role
    # ↑ For EKS: maps the K8s ServiceAccount to an AWS IAM role (IRSA)
---
# Use in the deployment's pod spec
spec:
  serviceAccountName: myapp # not the default service account!
  automountServiceAccountToken: false # set true only if your app calls the K8s API
# Check what a service account can do
kubectl auth can-i get pods --as=system:serviceaccount:production:myapp
kubectl auth can-i list secrets --as=system:serviceaccount:production:myapp
# List all permissions for a role
kubectl describe role pod-reader -n production
# Check who has cluster-admin
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects'

Least Privilege Mindset

Every principal (user, service account, IAM role) should have exactly the permissions it needs — no more.

Checklist

IAM (AWS):

  • No * actions in production
  • EC2 instances get roles, not access keys
  • Rotate access keys if they exist at all
  • Use IAM Access Analyzer to find unused permissions
  • Enable CloudTrail for audit logs
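To make the "no * actions" rule concrete, here is a minimal sketch of a least-privilege IAM policy (the bucket name is illustrative): every statement names specific actions and specific resources, never wildcards.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadAppAssetsOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::myapp-assets/*"
    }
  ]
}
```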

Kubernetes:

  • No workload should use the default service account
  • No workload should have cluster-admin
  • Mount service account tokens only if needed (automountServiceAccountToken: false)
  • Use network policies to restrict pod-to-pod communication
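The network-policy bullet above can be sketched as follows. This illustrative NetworkPolicy (names, labels, and port are assumptions) admits traffic to `myapp` pods only from pods labeled `app: frontend` in the same namespace; everything else is denied once any Ingress policy selects the pod.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: myapp        # policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080
```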

Containers:

  • Run as non-root user (USER 1000 in Dockerfile)
  • Read-only root filesystem where possible
  • Drop all capabilities, add back only what's needed
# Secure container security context
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
  containers:
    - name: myapp
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
          add:
            - NET_BIND_SERVICE # only if the app binds to a port < 1024
      volumeMounts:
        - name: tmp
          mountPath: /tmp # need a writable /tmp? mount it separately
  volumes:
    - name: tmp
      emptyDir: {}

Supply Chain Awareness (SBOM)

SBOM = Software Bill of Materials. A list of every component in your software — like a nutrition label.

Why It Matters

The SolarWinds compromise and the Log4Shell vulnerability showed that attackers can reach your software through its dependencies. An SBOM lets you:

  • Know what's in your software
  • Quickly identify whether you're affected by a new CVE
  • Meet compliance requirements (many enterprises now require SBOMs)
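The "new CVE" use case above needs no special tooling. The SBOM below is a minimal, illustrative CycloneDX fragment; in practice you would run the same check across every stored SBOM in your registry:

```shell
# Minimal illustrative CycloneDX SBOM with two components
cat > sbom.json <<'EOF'
{"bomFormat":"CycloneDX","components":[
  {"name":"log4j-core","version":"2.14.1"},
  {"name":"express","version":"4.18.2"}]}
EOF

# When an advisory lands for log4j-core, check whether you ship it at all
if grep -q '"name":"log4j-core"' sbom.json; then
  echo "log4j-core present - check version against the advisory"
fi
```

For anything beyond a quick grep, parse the JSON properly and compare versions against the advisory's affected range.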

Generate an SBOM

# Using syft (by Anchore)
brew install syft
# Generate SBOM for a Docker image
syft myapp:latest -o cyclonedx-json > sbom.json
syft myapp:latest -o spdx-json > sbom.spdx.json
# Generate for local directory
syft dir:./myapp -o cyclonedx-json > sbom.json
# In GitHub Actions
- uses: anchore/sbom-action@v0
  with:
    image: myapp:${{ github.sha }}
    artifact-name: sbom.spdx.json
    format: spdx-json

Scan SBOM for Vulnerabilities

# Using grype (by Anchore) — scan the SBOM
grype sbom:./sbom.json
grype sbom:./sbom.json --fail-on high

Attestation (Proving the SBOM is Real)

# Sign an image and attach SBOM with cosign
cosign sign myapp:latest
cosign attest --predicate sbom.json --type cyclonedx myapp:latest
# Verify signature
cosign verify myapp:latest