OpenSSF Scorecard: Measure Your Open Source Security
March 24, 2026


Open source software fuels almost every modern application, but with that power comes the responsibility of keeping the code safe. The OpenSSF Scorecard is a free, automated tool that evaluates a repository’s security posture across more than a dozen best‑practice checks. By running Scorecard, developers get a clear, actionable report that highlights strengths, uncovers gaps, and helps teams prioritize remediation without endless manual audits.

What Is the OpenSSF Scorecard?

The OpenSSF (Open Source Security Foundation) Scorecard is an open‑source project that provides a standardized, community‑driven scoring system for GitHub repositories. It examines concrete signals—such as the presence of a security policy, usage of signed commits, or the frequency of dependency updates—to assign a score from 0 to 10 for each check. The aggregate score offers a quick health indicator, while the detailed breakdown tells you exactly where to improve.

Because Scorecard is language‑agnostic and runs directly against the repository’s metadata, you don’t need to instrument your code or add special dependencies. It works out‑of‑the‑box with any project hosted on GitHub, GitLab (via mirroring), or other compatible platforms, making it a universal baseline for open‑source security hygiene.

How Scorecard Works Under the Hood

Scorecard performs a series of static analyses and API queries. Each check is implemented as a separate module that looks for a specific artifact:

  • Binary Artifacts – Is the repository free of checked‑in compiled binaries?
  • Security Policy – Is there a SECURITY.md file?
  • Signed Commits – Are Git commits cryptographically signed?
  • Dependency Updates – Is an automated update tool such as Dependabot or Renovate configured?
  • Code Review – Are changes reviewed before being merged?

Each check returns an integer score from 0 to 10, where 0 means the practice is absent and 10 means it is fully in place; intermediate values indicate partial compliance. The results are compiled into a JSON payload that can be consumed by humans or automated dashboards.

Scorecard also respects a repository’s .github/scorecard.yml configuration file, allowing maintainers to customize thresholds, disable checks, or provide additional context for false positives.

Installing and Running Scorecard Locally

Before you embed Scorecard into your CI pipeline, it’s helpful to try it locally. The tool is distributed as a single binary for Linux, macOS, and Windows, and can also be installed via go install or Docker. Below is the most common installation path using the pre‑compiled binary.

# 1️⃣ Download the latest binary (replace VERSION with the current release)
curl -L -o scorecard https://github.com/ossf/scorecard/releases/download/v{{VERSION}}/scorecard_{{VERSION}}_linux_amd64
chmod +x scorecard
sudo mv scorecard /usr/local/bin/

# 2️⃣ Verify the installation
scorecard version

Once installed, point Scorecard at any public GitHub repo. The tool will automatically fetch the repository’s metadata and compute the scores.

# Example: Scan the popular Flask web framework
scorecard --repo=github.com/pallets/flask --format=json > flask-scorecard.json

The resulting flask-scorecard.json contains a top‑level score field, a checks array with individual results, and timestamps for when the scan was performed.
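An abridged sketch of what that report looks like (the values and reasons below are illustrative, and the real output also carries per‑check details and documentation links):

```json
{
  "date": "2026-03-24",
  "repo": { "name": "github.com/pallets/flask" },
  "score": 7.2,
  "checks": [
    { "name": "Security-Policy", "score": 10, "reason": "security policy file detected" },
    { "name": "Branch-Protection", "score": 6, "reason": "branch protection is not maximal" }
  ]
}
```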

Integrating Scorecard into CI/CD Pipelines

Automating Scorecard in your CI pipeline ensures that every pull request is evaluated against the same security baseline. Most teams use GitHub Actions, GitLab CI, or Jenkins. Below is a minimal GitHub Actions workflow that runs Scorecard on pushes to main and on pull requests, posting a summary comment on each PR.

name: Security Scorecard

on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  scorecard:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install Scorecard
        run: |
          curl -L -o scorecard https://github.com/ossf/scorecard/releases/download/v2.3.0/scorecard_2.3.0_linux_amd64
          chmod +x scorecard
          sudo mv scorecard /usr/local/bin/

      - name: Run Scorecard
        id: run-scorecard
        env:
          # The Scorecard CLI needs a GitHub token to query the API.
          GITHUB_AUTH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          scorecard --repo=github.com/${{ github.repository }} --format=json > scorecard.json
          echo "score=$(jq .score scorecard.json)" >> $GITHUB_OUTPUT

      - name: Post comment
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const report = JSON.parse(fs.readFileSync('scorecard.json', 'utf8'));
            const summary = report.checks.map(c => `- **${c.name}**: ${c.score}/10`).join('\n');
            github.rest.issues.createComment({
              issue_number: context.payload.pull_request.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## OpenSSF Scorecard Summary\nOverall score: **${report.score}/10**\n${summary}`
            });

This workflow does four things: it checks out the code, installs the binary, runs the scan, and finally posts a neatly formatted comment on the PR. Teams can extend the script to fail the build if the overall score drops below a threshold, turning security into a gate rather than an afterthought.
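One way to sketch such a gate step with jq (the threshold of 7 is arbitrary, and the sample report below stands in for the file the scan step would produce in CI):

```shell
# Sample report standing in for the scan step's output.
cat > scorecard.json <<'EOF'
{"score": 8.2, "checks": []}
EOF

THRESHOLD=7
# jq -e exits non-zero when the expression is false, which fails the CI job.
if jq -e --argjson min "$THRESHOLD" '.score >= $min' scorecard.json > /dev/null; then
  echo "score-gate: pass"
else
  echo "score-gate: fail (score below $THRESHOLD)"
  exit 1
fi
```

Because the gate is just an exit code, it works unchanged in GitHub Actions, GitLab CI, or Jenkins.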

Parsing Scorecard Results Programmatically

While the JSON output is human‑readable, most organizations prefer to ingest the data into observability platforms such as Grafana or Splunk, or into custom dashboards. Below is a concise Python snippet that extracts the overall score and the list of failing checks, then pushes the metrics to a Prometheus Pushgateway.

import json
import requests

# Load the JSON report generated by Scorecard
with open('scorecard.json') as f:
    report = json.load(f)

overall_score = report['score']
failing_checks = [c['name'] for c in report['checks'] if c['score'] < 10]

# Prepare Prometheus metrics payload
payload = f"""# TYPE open_source_score gauge
open_source_score {overall_score}
# TYPE open_source_failing_checks gauge
open_source_failing_checks {len(failing_checks)}
"""

# Push to Pushgateway (replace with your endpoint)
pushgateway_url = "http://pushgateway.example.com/metrics/job/scorecard"
response = requests.post(pushgateway_url, data=payload, timeout=10)
response.raise_for_status()
print("Metrics pushed successfully")

By feeding these metrics into a time‑series database, you can track security posture trends across dozens of repositories, set alerts for sudden drops, and even correlate scores with incident data.
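For instance, an alerting rule on the open_source_score gauge pushed above might look like this (the rule name and threshold are placeholders to adapt to your own policy):

```yaml
groups:
  - name: scorecard-alerts
    rules:
      - alert: ScorecardScoreLow
        expr: open_source_score < 7
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "OpenSSF Scorecard overall score dropped below 7"
```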

Interpreting the Scorecard Report

The raw numbers are only useful when you understand what each check represents. Below is a quick cheat sheet for the most common checks and what a low score typically means.

  • Binary Artifacts (0‑10) – A low score means compiled binaries are checked into the repository, where they cannot be audited or rebuilt from source. Generate binaries in CI instead, sign them with gpg, and publish them as GitHub releases or to an artifact repository.
  • Branch Protection (0‑10) – If this is missing, contributors can push directly to main. Enable required pull‑request reviews and status checks in your repository settings.
  • Dependency Update (0‑10) – A score of 0 usually means no automated update tooling is configured, so vulnerable dependencies can linger for weeks. Automate updates with tools like Dependabot or Renovate.
  • Signed Commits (0‑10) – Unsigned commits make it harder to verify author identity. Encourage developers to configure git config --global user.signingkey and enforce commit.gpgsign.

When you see a 5, it usually means the artifact exists but is not fully compliant (e.g., a security policy file is present but lacks a clear reporting address). In such cases, a quick edit to the markdown can bump the score to 10.
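A minimal SECURITY.md that typically satisfies the Security Policy check might look like this (the contact address and response window are placeholders):

```markdown
# Security Policy

## Reporting a Vulnerability

Please report suspected vulnerabilities privately to
security@example.com. We aim to acknowledge reports within
48 hours and will coordinate a fix and disclosure timeline
with you. Please do not open public issues for security bugs.
```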

Pro tip: Treat the Scorecard as a living document. Add a “Scorecard Review” item to your sprint retrospectives, and assign a rotating maintainer to keep the .github/scorecard.yml file up to date.

Real‑World Use Cases

1. Open‑Source Foundations – The Linux Foundation’s sigstore project uses Scorecard to certify that its downstream libraries meet a minimum security baseline before being listed in the official catalog. This automatic vetting reduces manual audits and speeds up onboarding of new contributors.

2. Enterprise Dependency Management – A large fintech company integrated Scorecard into its internal artifact proxy. Every third‑party library pulled from the proxy is first scanned; if the upstream repo scores below 7, the library is flagged for manual review, preventing vulnerable code from entering production.

3. Community‑Driven Projects – The popular requests library added a Scorecard badge to its README. The badge updates on each push, giving users instant visibility into the project’s security health and encouraging contributors to fix low‑scoring checks.
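Adding such a badge is a one‑line README change; the snippet below uses the public Scorecard API’s badge endpoint (substitute your own owner and repo, and note that the API hostname has changed across Scorecard versions, so check the current docs):

```markdown
[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/OWNER/REPO/badge)](https://securityscorecards.dev/viewer/?uri=github.com/OWNER/REPO)
```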

Advanced Configuration with .github/scorecard.yml

The default Scorecard run is opinionated, but you can tailor it to your organization’s risk model. Create a .github/scorecard.yml at the repository root and specify overrides. Below is an example that disables the “Token Permissions” check (useful for repositories that deliberately expose public tokens for testing) and raises the passing threshold to 8.

checks:
  token-permissions:
    enabled: false
  binary-artifacts:
    minimumScore: 8
  branch-protection:
    requiredReviewers: 2
  ci-tests:
    required: true

When Scorecard runs, it reads this file and applies the custom logic, ensuring that the generated score aligns with your internal policies.

Common Pitfalls and How to Avoid Them

Pitfall 1: Ignoring the “0‑Score” Checks – New projects often get a 0 for “Signed Commits” because developers forget to configure GPG. Fix this early by enabling commit.gpgsign globally, so every commit is signed without anyone having to remember git commit -S.
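Enabling signing takes two git settings (the key ID below is a placeholder; list your own with gpg --list-secret-keys --keyid-format=long):

```shell
# Tell git which GPG key to sign with (placeholder key ID).
git config --global user.signingkey 3AA5C34371567BD2
# Sign every commit by default, so no -S flag is needed.
git config --global commit.gpgsign true
```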

Pitfall 2: Treating the Score as a One‑Time Audit – Security is continuous. Schedule a weekly cron job that runs Scorecard and pushes the results to your monitoring system. This way, regressions are caught immediately.
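In GitHub Actions, a scheduled run is a schedule trigger added to the workflow’s on: block (the cron expression here is just an example; times are UTC):

```yaml
on:
  schedule:
    # Run every Monday at 06:00 UTC.
    - cron: '0 6 * * 1'
```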

Pitfall 3: Over‑Customizing the Config – Disabling too many checks defeats the purpose of a standardized baseline. Keep customizations minimal and document the rationale for each change.

Pro tip: Use the --show-details flag when debugging a failing check. It prints the exact GitHub API calls and file paths that Scorecard inspected, saving you from hunting in the dark.

Extending Scorecard with Custom Checks

Scorecard’s architecture is plugin‑friendly. If your organization has a unique security requirement—say, verifying that every Dockerfile contains a USER nonroot directive—you can write a custom check in Go and contribute it back to the upstream project. The steps are straightforward:

  1. Fork the scorecard repository.
  2. Create a new module under checks/custom/ that implements the Check interface.
  3. Register the check in checks/registry.go.
  4. Submit a pull request for review.

Once merged, every downstream user automatically benefits from the new check without altering their CI configuration.

Best Practices Checklist

  • Run Scorecard locally before pushing code.
  • Integrate Scorecard into every CI pipeline and enforce a minimum overall score.
  • Publish a Scorecard badge in your README to signal transparency.
  • Review and update .github/scorecard.yml quarterly.
  • Automate metric collection for trend analysis.
  • Educate contributors about signed commits and security policies.

Conclusion

The OpenSSF Scorecard turns abstract security best practices into concrete, measurable data points. By adopting it early, you gain a continuous feedback loop that surfaces weaknesses before they become incidents. Whether you’re a solo maintainer, a bustling open‑source community, or an enterprise with thousands of dependencies, Scorecard scales to fit your workflow. Install it, bake it into your CI, monitor the trends, and watch your security posture climb—one check at a time.
