I still think metadata associated with packages (stars, download counts, and more) is easy to fake and not the best metric. OpenSSF Scorecard has some adoption among project maintainers, but hardly any adoption when it comes to security decisions actually being made based on it.
IMHO, code is the source of truth. It may seem infeasible to mass-analyse OSS code, but given the recent incidents (Shai-Hulud et al.), I think that's the way forward. Personally, I'm more bullish on adoption of SLSA or other artefact provenance technology. Until that happens, metadata will be misused by attackers.
Interesting project! Though it's usually the smaller, lesser-known projects that fall victim to OSS supply-chain attacks (such as the XZ attack).
Since this is a manual check, I worry that most users will only check their big, high-profile dependencies.
Who would you say are your target audience with this tool? OSS developers? Security researchers? Regular users? Corporate managers?
Thank you for the thoughtful comment! You raise an excellent point about smaller projects being overlooked.
That's actually one of the key problems this tool aims to address. While it's a manual check, the tool helps you examine ALL dependencies in your project - including those smaller, lesser-known libraries that often slip under the radar.
The dependency check option (`os4g check --show-dependencies`) is particularly valuable here: it often reveals that well-known, popular libraries actually depend on small, undermaintained projects. This visibility helps users discover these hidden but critical dependencies that might otherwise go unnoticed.
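For illustration, here's a rough standard-library sketch of that general idea - walking the transitive dependencies of an installed Python package. This is not os4g's actual implementation; the package name and the regex for requirement strings are just illustrative:

```python
# Rough sketch, NOT os4g's implementation: enumerate the transitive
# dependencies an installed package declares, using only the standard
# library. The package name below is just an example.
import re
from importlib.metadata import PackageNotFoundError, requires

def transitive_deps(package, seen=None):
    """Recursively collect dependency names declared by installed packages."""
    seen = set() if seen is None else seen
    try:
        declared = requires(package) or []
    except PackageNotFoundError:
        return seen  # dependency not installed locally; skip it
    for spec in declared:
        # Keep only the distribution name, dropping version pins and markers.
        name = re.split(r"[\s;<>=!~\[(]", spec, maxsplit=1)[0]
        if name and name.lower() not in seen:
            seen.add(name.lower())
            transitive_deps(name, seen)
    return seen

print(sorted(transitive_deps("requests")))  # surfaces idna, certifi, etc.
```

Even a popular top-level package tends to pull in exactly the kind of small, quietly maintained projects this tool is meant to surface.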
The target audience is primarily general users and developers who may not be deeply familiar with OSS sustainability issues, rather than OSS maintainers or security researchers who already understand these problems well. The goal is to raise awareness and help everyday developers understand the health status of their entire dependency tree, so they can make more informed decisions and potentially contribute back to these smaller projects that their software relies on.
Not trying to hate, but these projects come to mind:
https://scorecard.dev/
https://cloud.google.com/security/products/assured-open-sour...
Thank you for your comment!
The key difference is focus: OpenSSF Scorecard primarily evaluates security best practices (dependency updates, SAST, branch protection, etc.), while oss-sustain-guard focuses specifically on sustainability and maintenance health metrics.
For example, oss-sustain-guard checks:
- How quickly maintainers respond to issues
- Recent commit activity patterns
- Community engagement trends
- Maintainer burnout indicators
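To make one of these concrete, here is a minimal sketch (assuming the public GitHub REST API; not the tool's actual logic) of approximating a single signal, days since the last commit. The repository and threshold are illustrative:

```python
# Rough sketch (not oss-sustain-guard's actual logic): approximate one
# sustainability signal -- days since the last commit -- from the public
# GitHub REST API. Repo name and threshold are illustrative assumptions.
import json
import urllib.request
from datetime import datetime, timezone

def days_since_last_commit(owner, repo):
    """Return how many days ago the repo's default branch last received a commit."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1"
    with urllib.request.urlopen(url) as resp:
        latest = json.load(resp)[0]
    committed = datetime.fromisoformat(
        latest["commit"]["committer"]["date"].replace("Z", "+00:00")
    )
    return (datetime.now(timezone.utc) - committed).days

if __name__ == "__main__":
    age = days_since_last_commit("psf", "requests")  # illustrative repo
    print(f"Last commit was {age} days ago")
    if age > 180:  # arbitrary example threshold
        print("Warning: project may be going inactive")
```

The real checks combine several such signals, but the point is the same: they measure maintenance health rather than security posture.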
A project can have a perfect Scorecard security score but still be at risk if the sole maintainer is overwhelmed or going inactive - which is what we saw in cases like XZ or event-stream.
As for Google's Assured OSS, it's a curated list of vetted packages, which is valuable for organizations. However, oss-sustain-guard is designed to help individual developers assess ANY package in their dependency tree, including those smaller transitive dependencies that wouldn't appear on curated lists.
I see these tools as complementary rather than competing - security practices (Scorecard) + sustainability health (oss-sustain-guard) + vetted packages (Assured OSS) together give a more complete picture of dependency risk.