I'm the creator of Agent Skills. I've been building this because I realized managing and discovering agent skills for local agents was becoming a mess.
We just hit a milestone of indexing nearly 5,000 skills. It's been a fun engineering challenge dealing with deduplication, categorization, and handling different metadata formats from GitHub.
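Deduplication at this scale usually starts with name normalization before anything fancier. Here's a minimal sketch of that idea (the normalization rules are my assumption, not necessarily what the site does):

```python
import re

def normalize(name: str) -> str:
    """Collapse case, punctuation, and separators so near-identical
    skill names ('Web-Search', 'web_search') map to one key."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def dedupe(skills: list[str]) -> list[str]:
    """Keep the first occurrence of each normalized skill name."""
    seen: set[str] = set()
    out: list[str] = []
    for s in skills:
        key = normalize(s)
        if key not in seen:
            seen.add(key)
            out.append(s)
    return out
```

In practice you'd likely layer fuzzy matching or description similarity on top, since two differently named skills can still do the same thing.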
Honestly, building this has brought back the joy of creating for me. I just launched on other platforms and the feedback has been incredibly motivating. For example, a user recently pointed out that "permission boundaries" (read-only vs write access) are a bigger bottleneck than discovery, an insight I hadn't fully considered and a feature I'm now planning to build.
The Stack: It's built with Next.js, Sanity (for the structured content), and hosted on Vercel. I'm using some custom scripts to parse GitHub repos for skill.json / SKILL.md files.
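The metadata-parsing step could look something like this: a normalizer that accepts either a skill.json file or a SKILL.md with YAML-style frontmatter and emits one record. The field names (`name`, `description`) and the naive frontmatter parsing are assumptions for illustration, not the site's actual schema:

```python
import json

def parse_skill(filename: str, content: str) -> dict:
    """Normalize skill metadata from either skill.json or SKILL.md
    into a single record. Field names here are illustrative."""
    if filename.endswith("skill.json"):
        data = json.loads(content)
        return {"name": data.get("name"), "description": data.get("description")}
    # SKILL.md: naive parse of a '--- key: value ---' frontmatter block.
    meta: dict[str, str] = {}
    lines = content.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
    return {"name": meta.get("name"), "description": meta.get("description")}
```

A real crawler would also need to handle malformed JSON, nested frontmatter values, and repos that put the file somewhere unexpected, which is where most of the mess tends to be.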
I'd love to hear your thoughts on what makes an agent skill "trustworthy" to you. Is it the star count? The code visibility? Or something else?
Cool project, congrats! 5k seems like a lot. Do you have any intake filters? Did you actually test all of them? I'd assume many of them have overlapping functionality; is that something you plan to address?