6 comments

  • elevenapril 2 hours ago

    Hi HN,

    I built SkillRisk because I was terrified of giving my AI agents shell_exec or broad API access without checking them first.

    It is a free security analyzer strictly for AI Agent Skills (Tools).

    The Problem: We define skills in JSON/YAML for Claude/OpenAI, often copy-pasting code that grants excessive permissions (wildcard file access, dangerous evals, etc.).

    The Solution: SkillRisk parses these definitions and runs static analysis rules to catch:

    - Privilege Escalation: detects loosely scoped permissions.
    - Injection Risks: finds arguments vulnerable to command injection.
    - Data Leaks: checks for hardcoded secrets in skill schemas.

    Paste your skill definition and get a report instantly; no login is required for the core scanner, and I linked directly to it so you can try it right away.

    Try it here: https://skillrisk.org/free-check
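
    To make those checks concrete, here's a minimal sketch of the kind of static rules involved. The skill-definition shape, rule wording, and secret patterns below are my own illustrative assumptions, not SkillRisk's actual implementation:

```python
import json
import re

# Rough patterns for two common hardcoded-secret formats (illustrative).
SECRET_RE = re.compile(r"sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}")

def scan_skill(definition: str) -> list[str]:
    """Run three toy static checks over a JSON skill definition."""
    skill = json.loads(definition)
    findings = []

    # Privilege escalation: wildcard / overly broad permission scopes.
    for perm in skill.get("permissions", []):
        if "*" in perm:
            findings.append(f"broad permission: {perm}")

    # Injection risk: shell metacharacters or raw template slots
    # appearing in a command string.
    command = skill.get("command", "")
    if any(tok in command for tok in ("$(", "`", "{{")):
        findings.append("possible command injection in command template")

    # Data leak: hardcoded secrets anywhere in the raw definition.
    if SECRET_RE.search(definition):
        findings.append("hardcoded secret in skill definition")

    return findings

example = json.dumps({
    "name": "shell_exec",
    "permissions": ["fs:read:*"],
    "command": "cat {{path}}",
})
print(scan_skill(example))
```

    A real analyzer would walk the parsed schema rather than string-match, but the rule structure is the same: each check maps a pattern in the definition to a named risk.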

    I'd love to hear how you handle security for your AI agents!

  • aghilmort 2 hours ago

    this is really great

    toss in test building skills

    macro linter skills

    Etc

      elevenapril 2 hours ago

      Thanks! The 'macro linter' framing is spot on: treating skill definitions with the same rigor as code is exactly the goal. Regarding 'test building': are you envisioning something that auto-generates adversarial inputs (like fuzzing) based on the schema, or more like scaffolding for unit tests to ensure the tool executes correctly? I’d love to dig into that use case.
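
      The schema-driven fuzzing idea could be sketched like this; the payload list and the JSON-Schema-style parameter spec are hypothetical, just to illustrate the direction:

```python
# Sketch: derive adversarial candidate inputs from a skill's parameter
# schema. Payloads and schema shape are illustrative assumptions.
INJECTION_PAYLOADS = ["$(id)", "`id`", "; rm -rf /", "../../etc/passwd"]

def adversarial_inputs(schema: dict) -> list[dict]:
    """Emit one candidate call per string parameter per payload."""
    cases = []
    for name, spec in schema.get("properties", {}).items():
        if spec.get("type") == "string":
            for payload in INJECTION_PAYLOADS:
                cases.append({name: payload})
    return cases

schema = {"properties": {"path": {"type": "string"}}}
print(adversarial_inputs(schema))
```

      Each generated case would then be run against the tool in a sandbox to see whether the payload escapes its intended scope.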

        aghilmort an hour ago

        all the above!

        Our team steers models using info theory; think error-correcting codes for LLMs in the Shannon sense. We do it in-context by interleaving codewords & content, via a semi-secret post-transformer model, etc.

        Simple example: we can get a model to gen vertically aligned text tables so all columns & borders align, etc. Leverages hypertokens to get the model to track what to put in each cell & why, plus a structured table schema & a tool-call trick.

        We view our tech as a linting cert in a certain precise sense. The catch is bridging semantic coherence. That’s most readily done using a similarly precise semantic rubric like yours.

        Why? Because of the general problem of things that nobody wants to do relative to their role, time, resources, etc.

        Test gen, refactor, design: any and all the things getting in the way of dev & layperson adoption. What layperson wants to write "hey, ok, so map-reduce this with 5 alt models in an MoE and get back to me"? What dev wants to laboriously sketch 67M SQL attacks as part of their prompt, etc.?

        Why? It’s the most direct way to solve the "why should I have to do this" problem & also to solve having the model do it reliably. This becomes esp. problematic for structured data & interfaces, which is our focus.

        You’re building exactly the sorts of structured rule sets desperately needed right now. Our stuff makes sure these sorts of skills get executed reliably.

        While we also do quite a bit on data & viz semantic tooling, there’s a big gap that what you’re doing fills: semantic code linting of all shapes & sizes. Just reading code and suggesting key fuzz spots or fuzz categories missed by trad fuzzers. Macro semantic linting for forms. Etc.

          elevenapril an hour ago

          Wow, I have to admit, the "Shannon sense / error-correcting codes" angle is wild.

          I'm just here trying to stop people from accidentally letting agents rm -rf their servers with static rules, but your approach to runtime steering sounds like the real endgame for reliability.

          You nailed it on the "bridging semantic coherence" part. It feels like we're attacking the same beast from two ends: I'm writing the specs/contracts, and you're ensuring the execution actually honors them.

          Really appreciate the validation. Hearing "desperately needed" from someone working on that level of the stack makes my day.

            aghilmort 25 minutes ago

            yeah, one way to frame it: you have to have structural parity & semantic parity & a bridge to & from both, like balanced scales.

            We started with structure to help others solve semantics. Your approach is doing the same thing from the other direction!

            While it’s theoretically possible to do just one or the other in a nested way, it’s much easier to do a little bit of both, especially if you want anything approaching associative recall & reasoning. Akin to dynamically balancing volume between parts of a song, or continuously reprojecting into some frequency envelope, etc.