Hi HN,
I built TensorWall because I noticed how difficult it is to balance LLM freedom with production safety. It’s an open-source security layer designed to intercept, analyze, and filter prompts and responses in real time.
Key features:
PII Redaction: Automatically masks sensitive data before it reaches the model.
Prompt Injection Defense: Detects malicious patterns in user inputs.
Output Validation: Ensures the model stays within predefined constraints.
Framework Agnostic: Easy to integrate with existing Python stacks (a sketch of the pattern follows this list).
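To make the integration model concrete, here's a minimal standalone sketch of the interception pattern. To be clear, the class, regexes, and limits below are illustrative stand-ins, not TensorWall's actual API:

    # Standalone sketch of the interception pattern (illustrative, not the real API).
    import re
    from typing import Callable

    # Toy PII patterns; a real deployment needs far broader coverage.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    # Toy injection heuristic standing in for real detection logic.
    INJECTION_MARKERS = re.compile(
        r"ignore (all |any )?previous instructions", re.IGNORECASE
    )

    class GuardedClient:
        """Wraps any `prompt -> response` callable with pre/post filters."""

        def __init__(self, model: Callable[[str], str], max_output_chars: int = 4000):
            self.model = model
            self.max_output_chars = max_output_chars

        def redact(self, text: str) -> str:
            # Mask sensitive data before it ever reaches the model.
            for label, pattern in PII_PATTERNS.items():
                text = pattern.sub(f"[{label}]", text)
            return text

        def __call__(self, prompt: str) -> str:
            # Prompt injection defense: reject suspicious inputs up front.
            if INJECTION_MARKERS.search(prompt):
                raise ValueError("possible prompt injection detected")
            response = self.model(self.redact(prompt))
            # Output validation: enforce a predefined constraint on the response.
            if len(response) > self.max_output_chars:
                raise ValueError("response exceeds configured length limit")
            return response

    # Usage with a stub model standing in for any LLM client:
    def echo_model(prompt: str) -> str:
        return f"model saw: {prompt}"

    guarded = GuardedClient(echo_model)
    print(guarded("My email is alice@example.com, summarize my account."))

The point is the shape: one choke point where every prompt and response can be inspected before and after the model call, regardless of which framework sits around it.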
I’m looking for feedback on the architecture and what specific security "walls" you'd like to see next.
Check it out here: https://github.com/datallmhub/TensorWall