6 comments

  • bit_tea a day ago

    Kudos for the emphasis on solving the cross-team coordination problem for these "mega-workflows", as that's usually where the bespoke glue code starts to rot.

    How is this fundamentally different from low-code automation tools like n8n or Make? Curious what you think these tools lack/fall short of

      iamsamwood 5 hours ago

      Those tools are excellent for simple data movement, but drag-and-drop becomes unwieldy as logic density increases. This often leads to script bloat, where developers embed code inside nodes, creating the worst of both worlds. We keep process logic as code so you can use standard dev tools—like Git, Cursor, and CI/CD—while maintaining a high-quality integration DX.

      We also run on a Go and LevelDB runtime built for solid performance at high scale, and our focus on enterprise means we include all the compliance features and industry-specific integrations that aren't native to other platforms.
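
      To make "process logic as code" a bit more concrete, here's a rough Go sketch of the shape of it (illustrative only, with made-up connector and type names, not our actual SDK). The process reads as ordinary code you can diff, review, and unit-test, while the connector implementations are supplied by the platform:

        package main

        import (
            "errors"
            "fmt"
        )

        // Hypothetical connector interfaces (illustrative, not the real Luther SDK).
        // The platform would supply production implementations; tests can use fakes.
        type PolicyStore interface {
            Lookup(policyID string) (limit float64, err error)
        }

        type Payments interface {
            Schedule(amount float64) (paymentID string, err error)
        }

        // ProcessClaim is the process logic itself: plain, reviewable, testable Go.
        func ProcessClaim(policies PolicyStore, pay Payments, policyID string, amount float64) (string, error) {
            limit, err := policies.Lookup(policyID)
            if err != nil {
                return "", fmt.Errorf("policy lookup: %w", err)
            }
            if amount > limit {
                // Edge cases live in version-controlled code, not inside a drag-and-drop node.
                return "", errors.New("claim exceeds policy limit")
            }
            return pay.Schedule(amount)
        }

        // Tiny in-memory fakes so the sketch runs end to end.
        type fakePolicies struct{}

        func (fakePolicies) Lookup(string) (float64, error) { return 10000, nil }

        type fakePayments struct{}

        func (fakePayments) Schedule(amount float64) (string, error) {
            return fmt.Sprintf("pay-%.2f", amount), nil
        }

        func main() {
            id, err := ProcessClaim(fakePolicies{}, fakePayments{}, "POL-123", 2500)
            fmt.Println(id, err) // pay-2500.00 <nil>
        }

      The upshot is that Git history, code review, and CI cover the whole flow, rather than logic being scattered across node configs.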

  • alanJames34 a day ago

    How do you handle debugging when a workflow spans that many systems? Most workflow platforms, especially low-code/no-code, become painful to debug at scale.

      iamsamwood a day ago

      Architecturally, debugging at scale is simpler because we separate the integration glue from the core process logic, and the platform handles that glue for you. This focuses all the debugging effort on a single Common Operating Script (as code), which decouples your high-level business flow from the underlying integrations and infrastructure.

      We also provide native distributed tracing (OpenTelemetry) across the entire distributed stack, with hooks that let you trace individual functions. This allows you to follow a single transaction through the Connector Hub and the Common Operating Script, correlating errors and tracking performance across the various layers without manual log stitching.
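
      To give a sense of what a function-level hook looks like, this is plain OpenTelemetry instrumentation in Go (the generic OTel API, not anything Luther-specific; exporter/provider setup is omitted, so the global tracer is a no-op until one is installed):

        package main

        import (
            "context"
            "errors"
            "fmt"

            "go.opentelemetry.io/otel"
            "go.opentelemetry.io/otel/attribute"
            "go.opentelemetry.io/otel/codes"
            "go.opentelemetry.io/otel/trace"
        )

        var tracer = otel.Tracer("claims-processor")

        func validateClaim(ctx context.Context, claimID string) error {
            // Each traced function opens a child span, so one transaction can be
            // followed across services without manual log stitching.
            ctx, span := tracer.Start(ctx, "validateClaim",
                trace.WithAttributes(attribute.String("claim.id", claimID)))
            defer span.End()

            if err := checkLimits(ctx, claimID); err != nil {
                span.RecordError(err)
                span.SetStatus(codes.Error, err.Error())
                return err
            }
            return nil
        }

        // checkLimits stands in for a real business check.
        func checkLimits(_ context.Context, claimID string) error {
            if claimID == "" {
                return errors.New("missing claim id")
            }
            return nil
        }

        func main() {
            fmt.Println(validateClaim(context.Background(), "CLM-42"))
        }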

      Out of the box, we also include a centralized APM suite with Prometheus & Grafana (amp and amq) for real-time app and infra metrics, plus CloudWatch logging attached to all the services for centralized logs, to make this even easier.
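
      On the metrics side, application code just exposes standard Prometheus metrics. A generic client_golang sketch (metric and label names are made up for illustration, not our built-in metrics):

        package main

        import (
            "log"
            "net/http"

            "github.com/prometheus/client_golang/prometheus"
            "github.com/prometheus/client_golang/prometheus/promauto"
            "github.com/prometheus/client_golang/prometheus/promhttp"
        )

        // Any counter registered like this gets scraped by Prometheus and
        // can be graphed or alerted on in Grafana.
        var tasksCompleted = promauto.NewCounterVec(prometheus.CounterOpts{
            Name: "workflow_tasks_completed_total",
            Help: "Tasks completed, labeled by workflow and outcome.",
        }, []string{"workflow", "outcome"})

        func main() {
            tasksCompleted.WithLabelValues("claims_intake", "success").Inc()

            // Expose /metrics for the Prometheus scraper.
            http.Handle("/metrics", promhttp.Handler())
            log.Fatal(http.ListenAndServe(":2112", nil))
        }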

  • db422 a day ago

    "Mega-workflow" is a bit vague. Is that a certain size/complexity of process? Sounds interesting but it needs to be able to scale.

      iamsamwood a day ago

      Fair point on the terminology. We generally define a "mega-workflow" by the scale of the logic and participants involved: typically processes spanning multiple teams, 10+ systems, and anywhere from 50 to thousands of individual tasks. We've seen this successfully scale to the largest enterprises (including Allianz & Citi).

      From what we've seen in enterprise environments, there is a clear progression where these types of processes start to fail:

      * Stage 1: Simple RPA for basic task repetition.

      * Stage 2: Low-code/no-code platforms for departmental workflows.

      * Stage 3: The Breaking Point: When the complexity hits a level where your engineers spend more time maintaining integrations to external systems, fixing broken glue code, and manually stitching together workflow systems than actually shipping features. This is exactly where Luther Enterprise shines.

      We’ve also found this approach works equally well for early-stage teams, especially in regulated environments such as insurance and banking. These founders need a backend fast, but they hit "mega-workflow" complexity on day one because of high participant counts, strict compliance rules, and a massive volume of validation logic for edge cases.

      You can check out our case studies here https://enterprise.luthersystems.com/product/case-studies where we deep-dive into the specifics.