
  • rovmut 2 hours ago

    Hey HN, I’m the maker of LayoutCraft.

    I built this because I was frustrated with the current state of AI image generation. I found myself spending hours re-rolling prompts just to get the text spelled correctly or to fix a layout that looked "slightly off." The issue is that diffusion models generate pixels, which makes them inherently probabilistic—if you ask to change a single piece of text, the whole image is regenerated.

    LayoutCraft takes a different approach. Instead of guessing pixels, I built a rendering pipeline that uses LLMs to write HTML and CSS, which is then rendered via Playwright. This makes the output deterministic. It means the text is actual DOM text (always crisp), the layouts respect specific hex codes for brand consistency, and you can resize the same design state to different aspect ratios instantly without artifacts.
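    The core idea can be sketched roughly like this (a minimal, hypothetical example, not LayoutCraft's actual code — function names and the banner template are illustrative; the LLM step is replaced by a plain template):

    ```python
    # Sketch of an HTML-to-image pipeline: build deterministic HTML/CSS,
    # then render it with Playwright at an exact viewport size, so the
    # same "design state" can be re-rendered at any aspect ratio.

    def build_banner_html(headline: str, brand_hex: str = "#1a73e8") -> str:
        """The text is real DOM text and the color is an exact hex code."""
        return f"""<!DOCTYPE html>
    <html><body style="margin:0;display:flex;align-items:center;
    justify-content:center;height:100vh;background:{brand_hex}">
      <h1 style="color:#ffffff;font-family:sans-serif">{headline}</h1>
    </body></html>"""

    def render_png(html: str, width: int, height: int, out: str) -> None:
        """Screenshot the HTML at a given size (needs `playwright install`)."""
        from playwright.sync_api import sync_playwright  # lazy import
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page(viewport={"width": width, "height": height})
            page.set_content(html)
            page.screenshot(path=out)
            browser.close()

    html = build_banner_html("Launch Week", brand_hex="#ff5722")
    # Same design state, two aspect ratios — no regeneration, no artifacts:
    # render_png(html, 1200, 630, "og.png")       # Open Graph banner
    # render_png(html, 1080, 1080, "square.png")  # square social asset
    ```

    Because the source of truth is markup rather than pixels, editing one headline means changing one string and re-rendering, not re-rolling the whole image.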

    It’s definitely not for generating "art" or photorealistic scenes, but for structured marketing assets where layout consistency matters, I found this workflow much more reliable. I’d love to hear your feedback on the approach!