I built a PDF text extraction library in Zig that's significantly faster than MuPDF for text extraction workloads.
~41K pages/sec peak throughput.
Key choices: memory-mapped I/O, SIMD string search, parallel page extraction, streaming output. Handles CID fonts, incremental updates, all common compression filters.
~5,000 lines, no dependencies, compiles in <2s.
Why it's fast:
- Memory-mapped file I/O (no read syscalls)
- Zero-copy parsing where possible
- SIMD-accelerated string search for finding PDF structures
- Parallel extraction across pages using Zig's thread pool
- Streaming output (no intermediate allocations for extracted text)
What it handles:
- XRef tables and streams (PDF 1.5+)
- Incremental PDF updates (/Prev chain)
- FlateDecode, ASCII85, LZW, RunLength decompression
- Font encodings: WinAnsi, MacRoman, ToUnicode CMap
- CID fonts (Type0, Identity-H/V, UTF-16BE with surrogate pairs)
What kind of performance are you seeing with/without SIMD enabled?
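For context, the "SIMD-accelerated string search" bullet refers to vectorized byte scanning. Here's a portable SWAR (SIMD-within-a-register) sketch of the idea - scan 8 bytes per iteration for the needle's first byte with a zero-byte bit trick, then confirm candidates with memcmp. This is a generic illustration of the technique, not zpdf's actual code; `swar_find` is a made-up name:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Find `needle` in `hay` by scanning 8 bytes per step for the needle's
 * first byte (SWAR zero-byte trick), confirming hits with memcmp. */
static const char *swar_find(const char *hay, size_t n,
                             const char *needle, size_t m) {
    if (m == 0 || n < m) return NULL;
    const uint64_t ones = 0x0101010101010101ULL;
    const uint64_t highs = 0x8080808080808080ULL;
    uint64_t pat = ones * (uint8_t)needle[0];  /* first byte, broadcast */
    size_t i = 0;
    while (i + 8 <= n) {
        uint64_t w;
        memcpy(&w, hay + i, 8);
        uint64_t x = w ^ pat;                  /* zero byte where it matches */
        if ((x - ones) & ~x & highs) {
            /* at least one candidate in this word: confirm each one */
            for (size_t j = i; j < i + 8 && j + m <= n; j++)
                if (hay[j] == needle[0] && memcmp(hay + j, needle, m) == 0)
                    return hay + j;
        }
        i += 8;
    }
    for (; i + m <= n; i++)                    /* scalar tail */
        if (memcmp(hay + i, needle, m) == 0) return hay + i;
    return NULL;
}
```

Real SSE2/AVX2 versions do the same thing 16 or 32 bytes at a time with `_mm_cmpeq_epi8`/`_mm_movemask_epi8`, which is presumably what the `@Vector` types in Zig lower to.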
From https://github.com/Lulzx/zpdf/blob/main/src/main.zig it looks like the help text cites an unimplemented "-j" option to enable multiple threads.
There is a "--parallel" option, but that is only implemented for the "bench" command.
I've now made parallel extraction the default and added an option to set the number of threads.
I haven't tested without SIMD.
You've released quite a few projects lately, very impressive.
Are you using LLMs for parts of the coding?
What's your work flow when approaching a new project like this?
Claude Code.
> Are you using LLMs for parts of the coding?
I can't talk about the code, but the readme and commit messages are most likely LLM-generated.
And when you take into account that the first commit happened just three hours ago, it feels like the entire project has been vibe coded.
What's fast about mmap?
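Mostly that it replaces per-read syscalls and buffer copies: the whole file appears as one contiguous byte range, the kernel faults pages in on demand, and a sequential scan never copies bytes through a userspace buffer. A minimal POSIX sketch of the pattern (illustrative only, not zpdf's code; `count_obj` is a made-up helper):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Count occurrences of "obj" in a file via mmap: no read() calls,
 * no intermediate buffers; the kernel pages bytes in lazily. */
static long count_obj(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size < 3) { close(fd); return -1; }
    const char *data = mmap(NULL, (size_t)st.st_size, PROT_READ,
                            MAP_PRIVATE, fd, 0);
    close(fd);  /* the mapping stays valid after the fd is closed */
    if (data == MAP_FAILED) return -1;
    long n = 0;
    for (off_t i = 0; i + 3 <= st.st_size; i++)
        if (data[i] == 'o' && data[i + 1] == 'b' && data[i + 2] == 'j')
            n++;
    munmap((void *)data, (size_t)st.st_size);
    return n;
}
```

For a one-shot sequential pass the win over a well-buffered read loop is modest; the bigger wins are zero-copy slicing (object bodies can be pointers into the mapping) and random access into xref offsets without seek/read pairs.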
very nice, it'd be good to see a feature comparison as when I use mupdf it's not really just about speed, but about the level of support of all kinds of obscure pdf features, and good level of accuracy of the built-in algorithms for things like handling two-column pages, identifying paragraphs, etc.
the licensing is a huge blocker for using mupdf in non-OSS tools, so it's very nice to see this is MIT
python bindings would be good too
added a comparison, will improve further. https://github.com/Lulzx/zpdf?tab=readme-ov-file#comparison-...
also, added python bindings.
Now we just need Python bindings so I can use it in my trash language of choice.
added python bindings!
excellent stuff, what makes zig so fast?
It makes your development workflow smooth enough that you have the time and energy to do stuff like all the bullet points listed in https://news.ycombinator.com/item?id=46437289
Not being slow - Zig programs compile straight to machine code, they aren't interpreted, and the compiler has aggressive, opinionated optimizations baked in by default, so it can even be faster than compiled C under default build settings.
Contrast that with Python, which is interpreted, has a clunky runtime, minimal optimizations, and all sorts of design choices that add up to slow, redundant performance.
The price of that performance is fewer safety checks, less redundancy, and a much bigger blast radius when things go wrong.
A good compromise is LuaJIT - you get some of the same aggressive optimizations in a dynamic language: near-C (sometimes better-than-C) performance with scripting-language convenience, plus access to low-level things that can explode just as spectacularly as in Zig or C. And it's a beautiful language besides.
will add this to the list - learning new languages is less of a barrier now with LLMs
- First commit: 3 hours ago.
- Commit messages: LLM-generated.
- README: LLM-generated.
I'm not convinced that projects vibe coded over the evening deserve the HN front page…
Edit: and of course the author's blog is also full of AI slop…
2026 hasn't even started I already hate it.
Wait, but why?
If it's really better than what we had before, what does it matter how it was made? It's literally hacked together with the tools of the day (LLMs); isn't that the very hacker ethos - patching stuff together so it works in a new and useful way?
5x speed improvements on PDF text extraction might be great for applications I'm not aware of; I wouldn't dismiss it out of hand just because the author used $robot to write the code.
Presumably the thought to build the thing in the first place, and to decide which features to add and not add, mattered more than how the code was generated?