Really impressive work, especially on mobile. The mmap + zero-copy, read-only approach feels like the right tradeoff for files at this scale.
Curious how it behaves with extremely wide objects or deep nesting — do index build time or memory pressure become the limiting factor?
Nice example of serious systems engineering in a place where it’s rarely done.
Thanks! Really appreciate it.
Deep nesting: The indexer enforces a 255-depth limit (and gives a clear error if exceeded). The depth is stored in a u8, and the cap doubles as a stack-overflow safety guard. Details on the Known Limitations page: https://giantjson.com/docs/known-limitations/
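The guard itself is nothing clever; conceptually it's just a counter that refuses to go past what a u8 can hold (simplified sketch, not the real indexer code):

```rust
const MAX_DEPTH: u8 = 255;

/// Conceptual depth guard: entering an object or array past 255 levels
/// is rejected with an error instead of risking deep recursion.
fn enter_container(depth: &mut u8) -> Result<(), String> {
    if *depth == MAX_DEPTH {
        return Err(format!("JSON nested deeper than {MAX_DEPTH} levels is not supported"));
    }
    *depth += 1;
    Ok(())
}
```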
Wide objects / long lines: This was actually the harder problem. In Text Mode, extremely long lines (especially without spaces, like minified JSON or base64 blobs) caused serious issues with Android's text layout engine. I ended up detecting those early and truncating at ~5KB for display.
In Browser Mode, cards truncate values aggressively (100 chars collapsed, 1000 chars expanded), but the full value is still available for copy-to-clipboard operations. I also tried to make truncation "useful" by sniffing for magic bytes—if it looks like base64-encoded data, it shows a badge with the detected format (PNG, PDF, etc.) and lets you extract/download it.
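For the curious, the sniffing doesn't need the whole value: decoding just the first few dozen base64 characters is enough to check well-known signatures. Roughly like this (illustrative sketch with the 'base64' crate, covering only a handful of the formats, and not the exact production code):

```rust
use base64::{engine::general_purpose::STANDARD_NO_PAD, Engine};

/// Decode only the head of a candidate base64 value and match well-known magic bytes.
/// Returns a label for the format badge, or None if nothing recognizable turns up.
fn sniff_base64_head(value: &str) -> Option<&'static str> {
    // 64 base64 chars decode to 48 bytes, more than enough for common signatures.
    let head = &value.as_bytes()[..value.len().min(64)];
    // Keep a whole number of 4-char groups so the partial decode stays valid.
    // (Real code would also handle padded and URL-safe variants.)
    let head = &head[..head.len() - head.len() % 4];
    let bytes = STANDARD_NO_PAD.decode(head).ok()?;
    detect_magic(&bytes)
}

/// A handful of signatures; the app itself knows ~50 formats.
fn detect_magic(bytes: &[u8]) -> Option<&'static str> {
    if bytes.starts_with(b"\x89PNG\r\n\x1a\n") {
        Some("PNG")
    } else if bytes.starts_with(b"%PDF-") {
        Some("PDF")
    } else if bytes.starts_with(&[0xFF, 0xD8, 0xFF]) {
        Some("JPEG")
    } else if bytes.starts_with(b"PK\x03\x04") {
        Some("ZIP")
    } else {
        None
    }
}
```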
Index build time & memory: These are definitely the limiting factors right now. The structural index grows linearly with node count (32 bytes per node stored on disk), and for minified JSON I also keep a sparse line index in memory. For big files, the initial indexing can take a minute; I'm not sure if that scares users away or if they expect it for a GB-sized file.
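To put the linear growth into numbers: at 32 bytes per node, a document with, say, 10 million nodes works out to roughly 320 MB of on-disk index (10 million is only an illustrative figure; real node counts depend entirely on the data's shape).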
I've been watching Play Console for ANRs/OOMs and so far just had 1-2 isolated cases that I could fix from the stack traces. But honestly, I'm still figuring out which direction to prioritize next—real-world usage patterns will tell me more than my synthetic tests did.
Hi HN, I'm the developer of Giant JSON Viewer.
I built this primarily as an engineering challenge. I was bored and didn't want to release just another generic utility app (there are already 1000s of identical JSON viewers on the Play Store). I wanted to build something unique that pushes the limits of what's possible on a phone.
Also, I hate ads. So there are no ads here.
Important: This is a viewer, not an editor. It treats the file as read-only to ensure safety and speed. Also, it strictly requires valid JSON syntax—it relies on precise structural indexing rather than fuzzy parsing.
The Tech Stack: I didn't invent anything new here; I just spent a lot of time trial-and-erroring my way to a solution that works within mobile constraints. I patched together a native Rust layer (via JNI) using some powerful existing crates:
* Zero-Copy Access: 'memmap2' maps the source file. My custom index format is theoretically capable of addressing files up to 1TB (limited by the packed 40-bit offsets), but in practice, I've "only" tested it on files up to 2.5GB. Why? Honestly, I was too lazy to wait for massive indexes to rebuild every time I deployed a debug build.
* SIMD Scanning: 'memchr' is used heavily for lexical scanning.
* Parallelism: 'rayon' parallelizes the heavy background work across cores and keeps it off the UI thread.
* Efficient Indexing: The custom structural index packs node metadata efficiently (roughly 32 bytes per node). This index is built once, cached on disk, and then memory-mapped. That lets the app navigate a massive tree with near-zero Java heap usage: navigation is just jumping to offsets in the mapped index. (A rough sketch of how this fits together with the mmap and memchr pieces follows this list.)
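To make the bullets above a bit more concrete, here is a stripped-down sketch of the general shape: memory-map the file, let memchr jump between structural bytes, and pack each node's 40-bit offset into a fixed-size entry. Only the 40-bit offsets and the 32-byte entry size are real; the rest of the field layout is made up for illustration, and a real scanner also has to skip string contents and handle closing brackets.

```rust
use memchr::memchr3_iter;
use memmap2::Mmap;
use std::fs::File;

/// One index entry, hand-packed into 32 bytes per node.
#[repr(C)]
struct NodeEntry {
    offset_lo: u32,  // lower 32 bits of the node's byte offset in the mapped file
    offset_hi: u8,   // upper 8 bits: 40-bit offsets, i.e. files up to 1 TB
    depth: u8,       // nesting depth, hence the 255-level cap
    kind: u8,        // the structural byte found at the offset ('{', '[', '"')
    _rest: [u8; 25], // parent links, child counts, value spans... elided here
}

/// Split a byte offset into the packed 40-bit representation.
fn pack_offset(offset: u64) -> (u32, u8) {
    assert!(offset < (1u64 << 40), "beyond the 40-bit address space (1 TB)");
    ((offset & 0xFFFF_FFFF) as u32, (offset >> 32) as u8)
}

/// Sketch of an index pass: mmap the file and let memchr jump between
/// structural characters with SIMD.
fn build_index(path: &str) -> std::io::Result<Vec<NodeEntry>> {
    let file = File::open(path)?;
    // Zero-copy: the OS pages the file in on demand; nothing lands on the Java heap.
    let map = unsafe { Mmap::map(&file)? };

    let mut nodes = Vec::new();
    let mut depth: u8 = 0;
    for pos in memchr3_iter(b'{', b'[', b'"', &map) {
        let (offset_lo, offset_hi) = pack_offset(pos as u64);
        nodes.push(NodeEntry { offset_lo, offset_hi, depth, kind: map[pos], _rest: [0; 25] });
        if map[pos] != b'"' {
            depth = depth.saturating_add(1); // the real indexer errors out at 255 instead
        }
    }
    Ok(nodes)
}
```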
UI Experiments (The "Sandbox"): Since this was a personal playground for me, I experimented with some non-standard UI ideas:
* Visual Query Builder: Instead of embedding slow query logic, I built a visual query builder backed by a custom multi-pass SIMD search.
* 5KB Rendering Window: The UI only parses/highlights the chunk visible in the viewport, which prevents freezes on massive single lines (a small sketch of this idea follows the list).
* Base64 Extraction: The Browser Mode can automatically detect and extract encoded files from within JSON string values. It supports ~50 formats, including PNG, JPEG, GIF, WebP, BMP, TIFF, ICO, SVG, HEIC, MP4, PDF, RTF, ZIP, PSD, RAR, 7Z, GZIP, TAR, MP3, OGG, FLAC, WAV, WebM, TTF, OTF...
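On the rendering window mentioned above: the whole trick is to hand the UI a bounded, UTF-8-safe slice around the viewport instead of the entire line. A minimal sketch of that idea (hypothetical helper, not the app's actual code):

```rust
const WINDOW: usize = 5 * 1024; // ~5 KB of text handed to the UI at a time

/// Return at most WINDOW bytes of text around `offset`, snapped to UTF-8
/// character boundaries so syntax highlighting never splits a code point.
/// The rest of a multi-megabyte line simply never reaches the layout engine.
fn viewport_window(text: &str, offset: usize) -> &str {
    let offset = offset.min(text.len());
    let mut start = offset.saturating_sub(WINDOW / 2);
    let mut end = (start + WINDOW).min(text.len());
    while start > 0 && !text.is_char_boundary(start) {
        start -= 1;
    }
    while end < text.len() && !text.is_char_boundary(end) {
        end += 1;
    }
    &text[start..end]
}
```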
A Note on Limits: Before you try to load a file nested two million levels deep: please check the Known Limitations page (https://giantjson.com/docs/known-limitations/). I've documented the architectural boundaries (like the 255 nesting depth limit to prevent stack overflows, and the 1TB max file size) so you know exactly what to expect.
Why I'm Posting: I'm honestly not sure if there's a mass-market need for this, or if I'm the only one who thinks it's cool.
I'd love to hear if any of you actually have a workflow where inspecting massive JSON files on a phone is useful (e.g., field ops, emergencies, game modding). Also, I'm curious what kind of "pathological" JSON structures you deal with—I've tried to handle the big ones (huge files, long lines), but real-world data is often surprising.
Play Store: https://play.google.com/store/apps/details?id=com.giantjsonv... Docs: https://giantjson.com/docs/