How do you get around the rate limits sites usually have to stop scraping?
It's just one request per analysis, and Reddit allows far more than that. I did think about rate limits when I considered implementing a "deep mode".
Right now, when the user analyzes:
- a thread, it fetches all of its metadata: 1 request
- a subreddit, it fetches 100 posts: 1 request
- a search result, it fetches 100 posts: 1 request
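For illustration, here's a minimal TypeScript sketch of what one of those single listing requests could look like, assuming the public .json endpoint and a ?limit=100 parameter; the SubredditPost shape and function name are mine, not the extension's actual code.

```typescript
// Hypothetical sketch: one analysis of a subreddit = one GET to its public
// .json listing. The ?limit=100 parameter asks for up to 100 posts at once.
interface SubredditPost {
  title: string;
  score: number;
  num_comments: number;
  permalink: string;
}

async function fetchSubredditPosts(subreddit: string): Promise<SubredditPost[]> {
  const res = await fetch(`https://www.reddit.com/r/${subreddit}.json?limit=100`);
  if (!res.ok) throw new Error(`Reddit returned ${res.status}`);
  const listing = await res.json();
  // Reddit listings wrap each post as { kind, data }; keep only the data payload.
  return listing.data.children.map((child: { data: SubredditPost }) => child.data);
}
```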
Subreddit posts and search-result posts are just the posts themselves; the extension doesn't open each one to pull in all of its comments. I have thought about doing that, though. I don't remember Reddit's exact rate limits off-hand, but it should be possible to implement in the future. I'm also thinking of adding an "overheat bar" (like in games) to show the user how many requests remain per minute, along with other safety checks.
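Roughly, the "overheat bar" could be driven by a sliding one-minute window of request timestamps, something like the sketch below. The 60-requests-per-minute budget is a placeholder assumption, not a confirmed Reddit limit.

```typescript
// Sketch of an "overheat" meter: count requests in the last minute and map
// that to a 0..1 heat value the UI can render as a bar.
const BUDGET_PER_MINUTE = 60; // assumed budget; would be tuned to Reddit's real limits
const WINDOW_MS = 60_000;
const requestTimestamps: number[] = [];

function recordRequest(): void {
  requestTimestamps.push(Date.now());
}

function currentHeat(): number {
  const cutoff = Date.now() - WINDOW_MS;
  // Drop timestamps that have fallen outside the one-minute window.
  while (requestTimestamps.length > 0 && requestTimestamps[0] < cutoff) {
    requestTimestamps.shift();
  }
  // 0 = idle, 1 = budget exhausted (time to pause "deep mode" fetches).
  return Math.min(1, requestTimestamps.length / BUDGET_PER_MINUTE);
}
```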
Fetching the metadata is done on the user's behalf, not through the Reddit API. It's not scraping: the metadata is accessible to anyone just by adding .json to the URL.
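To make the "just add .json" point concrete, a content-script-style sketch might look like this; the helper names are illustrative, and the request runs from the user's own browser session against the page they are already viewing.

```typescript
// Hypothetical: turn the page URL the user is on into its public JSON twin
// and fetch it from their own session (no API key, no server-side scraper).
function toJsonUrl(pageUrl: string): string {
  const url = new URL(pageUrl);
  url.pathname = url.pathname.replace(/\/$/, "") + ".json";
  return url.toString();
}

async function fetchThreadMetadata(): Promise<unknown> {
  // e.g. .../r/foo/comments/abc123/some_thread/ -> .../some_thread.json
  const res = await fetch(toJsonUrl(window.location.href));
  return res.json();
}
```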