# Kyourinrin

A web-based manga/webtoon aggregator and reader. Kyourinrin scrapes comics from multiple sources and serves them through a self-hosted web interface with reading progress tracking.
## Features
- Scrapes manga/webtoons from multiple aggregator sites (rawuwu, rawkuma, and more)
- Self-hosted web interface for reading comics
- Reading progress tracking per chapter
- User authentication with argon2 password hashing
- Automatic periodic scraping via background threads
- Manual scraping via CLI or web interface refresh
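Per-chapter progress tracking boils down to remembering which chapters of each comic have been read. The sketch below is a hypothetical illustration of that data model, not the project's actual API (the names `ProgressTracker`, `mark_read`, and `unread_chapters` are invented for this example):

```python
class ProgressTracker:
    """Hypothetical per-chapter read tracking, keyed by comic name."""

    def __init__(self):
        self._read = {}  # comic -> set of chapters marked as read

    def mark_read(self, comic, chapter):
        self._read.setdefault(comic, set()).add(chapter)

    def unread_chapters(self, comic, all_chapters):
        # Chapters not yet marked read, in their original order
        read = self._read.get(comic, set())
        return [c for c in all_chapters if c not in read]


tracker = ProgressTracker()
tracker.mark_read("example-comic", "chapter-1")
print(tracker.unread_chapters("example-comic", ["chapter-1", "chapter-2"]))
# → ['chapter-2']
```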
## Requirements
- Python 3.10+
- Docker and Docker Compose (recommended)
- Or: a Debian-based Linux system for manual setup
## Quick Start (Docker)

1. Clone the repository:

   ```sh
   git clone <repository-url>
   cd Kyourinrin
   ```

2. Create the secret key file:

   ```sh
   echo "your-secret-key-here" > .kyourinrin_secret
   ```

3. Copy and edit the environment example:

   ```sh
   cp .env.example .env
   ```

4. Start with Docker Compose:

   ```sh
   docker compose up -d
   ```

5. Create a user account:

   ```sh
   docker compose exec kyourinrin /kyourinrin/venv/bin/python main.py admin -u
   ```

6. Open http://localhost:16313 in your browser.
## Manual Setup

1. Create a virtual environment and install dependencies:

   ```sh
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   ```

2. Create the secret key file:

   ```sh
   echo "your-secret-key-here" > .kyourinrin_secret
   ```

3. Create a user:

   ```sh
   python main.py admin -u
   ```

4. Start the server:

   ```sh
   python main.py server
   ```
## Configuration

The application uses the following configuration:

| Setting | Default | Description |
|---|---|---|
| Server address | `0.0.0.0` | Bind address for the web server |
| Server port | `16313` | Port for the web server |
| Secret key | (required) | Flask session secret, read from `.kyourinrin_secret` |
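The secret key lives in a one-line file that the server reads at startup. A minimal sketch of that loading step, demonstrated against a throwaway file so it runs anywhere (the real loading logic in `server/config.py` may differ, and `load_secret` is an invented name):

```python
import os
import tempfile
from pathlib import Path


def load_secret(path):
    """Read the Flask session secret from a one-line key file."""
    secret = Path(path).read_text().strip()
    if not secret:
        raise ValueError(f"{path} is empty; put a random secret key in it")
    return secret


# Demonstrate against a temporary file (the real file is .kyourinrin_secret)
with tempfile.TemporaryDirectory() as tmp:
    keyfile = os.path.join(tmp, ".kyourinrin_secret")
    Path(keyfile).write_text("your-secret-key-here\n")
    print(load_secret(keyfile))
# → your-secret-key-here
```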
## CLI Usage

```sh
# Start the web server
python main.py server [--address 0.0.0.0] [--port 16313]

# Scrape all known comics
python main.py scraper --all

# Scrape from a specific site
python main.py scraper --website rawuwu

# Scrape a specific comic from a site
python main.py scraper --website rawuwu --comic comic-name

# Create a new user
python main.py admin --user
```
## Cron Setup

To scrape automatically on a schedule, add this to your crontab:

```sh
# Scrape all comics every 3 hours
0 */3 * * * /path/to/Kyourinrin/cron.sh
```
## API Endpoints

### Authentication

| Method | Endpoint | Description |
|---|---|---|
| GET | `/` | Redirects to home (if logged in) or the login page |
| GET/POST | `/login` | Login page; POST accepts JSON `{"username": "...", "password": "..."}` |
| GET | `/logout` | Logs out the current session |
### Comics

| Method | Endpoint | Description |
|---|---|---|
| GET | `/home` | Home page showing unfinished comics with unread chapters |
| GET | `/comics` | List all comics in the collection |
| GET | `/comic/<comic>` | View a specific comic's chapters |
| GET | `/comic/<comic>/<chapter>` | Read a specific chapter |
| GET | `/image/<comic>/<chapter>/<num>` | Serve a specific page image |
| GET | `/thumb/<comic>` | Serve a comic's thumbnail |
### Management

| Method | Endpoint | Description |
|---|---|---|
| PUT | `/read/<comic>/<chapter>` | Mark a chapter as read |
| PUT | `/add/<aggregator>/<comic>` | Add a new comic from an aggregator |
| DELETE | `/delete/<comic>` | Delete a comic from the collection |
| GET | `/refresh` | Trigger a manual scrape of all comics |
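These endpoints can also be driven from a script. The sketch below is a minimal standard-library illustration of the `PUT /read/...` call, assuming the default port; the comic and chapter names are placeholders, and in practice you would also need the session cookie obtained from `/login`. It degrades gracefully when the server isn't running:

```python
import urllib.request

BASE = "http://localhost:16313"  # default server port from the table above


def mark_read(comic, chapter):
    """PUT /read/<comic>/<chapter> — mark a chapter as read."""
    req = urllib.request.Request(f"{BASE}/read/{comic}/{chapter}", method="PUT")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    except OSError:
        return None  # server unreachable, or the request was rejected


status = mark_read("example-comic", "chapter-1")
print("server responded" if status else "server not reachable")
```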
## Project Structure

```
Kyourinrin/
├── main.py               # CLI entry point
├── lib/                  # Scraper library
│   ├── scraper.py        # Core scraping logic and aggregator management
│   ├── config.py         # Scraper configuration
│   ├── json_utils.py     # Thread-safe JSON file access with locking
│   ├── rawuwu.py         # RawUwU scraper
│   ├── rawkuma.py        # Rawkuma scraper
│   └── ...               # Other scraper modules
├── server/               # Flask web server
│   ├── kyourinrin.py     # Routes and server logic
│   ├── config.py         # Server configuration
│   ├── static/           # CSS, JS, static assets
│   └── templates/        # Jinja2 HTML templates
├── tests/                # Test suite
│   ├── conftest.py       # Pytest fixtures
│   └── test_scraper.py   # Unit tests
├── data/                 # Runtime data (created automatically)
│   ├── comics            # Comic metadata (JSON)
│   ├── users             # User credentials (JSON)
│   └── mapping           # Aggregator-to-comic mapping (JSON)
├── comics/               # Downloaded comic images (created automatically)
├── Dockerfile            # Container build definition
├── docker-compose.yml    # Container orchestration
├── requirements.txt      # Python dependencies
├── entrypoint.sh         # Docker container entry point
└── cron.sh               # Cron job script for scheduled scraping
```
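Because the background scraper threads and the web server share the JSON files under `data/`, access to them must be serialized, which is the job of `lib/json_utils.py`. The sketch below illustrates the idea with a process-local `threading.Lock`; the project's actual implementation may use file locking or a different scheme, and `read_json`/`update_json` are invented names:

```python
import json
import os
import tempfile
import threading
from pathlib import Path

_lock = threading.Lock()


def read_json(path, default=None):
    """Read a JSON file under the lock, returning `default` if it is missing."""
    with _lock:
        p = Path(path)
        if not p.exists():
            return default
        return json.loads(p.read_text())


def update_json(path, updates):
    """Read-modify-write a JSON dict atomically with respect to other threads."""
    with _lock:
        p = Path(path)
        data = json.loads(p.read_text()) if p.exists() else {}
        data.update(updates)
        p.write_text(json.dumps(data, indent=2))
        return data


# Example: concurrent updates from scraper threads never interleave mid-write
demo = os.path.join(tempfile.gettempdir(), "kyourinrin_demo.json")
update_json(demo, {"comic-a": {"read": ["chapter-1"]}})
print(read_json(demo)["comic-a"]["read"])
# → ['chapter-1']
```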
## Testing

Run the test suite:

```sh
pip install -r requirements.txt
pytest
```
## License

This project is licensed under the MIT License. See `LICENSE` for details.