Kyourinrin

A web-based manga/webtoon aggregator and reader. Kyourinrin scrapes comics from multiple sources and serves them through a self-hosted web interface with reading progress tracking.

Features

  • Scrapes manga/webtoons from multiple aggregator sites (rawuwu, rawkuma, and more)
  • Self-hosted web interface for reading comics
  • Reading progress tracking per chapter
  • User authentication with argon2 password hashing
  • Automatic periodic scraping via background threads
  • Manual scraping via CLI or web interface refresh

Requirements

  • Python 3.10+
  • Docker and Docker Compose (recommended)
  • Or, for a manual setup: a Debian-based Linux system

Quick Start (Docker)

  1. Clone the repository:

    git clone <repository-url>
    cd Kyourinrin
    
  2. Create the secret key file:

    echo "your-secret-key-here" > .kyourinrin_secret
    
  3. Copy and edit the environment example:

    cp .env.example .env
    
  4. Start with Docker Compose:

    docker compose up -d
    
  5. Create a user account:

    docker compose exec kyourinrin /kyourinrin/venv/bin/python main.py admin -u
    
  6. Open http://localhost:16313 in your browser.
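The secret key in step 2 is a placeholder; any sufficiently random string works. One convenient way to generate a strong key, assuming openssl is available:

```shell
# Generate 32 random bytes as hex and write them to the secret file
openssl rand -hex 32 > .kyourinrin_secret
```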

Manual Setup

  1. Create a virtual environment and install dependencies:

    python3 -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt
    
  2. Create the secret key file:

    echo "your-secret-key-here" > .kyourinrin_secret
    
  3. Create a user:

    python main.py admin -u
    
  4. Start the server:

    python main.py server
    

Configuration

The application uses the following configuration:

Setting         Default      Description
Server address  0.0.0.0      Bind address for the web server
Server port     16313        Port for the web server
Secret key      (required)   Flask session secret, read from the .kyourinrin_secret file

CLI Usage

# Start the web server
python main.py server [--address 0.0.0.0] [--port 16313]

# Scrape all known comics
python main.py scraper --all

# Scrape from a specific site
python main.py scraper --website rawuwu

# Scrape a specific comic from a site
python main.py scraper --website rawuwu --comic comic-name

# Create a new user
python main.py admin --user

Cron Setup

To automatically scrape on a schedule, add to your crontab:

# Scrape all comics every 3 hours
0 */3 * * * /path/to/Kyourinrin/cron.sh
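The repository ships cron.sh for exactly this purpose; see the actual script for the real logic. A sketch of what such a wrapper typically contains (the path and log file name here are illustrative):

```shell
#!/bin/sh
# Illustrative wrapper: run the scraper from the project root using the
# project's own virtualenv, appending output to a log for inspection.
cd /path/to/Kyourinrin || exit 1
./venv/bin/python main.py scraper --all >> scrape.log 2>&1
```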

API Endpoints

Authentication

Method    Endpoint  Description
GET       /         Redirects to the home page (if logged in) or to the login page
GET/POST  /login    Login page; POST accepts JSON {"username": "...", "password": "..."}
GET       /logout   Logs out the current session
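For example, a session can be opened from the command line with curl; the username and password values here are placeholders:

```shell
# Log in and save the session cookie for subsequent requests
curl -c cookies.txt -X POST http://localhost:16313/login \
  -H "Content-Type: application/json" \
  -d '{"username": "alice", "password": "secret"}'
```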

Comics

Method  Endpoint                        Description
GET     /home                           Home page showing unfinished comics with unread chapters
GET     /comics                         List all comics in the collection
GET     /comic/<comic>                  View a specific comic's chapters
GET     /comic/<comic>/<chapter>        Read a specific chapter
GET     /image/<comic>/<chapter>/<num>  Serve a specific page image
GET     /thumb/<comic>                  Serve the comic's thumbnail

Management

Method  Endpoint                   Description
PUT     /read/<comic>/<chapter>    Mark a chapter as read
PUT     /add/<aggregator>/<comic>  Add a new comic from an aggregator
DELETE  /delete/<comic>            Delete a comic from the collection
GET     /refresh                   Trigger a manual scrape of all comics
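Assuming a session cookie saved by a prior /login request, the management endpoints can be driven with curl as well; the comic and chapter names here are placeholders:

```shell
# Mark chapter 12 of a comic as read
curl -b cookies.txt -X PUT http://localhost:16313/read/comic-name/12

# Remove a comic from the collection
curl -b cookies.txt -X DELETE http://localhost:16313/delete/comic-name
```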

Project Structure

Kyourinrin/
├── main.py              # CLI entry point
├── lib/                 # Scraper library
│   ├── scraper.py       # Core scraping logic and aggregator management
│   ├── config.py        # Scraper configuration
│   ├── json_utils.py    # Thread-safe JSON file access with locking
│   ├── rawuwu.py        # RawUwU scraper
│   ├── rawkuma.py       # Rawkuma scraper
│   └── ...              # Other scraper modules
├── server/              # Flask web server
│   ├── kyourinrin.py    # Routes and server logic
│   ├── config.py        # Server configuration
│   ├── static/          # CSS, JS, static assets
│   └── templates/       # Jinja2 HTML templates
├── tests/               # Test suite
│   ├── conftest.py      # Pytest fixtures
│   └── test_scraper.py  # Unit tests
├── data/                # Runtime data (created automatically)
│   ├── comics           # Comic metadata (JSON)
│   ├── users            # User credentials (JSON)
│   └── mapping          # Aggregator-to-comic mapping (JSON)
├── comics/              # Downloaded comic images (created automatically)
├── Dockerfile           # Container build definition
├── docker-compose.yml   # Container orchestration
├── requirements.txt     # Python dependencies
├── entrypoint.sh        # Docker container entry point
└── cron.sh              # Cron job script for scheduled scraping

Testing

Run the test suite:

pip install -r requirements.txt
pytest
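The standard pytest selectors apply for running a subset of the suite, for example:

```shell
pytest tests/test_scraper.py -v   # run one test file, verbose
pytest -k "scraper" -q            # filter tests by keyword
```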

License

This project is licensed under the MIT License. See LICENSE for details.