Research and Analysis of Multi-Site User Account Unification Based on OAuth 2.0, and of Cross-Platform Service Clusters

Published: 2026-04-09 16:26

Last edited: 2026-04-09 17:14

鲲


Purpose

The 鲲 Galgame project currently has two main websites, the KUN Galgame Forum and KUN Galgame Patch. As everyone knows, accounts on the two sites are not unified, which causes real trouble both for users and for site maintenance.

So we need to rebuild the entire account system so that the two sites share a unified account.

The plan is rather large and cannot be covered in a single article; I will only be able to finish this write-up after we ship the forum's remaining features.

Why swap the backend? Full-stack Node.js frameworks have some inherent problems: writing backend code in them is awkward and leaves little room to maneuver.

On top of that, Nitro has been genuinely bad for us, performance-wise (mainly build times and developer experience; it is pleasantly simple, but it cannot keep up once requirements get complex — the forum alone already has about 200 API endpoints). Stack that on the Cloudflare Tunnel we currently use, and Error 1033 shows up almost as soon as traffic rises, with no way to fix it.

Also, we plan to launch an App and a desktop client this year. An integrated Node.js backend won't do: it is inconvenient for exposing APIs to third-party callers and hard to scale horizontally.

Initially I thought an integrated backend would suffice. It did save a lot of effort, but now that the project has grown, the integrated backend can no longer carry the requirements.

Last year I published an article on this topic: Research on Building and Maintaining Large Web Service Clusters.

This big refactor is just a normal challenge in a product's technical evolution, not a verdict that integrated backends are bad. The main reason is that the integrated backend no longer keeps up with our needs; a secondary reason is simply that I enjoy tinkering and want to get hands-on with a new stack.

The initial design is:

One backend for the KUN Galgame Forum

One backend for KUN Galgame Patch

One backend for game-metadata queries and edits

One backend for the KUN Galgame OAuth system

The App and desktop client unify all of the above. Every backend exposes API endpoints, so a single API gateway handles cross-service routing; the App and desktop client then only need one small dedicated backend of their own. The end result looks roughly like this:

```text
                  ┌─────────────┐
Web (kungal)  ──→ │             │ ──→ kungal-api (Go Fiber)
Web (moyu)    ──→ │  API Gateway│ ──→ moyu-api (Go Fiber)
App           ──→ │  (Nginx/    │ ──→ gamewiki-api (Go Fiber)
Desktop       ──→ │   Traefik)  │ ──→ oauth-api (Go Fiber)
                  └─────────────┘
```
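The gateway's job is essentially longest-prefix routing of requests to per-service upstreams. A minimal sketch of that decision (the path prefixes below are assumptions for illustration, not the final routes):

```typescript
// Sketch of the API gateway's routing rule; the real gateway would be
// Nginx or Traefik, and these prefixes are hypothetical.
type Upstream = "kungal-api" | "moyu-api" | "gamewiki-api" | "oauth-api";

const routes: Array<[string, Upstream]> = [
  ["/api/v1/forum", "kungal-api"],
  ["/api/v1/patch", "moyu-api"],
  ["/api/v1/wiki", "gamewiki-api"],
  ["/api/v1/oauth", "oauth-api"],
];

// Longest matching prefix wins; null means the gateway returns 404.
function pickUpstream(path: string): Upstream | null {
  let best: [string, Upstream] | null = null;
  for (const [prefix, up] of routes) {
    if (path.startsWith(prefix) && (!best || prefix.length > best[0].length)) {
      best = [prefix, up];
    }
  }
  return best ? best[1] : null;
}
```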

In the end that makes roughly six or seven backends, six or seven frontends, and several databases.

Note: this article is really just a memo for myself. It is not finished yet; I will update it as far as the code has progressed, and if updates stop, it means the goal has been reached.

I have received all the suggestions everyone sent me and have added them to the site's to-do list page for now. Because the site is being completely rebuilt, we technically cannot fix any KUN Galgame Forum or KUN Galgame Patch issues or implement improvements at the moment. Once the refactor is done, I will address them first thing. The refactor will make the site far more capable, and afterwards we can add many more convenient features — please bear with us for a while.

Roadmap

A migration needs a roadmap. We are currently working through the following:

KUN Galgame Forum: Nuxt4 + Nitro -> Nuxt4 + Go Fiber (backend swap)

KUN Galgame Patch: Next.js + Node.js server -> Next.js / Nuxt + Go Fiber (backend swap; the frontend may also change, for consistency)

KUN Galgame OAuth system: Nuxt4 + Go Fiber

gamewiki (not started yet)

Concretely, this is the roadmap:

  1. Build the basic OAuth system
  2. Migrate all user accounts into the KUN Galgame OAuth system
  3. Rebuild the basic KUN Galgame Forum backend
  4. Connect KUN Galgame Forum accounts to the KUN Galgame OAuth system
  5. Rebuild the basic KUN Galgame Patch backend
  6. Connect KUN Galgame Patch accounts to the KUN Galgame OAuth system
  7. Rebuild the entire KUN Galgame Forum backend (current progress is here)
  8. Rebuild the entire KUN Galgame Patch backend
  9. Build the game wiki backend, for all sites to call
  10. Integration-test the KUN Galgame Forum, KUN Galgame Patch, KUN Galgame OAuth system, and gamewiki together
  11. Launch forum version 6.0.0 (the forum is currently on 5.x; at this point forum and patch-site accounts will be unified and go live, which is what enables everyone's new feature requests)
  12. Add advanced features such as game-file uploads to the KUN Galgame Forum
  13. Improve large patch-file uploads on KUN Galgame Patch
  14. Polish a series of rough edges and launch forum version 7.0.0
  15. Build the 鲲 Galgame / Kun Visual Novel App and desktop client

Note that until the new sites launch, users will not notice anything. When they do launch, I will post a new topic.

What follows is mostly my own notes. The process is complex enough that I might forget the steps even after writing the code, so I am writing them down.

Migration script execution order

There are three projects to keep in sync:

  • kun-oauth-admin

  • kun-galgame-nuxt4

  • kun-galgame-patch-next

These three projects also need three databases, and the migration scripts are heavily coupled, so the execution order matters: the scripts must be run in exactly this order, and any mistake can cause the migration to fail.

Create the kun_oauth_admin database

On a first run, you need to create the database for kun-oauth-admin; it is currently named kun_oauth_admin.

If this is the first time running the migration scripts (a fresh server), connect to the database and run:

```sql
create database kun_oauth_admin;
```

If you are re-running the scripts, run instead:

```sql
drop database kun_oauth_admin;
create database kun_oauth_admin;
```

Because the scripts are so complex, making them safely re-runnable is harder than simply tearing everything down and running it all again; this is also the direct reason we need downtime to update the site.

Deduplicate users with duplicate emails in the kungalgame database

Our Nuxt4 + Nitro forum (versions before 5.1.0) uses the kungalgame database.

But that version of the forum has a bug: it never enforced a case-insensitive unique constraint on user emails.

In other words, if someone used [email protected] and [email protected] (the same address with different letter casing), these were treated as two users, when logically they are one user; registration or an email change should have reported that the user already exists.

But due to an oversight in my code, email comparison was case-sensitive, so addresses differing only in case were stored as several distinct users.

This causes no problems on the old forum, but the new forum drops Nuxt's integrated Nitro backend in favor of Go Fiber + GORM, and to keep the new backend cleaner we chose to remove the duplicated users.

The email dedup script is already written; run it directly inside kun-galgame-nuxt4:

```shell
cd apps/api && go run check-dup-email
```

Migrate the kungalgame tables

The forum's database tables need some changes to play well with kun-oauth-admin and GORM.

The migration script is ready; still inside kun-galgame-nuxt4, run:

```shell
go run migrate
```

Create the kun_oauth_admin seed tables

The script is already in the project; just run the following from the kun-oauth-admin root:

```shell
cd apps/api && go run migrate
```

Migrate all users into the kun_oauth_admin database

The next step is migrating all users. The details of the migration:

# User Migration: Kungal + Moyu → KUN OAuth

This document describes the design, logic, and considerations behind the user migration script that consolidates users from two separate websites (KUN Galgame and MoYu Patch) into the centralized KUN OAuth system, while keeping all three databases' user IDs synchronized.

## Overview

**Script location:** `apps/api/cmd/migrate-users/main.go`

**What it does in one run:**

1. Reads all users from kungal and moyu source databases
2. Merges users with the same email into a single identity
3. Creates unified users in the OAuth target database with chronologically ordered IDs
4. Preserves legacy passwords for transparent migration on first login
5. Creates per-site metadata (roles, daily counters, etc.)
6. Migrates social relations (follows)
7. Maps site-level roles to global OAuth roles (admin, moderator)
8. **Remaps user IDs in both source databases** so all three databases share the same user IDs

After the script completes, both kungal and moyu can reference users by the same integer ID as the OAuth system, with zero additional migration work on the application side.

---

## Prerequisites

Before running the migration:

1. **Target database schema must exist:**
   ```bash
   cd apps/api
   go run ./cmd/migrate        # Creates tables and seeds sites + roles
   ```

2. **Source databases must be accessible** (kungal and moyu PostgreSQL instances)

3. **Source databases should have unique emails per user** — the script skips intra-database email duplicates (keeps the earlier registration). Deduplicate emails in source databases before running if possible.

4. **Backup all three databases** — the script modifies source databases (remaps user IDs). This is irreversible.

---

## Execution

```bash
cd apps/api

# Dry run first (no changes made)
go run ./cmd/migrate-users \
  --kungal-dsn="host=localhost port=5432 user=postgres password=xxx dbname=kungalgame sslmode=disable" \
  --moyu-dsn="host=localhost port=5432 user=postgres password=xxx dbname=kungalgame_patch sslmode=disable" \
  --dry-run

# Actual migration
go run ./cmd/migrate-users \
  --kungal-dsn="host=localhost port=5432 user=postgres password=xxx dbname=kungalgame sslmode=disable" \
  --moyu-dsn="host=localhost port=5432 user=postgres password=xxx dbname=kungalgame_patch sslmode=disable"
```

---

## Step-by-Step Logic

### Step 1: Fetch Source Data

Reads all users from both databases:
- `KungalUser` — maps to kungal's `user` table (columns: `id, name, email, password, avatar, bio, role, status, moemoepoint, ip, daily_check_in, daily_image_count, daily_toolset_upload_count, created, updated`)
- `MoyuUser` — maps to moyu's `user` table (similar structure with `daily_upload_size, last_login_time` instead of `daily_toolset_upload_count`)

### Step 2: Merge by Email

Users from both sites are merged into a unified list keyed by lowercase email:

- **Kungal users are inserted first.** If two kungal users share the same email (intra-database duplicate), only the first one (by iteration order = lowest ID = earliest registration) is kept. The duplicate is skipped and counted in `SkippedDuplicates`.
- **Moyu users are matched against the map.** If a moyu user's email already exists (from kungal), the two are merged:
  - Kungal takes priority for name, email, avatar, bio
  - Moemoepoints are summed
  - The earlier `CreatedAt` is used
  - Both source records are preserved for password migration and site data
- If a moyu user's email is unique, they become a moyu-only user.
- Intra-moyu email duplicates are also skipped (same logic as kungal).

### Step 3: Sort Chronologically

All merged users are sorted by `CreatedAt` ascending. This determines the final user ID assignment — **the earliest registered user gets ID 1, the next gets ID 2, and so on.** This preserves registration order across both sites.
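Step 3 can be sketched as follows (illustrative field names, not the script's actual Go structs):

```typescript
interface MergedUser { email: string; createdAt: number; id?: number }

// Sort by registration time ascending and hand out sequential IDs,
// starting after the highest ID already present in the target database.
function assignChronologicalIds(users: MergedUser[], maxExistingId = 0): MergedUser[] {
  const sorted = [...users].sort((a, b) => a.createdAt - b.createdAt);
  sorted.forEach((u, i) => { u.id = maxExistingId + 1 + i; });
  return sorted;
}
```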

### Step 4: Insert Users with Sequential IDs

For each merged user (in chronological order):

1. **Skip if email already exists in target database** (for idempotent re-runs)
2. **Deduplicate username** — if the name is already taken, append `_1`, `_2`, etc.
3. **Set legacy passwords:**
   - `kungal_password` — the original bcrypt hash from kungal (if the user came from kungal)
   - `moyu_password` — the original argon2id hash from moyu (if the user came from moyu)
   - `password` — set to NULL (will be populated on first successful legacy login)
4. **Explicitly set `user.ID`** — sequential starting from `max_existing_id + 1`
5. **Create within a transaction** that also inserts:
   - `user_site_data` records (one per source site, containing role, status, daily counters, and extra JSON metadata)
   - `user_migration` records (audit trail: source_db, source_user_id, merged_from)

The mapping `(source_db, source_user_id) → new_user_id` is stored in memory (`sourceToNewID`) for the remap step.

After all insertions, the PostgreSQL sequence is reset:
```sql
SELECT setval(pg_get_serial_sequence('users', 'id'), (SELECT COALESCE(MAX(id), 1) FROM users));
```
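The username dedup above (append `_1`, `_2`, … until the name is free) amounts to this sketch; the real script checks against the target database rather than an in-memory set:

```typescript
// Append _1, _2, … until the candidate name is free, then reserve it.
function dedupeUsername(name: string, taken: Set<string>): string {
  let candidate = name;
  for (let n = 1; taken.has(candidate); n++) candidate = `${name}_${n}`;
  taken.add(candidate);
  return candidate;
}
```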

### Step 5: Migrate Social Relations

Moyu's `user_follow_relation` records are migrated to the OAuth system's `user_follow` table. Both follower and following IDs are mapped through `sourceToNewID`. Records where either user wasn't migrated, or where follower equals following, are skipped.

Duplicate follow relations (from re-runs) are silently ignored.

### Step 6: Map Roles to Global OAuth Roles

Site-level roles from `user_site_data` are mapped to global OAuth roles:

| Site | Source Role Value | OAuth Role |
|------|------------------|------------|
| Kungal | 3 | admin |
| Kungal | 2 | moderator |
| Moyu | 4 | admin (super admin) |
| Moyu | 3 | moderator (admin) |

The highest privilege across all sites wins. Roles are inserted into the `user_roles` many-to-many join table with `ON CONFLICT DO NOTHING` for idempotency.
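The table above as code, with the "highest privilege wins" rule (a sketch using the numeric role values from the table; everything else defaults to an assumed plain "user" role):

```typescript
type OAuthRole = "user" | "moderator" | "admin";
const rank: Record<OAuthRole, number> = { user: 0, moderator: 1, admin: 2 };

// Map one site-level role value to a global OAuth role, per the table.
function mapSiteRole(site: "kungal" | "moyu", role: number): OAuthRole {
  if (site === "kungal") return role === 3 ? "admin" : role === 2 ? "moderator" : "user";
  return role === 4 ? "admin" : role === 3 ? "moderator" : "user";
}

// Across all of a user's sites, the highest privilege wins.
function globalRole(siteRoles: Array<["kungal" | "moyu", number]>): OAuthRole {
  return siteRoles
    .map(([s, r]) => mapSiteRole(s, r))
    .reduce((a, b) => (rank[a] >= rank[b] ? a : b), "user" as OAuthRole);
}
```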

### Step 7: Remap User IDs in Source Databases

This is the critical step that synchronizes user IDs across all three databases.

**Problem:** The OAuth target assigned new sequential IDs (1, 2, 3...) based on chronological order. The source databases still have their original IDs (potentially overlapping between kungal and moyu, and different from the new IDs). All business tables (galgame, topic, patch, comments, etc.) reference `user.id` via foreign keys.

**Solution: Two-pass remap with offset**

For each source database (kungal, then moyu):

1. **Discover existing tables** — query `pg_tables` to skip tables that don't exist yet (e.g., `oauth_account` before `prisma migrate` is run)

2. **Disable triggers** on all affected tables (disables FK constraint enforcement during the update)

3. **Create temporary mapping table** `_id_map(old_id, new_id)` populated from `sourceToNewID`

4. **Pass 1 — Offset all IDs to temporary range:**
   ```sql
   -- For every FK column in every business table:
   UPDATE "table" SET "column" = "table"."column" + 100000000
     FROM _id_map WHERE "table"."column" = _id_map.old_id;

   -- For user.id itself:
   UPDATE "user" SET id = id + 100000000
     FROM _id_map WHERE "user".id = _id_map.old_id;
   ```
   This moves ALL mapped user IDs to the 100M+ range, completely vacating the target ID space. No collisions are possible because the offset range (100M+) doesn't overlap with any real ID.

5. **Pass 2 — Set final IDs:**
   ```sql
   -- For every FK column:
   UPDATE "table" SET "column" = _id_map.new_id
     FROM _id_map WHERE "table"."column" = _id_map.old_id + 100000000;

   -- For user.id:
   UPDATE "user" SET id = _id_map.new_id
     FROM _id_map WHERE "user".id = _id_map.old_id + 100000000;
   ```

6. **Reset sequence** — update `user_id_seq` to current max ID

7. **Re-enable triggers** on all affected tables

The entire remap runs in a single transaction per source database. If anything fails, all changes are rolled back.

**Tables remapped in Kungal** (51 FK columns across ~30 tables):

```
chat_room_participant.user_id, chat_room_admin.user_id,
chat_message.sender_id, chat_message.receiver_id,
chat_message_read_by.user_id, chat_message_reaction.user_id,
doc_article.author_id,
galgame.user_id, galgame_rating.user_id, galgame_rating_like.user_id,
galgame_rating_comment.user_id, galgame_rating_comment.target_user_id,
galgame_comment.user_id, galgame_comment.target_user_id,
galgame_comment_like.user_id, galgame_history.user_id,
galgame_link.user_id, galgame_pr.user_id,
galgame_resource.user_id, galgame_resource_like.user_id,
galgame_toolset.user_id, galgame_toolset_contributor.user_id,
galgame_toolset_practicality.user_id, galgame_toolset_resource.user_id,
galgame_toolset_comment.user_id,
galgame_website.user_id, galgame_website_comment.user_id,
galgame_website_like.user_id, galgame_website_favorite.user_id,
message.sender_id, message.receiver_id,
system_message.user_id,
topic.user_id, topic_comment.user_id, topic_comment.target_user_id,
topic_comment_like.user_id, topic_poll.user_id, topic_poll_vote.user_id,
topic_reply.user_id, topic_reply_like.user_id, topic_reply_dislike.user_id,
topic_upvote.user_id, topic_like.user_id, topic_dislike.user_id,
topic_favorite.user_id,
todo.user_id, update_log.user_id, unmoe.user_id,
user_friend.user_id, user_friend.friend_id,
user_follow.follower_id, user_follow.followed_id,
oauth_account.user_id
```

**Tables remapped in Moyu** (18 FK columns across ~15 tables):

```
chat_member.user_id, chat_message.sender_id, chat_message.deleted_by_id,
chat_message_seen.user_id, chat_message_reaction.user_id,
patch.user_id, patch_resource.user_id, patch_comment.user_id,
admin_log.user_id,
user_follow_relation.follower_id, user_follow_relation.following_id,
user_message.sender_id, user_message.recipient_id,
user_patch_favorite_relation.user_id,
user_patch_contribute_relation.user_id,
user_patch_comment_like_relation.user_id,
user_patch_resource_like_relation.user_id,
oauth_account.user_id
```

---

## Password Migration Strategy

Users are migrated with NULL `password` fields. Legacy password hashes are preserved in separate columns:

| Column | Format | Source |
|--------|--------|--------|
| `password` | NULL initially, argon2id after migration | Set on first successful login |
| `kungal_password` | bcrypt hash | Kungal source database |
| `moyu_password` | Custom argon2id (`salt_hex:hash_hex`) | Moyu source database |

**Login flow** (implemented in `AuthService.Login`):

1. If `password` is set → verify with argon2id (new system)
2. Else if `kungal_password` is set → verify with bcrypt. On success: hash with argon2id, save to `password`, clear `kungal_password` and `moyu_password`
3. Else if `moyu_password` is set → verify with custom argon2id parser. On success: same migration as above
4. Else → error "password required" (user must reset via email)

This transparent migration happens once per user. After their first successful login, the legacy fields are cleared and they use the new password system going forward.
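A sketch of that fallback chain, with hash verification and hashing injected as plain functions — this is illustrative, not the actual `AuthService.Login` implementation:

```typescript
interface StoredUser {
  password: string | null;       // new argon2id hash, NULL until first login
  kungalPassword: string | null; // legacy bcrypt hash
  moyuPassword: string | null;   // legacy "salt_hex:hash_hex" argon2id
}

type Verify = (hash: string, plain: string) => boolean;

// Returns true on success; a successful legacy login migrates the hash
// into `password` and clears both legacy fields, exactly once per user.
function login(
  u: StoredUser, plain: string,
  verifyNew: Verify, verifyBcrypt: Verify, verifyMoyu: Verify,
  hashNew: (plain: string) => string,
): boolean {
  if (u.password) return verifyNew(u.password, plain);
  const legacyOk =
    (u.kungalPassword && verifyBcrypt(u.kungalPassword, plain)) ||
    (u.moyuPassword && verifyMoyu(u.moyuPassword, plain));
  if (!legacyOk) return false; // no usable hash -> caller reports "password required"
  u.password = hashNew(plain);
  u.kungalPassword = null;
  u.moyuPassword = null;
  return true;
}
```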

---

## Design Decisions

### Why chronological ID ordering?

Users on kungal care about their registration order and ID number. By sorting all users from both sites by `CreatedAt` and assigning sequential IDs, the earliest registered user (kungal was founded ~2023, moyu ~2024) gets the smallest ID. This preserves the perceived seniority.

### Why two-pass offset remap?

Directly updating `user.id` from old to new causes unique constraint violations when ID spaces overlap. For example, old_id=5 → new_id=3, but old_id=3 hasn't been remapped yet. The two-pass approach:
1. Moves ALL IDs to a non-overlapping range (100M+)
2. Sets them to their final values

This guarantees zero collisions regardless of how old and new ID spaces overlap.
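The two-pass guarantee is easy to check on an in-memory column of user IDs — a simulation of the pair of UPDATEs above, including a swap case (5→3, 3→5) that would violate uniqueness in a single pass:

```typescript
const OFFSET = 100_000_000;

// Apply an old→new ID mapping to a column of user IDs in two passes,
// mirroring the SQL: first move every mapped ID into the 100M+ range,
// then set each offset ID to its final value.
function remap(ids: number[], map: Map<number, number>): number[] {
  const pass1 = ids.map((id) => (map.has(id) ? id + OFFSET : id));
  return pass1.map((id) =>
    id >= OFFSET && map.has(id - OFFSET) ? map.get(id - OFFSET)! : id
  );
}
```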

### Why include ALL users in the mapping (even unchanged IDs)?

Even if `old_id == new_id` for some users, they must still be included in the mapping and go through the two-pass offset. Otherwise, their ID would remain at the original value during Pass 1, potentially blocking another user's new_id in Pass 2.

### Why disable triggers instead of deferring constraints?

PostgreSQL's `SET CONSTRAINTS ALL DEFERRED` only works for constraints declared as `DEFERRABLE`. Prisma-generated constraints are not deferrable by default. Disabling triggers is the reliable way to suspend FK enforcement during bulk updates.

### Why not use `user_migrations` table for the FK remap?

The `user_migrations` table exists in the OAuth target database, not in the source databases. The remap uses an in-memory `sourceToNewID` map, materialized as a temporary table (`_id_map`) in each source database's transaction. This avoids cross-database queries and keeps the remap self-contained.

---

## Idempotency

The script is designed for safe re-runs:

- Users already in the target database (matched by email) are skipped
- Follow relations use `ON CONFLICT DO NOTHING`
- Role assignments use `ON CONFLICT DO NOTHING`
- The remap step is transactional — if it fails, source databases are unchanged

However, if the remap succeeds partially (e.g., kungal succeeds but moyu fails), you must restore from backup before re-running, because the kungal source database's IDs will already be changed.

**Recommended approach for re-runs:**
1. Backup all databases before running
2. If the script fails mid-remap, restore from backup
3. Fix the issue, re-run from scratch

---

## Output

The script prints a summary after completion:

```
==================================================
Migration Results
==================================================
Kungal users total:    67373
Moyu users total:      21286
--------------------------------------------------
New users created:     81442
Users merged:          7194      (same email across both sites)
Site data created:     88636     (one per user per site)
Follows migrated:      2278
Follows skipped:       0
Roles assigned:        57        (admin + moderator)
Skipped (existing):    0         (already in target)
Errors:                0
==================================================
```

Progress is logged every 1000 users during the insertion phase.

---

## Post-Migration Checklist

1. **Verify user count:** `SELECT COUNT(*) FROM users;` in OAuth database should match `New users created`
2. **Verify ID ordering:** `SELECT id, created_at FROM users ORDER BY id LIMIT 10;` — IDs should increase with time
3. **Verify source DB IDs match:** Pick a user, check their ID is the same in kungal, moyu, and OAuth databases
4. **Test legacy login:** Try logging in with a migrated user's original password — it should work and transparently migrate the password hash
5. **Test OAuth flow:** Verify the full OAuth authorization code flow works with a migrated user
6. **Run the Prisma schema changes** on source databases (add `oauth_account` table, `String[]` → `JsonB` conversions, count field backfills — see `prisma/moyu/MIGRATION_NOTES.md`)

Concretely, run this inside kun-oauth-admin:

```shell
go run ./cmd/migrate-users \
  --kungal-dsn="host=127.0.0.1 port=5432 user=postgres password=kunloveren dbname=kungalgame" \
  --moyu-dsn="host=127.0.0.1 port=5432 user=postgres password=renlovekun dbname=kungalgame_patch"
```

This script takes quite a while to run — about five minutes.

Migrate the kungalgame_patch tables

These are the KUN Galgame Patch database tables; they also need adjustments to fit OAuth and GORM. Specifically:

# MoYu Patch Prisma Schema Migration Notes

This document explains all schema changes made to the MoYu Patch Prisma models in preparation for the backend migration from Nitro (Node.js) to Go Fiber + GORM. These changes are fully backward-compatible with existing Prisma Client usage — new fields have defaults, and type changes are storage-level only.

## Context

The KUN ecosystem is migrating its backend to Go (Fiber + GORM). Three issues with the current schema must be addressed before the Go backend can efficiently work with this database:

1. **`String[]` (PostgreSQL `text[]`)** — GORM has no native support for PostgreSQL array types. Reading/writing `text[]` columns requires custom scanner/valuer implementations or the `pq` library. By contrast, `JsonB` is natively supported via `gorm.io/datatypes.JSON` with zero boilerplate.

2. **`_count` subqueries** — Prisma's `include: { _count: { select: { like: true } } }` is a Prisma-specific feature with no GORM equivalent. Every list query would need explicit `JOIN + COUNT` subqueries, which is verbose and slow. Denormalized count fields (`like_count`, `favorite_count`, etc.) allow simple `SELECT` queries and are incremented/decremented atomically when the corresponding action occurs.

3. **OAuth integration** — The KUN OAuth system provides centralized authentication. Each site needs an `oauth_account` table to link the OAuth user UUID (`sub` claim) with the local `user.id`.

## Changes by Category

### 1. `String[]` → `Json @db.JsonB` (16 fields)

All `String[]` fields have been changed to `Json @default("[]") @db.JsonB`. The underlying PostgreSQL column type changes from `text[]` to `jsonb`. The data format remains the same (a JSON array of strings), but the storage type is now natively supported by GORM.

**Affected fields:**

| File | Model | Fields |
|------|-------|--------|
| `patch.prisma` | `patch` | `type`, `language`, `engine`, `platform` |
| `patch_resource.prisma` | `patch_resource` | `type`, `language`, `platform` |
| `patch_release.prisma` | `patch_release` | `platforms`, `languages` |
| `patch_char.prisma` | `patch_char` | `roles` |
| `patch_company.prisma` | `patch_company` | `primary_language`, `official_website`, `parent_brand`, `alias` |
| `patch_person.prisma` | `patch_person` | `roles`, `links` |
| `patch_tag.prisma` | `patch_tag` | `alias` |

**Data migration SQL** (run after `prisma migrate deploy`):

```sql
-- For each affected column, convert text[] to jsonb.
-- Example for patch.type:
ALTER TABLE patch
  ALTER COLUMN type TYPE jsonb USING to_jsonb(type),
  ALTER COLUMN type SET DEFAULT '[]'::jsonb;

-- Repeat for all 16 columns listed above.
```

**Frontend impact:** If existing Prisma Client code reads these as `string[]`, it will now receive `JsonValue`. You need to cast or parse:

```typescript
// Before
const types: string[] = patch.type

// After
const types: string[] = patch.type as string[]
```

### 2. Denormalized Count Fields (8 fields)

New integer fields with `@default(0)` added to avoid `_count` subqueries.

| Model | New Field | Counts rows from |
|-------|-----------|-----------------|
| `user` | `follower_count` | `user_follow_relation` where `following_id = user.id` |
| `user` | `following_count` | `user_follow_relation` where `follower_id = user.id` |
| `patch` | `favorite_count` | `user_patch_favorite_relation` |
| `patch` | `contribute_count` | `user_patch_contribute_relation` |
| `patch` | `comment_count` | `patch_comment` |
| `patch` | `resource_count` | `patch_resource` |
| `patch_comment` | `like_count` | `user_patch_comment_like_relation` |
| `patch_resource` | `like_count` | `user_patch_resource_like_relation` |

**Data backfill SQL** (run after migration):

```sql
-- Patch counts
UPDATE patch SET favorite_count = (SELECT COUNT(*) FROM user_patch_favorite_relation WHERE patch_id = patch.id);
UPDATE patch SET contribute_count = (SELECT COUNT(*) FROM user_patch_contribute_relation WHERE patch_id = patch.id);
UPDATE patch SET comment_count = (SELECT COUNT(*) FROM patch_comment WHERE patch_id = patch.id);
UPDATE patch SET resource_count = (SELECT COUNT(*) FROM patch_resource WHERE patch_id = patch.id);

-- Comment like counts
UPDATE patch_comment SET like_count = (SELECT COUNT(*) FROM user_patch_comment_like_relation WHERE comment_id = patch_comment.id);

-- Resource like counts
UPDATE patch_resource SET like_count = (SELECT COUNT(*) FROM user_patch_resource_like_relation WHERE resource_id = patch_resource.id);

-- User follow counts
UPDATE "user" SET follower_count = (SELECT COUNT(*) FROM user_follow_relation WHERE following_id = "user".id);
UPDATE "user" SET following_count = (SELECT COUNT(*) FROM user_follow_relation WHERE follower_id = "user".id);
```

**Code change required:** After adding these fields, all places that create/delete likes, favorites, follows, comments, or resources must also increment/decrement the corresponding count field. For example:

```typescript
// When a user likes a comment:
await prisma.$transaction([
  prisma.user_patch_comment_like_relation.create({ data: { user_id, comment_id } }),
  prisma.patch_comment.update({
    where: { id: comment_id },
    data: { like_count: { increment: 1 } },
  }),
])
```

### 3. OAuth Integration (2 additions)

**New model: `oauth_account`**

```prisma
model oauth_account {
  id       Int    @id @default(autoincrement())
  user_id  Int
  provider String @default("kun-oauth") @db.VarChar(50)
  sub      String @unique @db.VarChar(255)  // OAuth user UUID
  user     user   @relation(fields: [user_id], references: [id], onDelete: Cascade)
  created  DateTime @default(now())
  updated  DateTime @updatedAt
  @@index([user_id])
}
```

**New relation on `user`:** `oauth_account oauth_account[]`

This is identical to the kungal implementation. The `sub` field stores the UUID from the OAuth server's `/oauth/userinfo` response. When a user logs in via OAuth for the first time, the backend matches by email to link an existing account, or creates a new user.

**OAuth flow overview** (see `docs/integration/oauth-integration-guide.md` for full details):

1. User clicks "Login with KUN Account" on MoYu
2. Browser redirects to `oauth.kungal.com/api/v1/oauth/authorize`
3. User authenticates on OAuth server
4. Redirect back to MoYu with authorization code
5. MoYu server exchanges code for access_token
6. MoYu server calls `/oauth/userinfo` to get user UUID
7. MoYu server finds or creates local user via `oauth_account.sub`
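Step 2's redirect can be sketched with the standard RFC 6749 authorization-code parameters. The `client_id` and `redirect_uri` values below are placeholders, and the KUN OAuth server's exact parameter set is not documented here — this only shows the shape of the URL:

```typescript
// Build the authorization URL for the code flow. The query parameter
// names are the standard RFC 6749 ones; all values are placeholders.
function buildAuthorizeUrl(base: string, clientId: string, redirectUri: string, state: string): string {
  const u = new URL(base);
  u.searchParams.set("response_type", "code");
  u.searchParams.set("client_id", clientId);
  u.searchParams.set("redirect_uri", redirectUri);
  u.searchParams.set("state", state); // CSRF protection, echoed back on the callback
  return u.toString();
}
```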

## Migration Checklist

1. Run `prisma migrate dev` to generate and apply the migration
2. Run the data migration SQL for the `String[]` → `JsonB` conversions (if Prisma doesn't handle it automatically)
3. Run the backfill SQL for count fields
4. Update all `create`/`delete` operations on like/favorite/follow/comment/resource to also increment/decrement counts
5. Update TypeScript types where `string[]` becomes `JsonValue`
6. Deploy and verify

The script is already written; run it directly inside kun-galgame-patch-next:

```shell
psql -h localhost -U postgres -d kungalgame_patch -f prisma/migrations/20260409_oauth_integration/migration.sql
```

Verify the migration with pnpm prisma:push

In principle, once the SQL migration has run, the resulting tables should be compatible with what prisma db push would generate.

You can run this once in both kun-galgame-nuxt4 and kun-galgame-patch-next:

```shell
pnpm prisma:push
```

If there are no errors or warnings, the migration succeeded.

If instead you see:

```text
⚠️  There might be data loss when applying the changes:

  • The `alias` column on the `galgame_engine` table would be dropped and recreated. This will lead to data loss.

  • The `galgame_type` column on the `galgame_rating` table would be dropped and recreated. This will lead to data loss.

  • The `homepage` column on the `galgame_toolset` table would be dropped and recreated. This will lead to data loss.

  • The `domain` column on the `galgame_website` table would be dropped and recreated. This will lead to data loss.
```

This means the kungalgame database migration did not fully complete, for one reason or another.

In that case, run the kungalgame migration command once more from the KUN Galgame Forum project root:

```shell
cd apps/api && go run migrate
```

Re-running everything

If any error occurs midway, the recommendation is to reset all three databases and re-run every script.

Below is the consolidated command list, mainly for my own reference:

```shell
psql -U postgres
drop database kun_oauth_admin;
create database kun_oauth_admin;


# restore kungalgame & kungalgame_patch dbs (drop, create, insert data)
./apps/web/backup/restore_db.sh


cd kun-galgame-nuxt4
cd apps/api && go run check-dup-email
go run migrate


cd kun-oauth-admin
cd apps/api && go run migrate
go run ./cmd/migrate-users --kungal-dsn="host=127.0.0.1 port=5432 user=postgres password=kunloveren dbname=kungalgame" --moyu-dsn="host=127.0.0.1 port=5432 user=postgres password=renlovekun dbname=kungalgame_patch"


cd kun-galgame-patch-next

# if db collation version mismatch, fix it first
psql -U postgres
\c kungalgame_patch;
REINDEX DATABASE kungalgame_patch;
ALTER DATABASE kungalgame_patch REFRESH COLLATION VERSION;
\q

psql -h localhost -U postgres -d kungalgame_patch -f prisma/migrations/20260409_oauth_integration/migration.sql


# recheck migration via prisma db push
cd kun-galgame-nuxt4
cd apps/web
pnpm prisma:push
# if there are any warnings here, rerun the kungal db migration command
cd apps/api && go run migrate
```

Other issues

A few miscellaneous problems remain.

Some packages are not installed

If you are working on the server, make sure it at least has cmake, go, psql, and the rest of the toolchain available.

database has a collation version mismatch

In practice you may hit this collation version mismatch warning:

```shell
❯ psql -U postgres
psql (18.3)
Type "help" for help.
postgres=# \c kungalgame_patch;
WARNING:  database "kungalgame_patch" has a collation version mismatch
DETAIL:  The database was created using collation version 2.42, but the operating system provides version 2.43.
HINT:  Rebuild all objects in this database that use the default collation and run ALTER DATABASE kungalgame_patch REFRESH COLLATION VERSION, or build PostgreSQL with the right library version.
You are now connected to database "kungalgame_patch" as user "postgres".
```

The fix is as follows:

```shell
postgres=# \c kungalgame_patch;
WARNING:  database "kungalgame_patch" has a collation version mismatch
DETAIL:  The database was created using collation version 2.42, but the operating system provides version 2.43.
HINT:  Rebuild all objects in this database that use the default collation and run ALTER DATABASE kungalgame_patch REFRESH COLLATION VERSION, or build PostgreSQL with the right library version.
You are now connected to database "kungalgame_patch" as user "postgres".
kungalgame_patch=# REINDEX DATABASE kungalgame_patch;
WARNING:  database "kungalgame_patch" has a collation version mismatch
DETAIL:  The database was created using collation version 2.42, but the operating system provides version 2.43.
HINT:  Rebuild all objects in this database that use the default collation and run ALTER DATABASE kungalgame_patch REFRESH COLLATION VERSION, or build PostgreSQL with the right library version.
REINDEX
kungalgame_patch=# ALTER DATABASE kungalgame_patch REFRESH COLLATION VERSION;
NOTICE:  changing version from 2.42 to 2.43
ALTER DATABASE
```

This resolves the version mismatch, with no data loss.

This article is licensed under CC BY-NC. See the site's copyright policy.

3 Replies

白狐初墨
Posted 2026-04-09 17:07

Wow, finally!

Tfmth
Posted 2026-04-09 17:36

Front row ᗴ͜ꩰ(՞⸝⸝o̴̶̷̥᷅ ̫ o̴̶̷᷄⸝⸝՞) ꩰ͜ᗱ

uoht (#3)
Posted 2026-04-09 22:09

I can't make sense of this, so I fed it straight to an AI to explain (prompt: explain the content in plain language and assess whether it's right).
