Repository: hujinbo23/dst-admin-go
Branch: main
Commit: 065fa628fcac
Files: 151
Total size: 1.0 MB
Directory structure:
gitextract_5gwete13/
├── .claude/
│ └── skills/
│ ├── api-generator/
│ │ └── SKILL.md
│ ├── find-skills/
│ │ └── SKILL.md
│ ├── go-concurrency-patterns/
│ │ └── SKILL.md
│ ├── golang-patterns/
│ │ └── SKILL.md
│ ├── golang-pro/
│ │ ├── SKILL.md
│ │ └── references/
│ │ ├── concurrency.md
│ │ ├── generics.md
│ │ ├── interfaces.md
│ │ ├── project-structure.md
│ │ └── testing.md
│ └── skill-creator/
│ ├── LICENSE.txt
│ ├── SKILL.md
│ ├── agents/
│ │ ├── analyzer.md
│ │ ├── comparator.md
│ │ └── grader.md
│ ├── assets/
│ │ └── eval_review.html
│ ├── eval-viewer/
│ │ ├── generate_review.py
│ │ └── viewer.html
│ ├── references/
│ │ └── schemas.md
│ └── scripts/
│ ├── aggregate_benchmark.py
│ ├── generate_report.py
│ ├── improve_description.py
│ ├── package_skill.py
│ ├── quick_validate.py
│ ├── run_eval.py
│ ├── run_loop.py
│ └── utils.py
├── .gitignore
├── CLAUDE.md
├── LICENSE
├── README-EN.md
├── README.md
├── cmd/
│ └── server/
│ └── main.go
├── config.yml
├── docs/
│ ├── docs.go
│ ├── swagger.json
│ └── swagger.yaml
├── go.mod
├── go.sum
├── internal/
│ ├── api/
│ │ ├── handler/
│ │ │ ├── backup_handler.go
│ │ │ ├── dst_api_handler.go
│ │ │ ├── dst_config_handler.go
│ │ │ ├── dst_map_handler.go
│ │ │ ├── game_config_handler.go
│ │ │ ├── game_handler.go
│ │ │ ├── kv_handler.go
│ │ │ ├── level_handler.go
│ │ │ ├── level_log_handler.go
│ │ │ ├── login_handler.go
│ │ │ ├── mod_handler.go
│ │ │ ├── player_handler.go
│ │ │ ├── player_log_handler.go
│ │ │ ├── statistics_handler.go
│ │ │ └── update.go
│ │ └── router.go
│ ├── collect/
│ │ ├── collect.go
│ │ └── collect_map.go
│ ├── config/
│ │ └── config.go
│ ├── database/
│ │ └── sqlite.go
│ ├── middleware/
│ │ ├── auth.go
│ │ ├── cluster.go
│ │ ├── error.go
│ │ └── start_before.go
│ ├── model/
│ │ ├── LogRecord.go
│ │ ├── announce.go
│ │ ├── autoCheck.go
│ │ ├── backup.go
│ │ ├── backupSnapshot.go
│ │ ├── cluster.go
│ │ ├── connect.go
│ │ ├── jobTask.go
│ │ ├── kv.go
│ │ ├── modInfo.go
│ │ ├── modKv.go
│ │ ├── model.go
│ │ ├── playerLog.go
│ │ ├── regenerate.go
│ │ ├── spawnRole.go
│ │ └── webLink.go
│ ├── pkg/
│ │ ├── context/
│ │ │ └── cluster.go
│ │ ├── response/
│ │ │ └── response.go
│ │ └── utils/
│ │ ├── collectionUtils/
│ │ │ └── collectionUtils.go
│ │ ├── dateUtils.go
│ │ ├── dstUtils/
│ │ │ └── dstUtils.go
│ │ ├── envUtils.go
│ │ ├── fileUtils/
│ │ │ └── fileUtls.go
│ │ ├── luaUtils/
│ │ │ └── luaUtils.go
│ │ ├── shellUtils/
│ │ │ └── shellUitls.go
│ │ ├── systemUtils/
│ │ │ └── SystemUtils.go
│ │ └── zip/
│ │ └── zip.go
│ └── service/
│ ├── archive/
│ │ └── path_resolver.go
│ ├── backup/
│ │ └── backup_service.go
│ ├── dstConfig/
│ │ ├── dst_config.go
│ │ ├── factory.go
│ │ └── one_dst_config.go
│ ├── dstMap/
│ │ └── dst_map.go
│ ├── dstPath/
│ │ ├── dst_path.go
│ │ ├── linux_dst.go
│ │ └── window_dst.go
│ ├── game/
│ │ ├── factory.go
│ │ ├── linux_process.go
│ │ ├── process.go
│ │ ├── windowGameCli.go
│ │ └── window_process.go
│ ├── gameArchive/
│ │ └── game_archive.go
│ ├── gameConfig/
│ │ └── game_config.go
│ ├── level/
│ │ └── level.go
│ ├── levelConfig/
│ │ └── level_config.go
│ ├── login/
│ │ └── login_service.go
│ ├── mod/
│ │ └── mod_service.go
│ ├── player/
│ │ └── player_service.go
│ └── update/
│ ├── factory.go
│ ├── linux_update.go
│ ├── update.go
│ └── window_update.go
├── scripts/
│ ├── build_linux.sh
│ ├── build_swagger.sh
│ ├── build_window.sh
│ ├── docker/
│ │ ├── Dockerfile
│ │ ├── README.md
│ │ ├── docker-entrypoint.sh
│ │ ├── docker_build.sh
│ │ └── docker_dst_config
│ ├── docker-build-mac/
│ │ ├── Dockerfile
│ │ ├── README.md
│ │ ├── docker-entrypoint.sh
│ │ ├── docker_dst_config
│ │ └── dst-mac-arm64-env-install.md
│ └── py-dst-cli/
│ ├── README.md
│ ├── dst_version.py
│ ├── dst_world_setting.json
│ ├── main.py
│ ├── parse_TooManyItemPlus_items.py
│ ├── parse_mod.py
│ ├── parse_world_setting.py
│ ├── parse_world_webp.py
│ ├── requirements.txt
│ └── steamapikey.txt
└── static/
├── Caves/
│ ├── leveldataoverride.lua
│ ├── modoverrides.lua
│ └── server.ini
├── Master/
│ ├── leveldataoverride.lua
│ ├── modoverrides.lua
│ └── server.ini
├── customcommands.lua
└── template/
├── caves_server.ini
├── cluster.ini
├── cluster2.ini
├── master_server.ini
├── server.ini
└── test.go
================================================
FILE CONTENTS
================================================
================================================
FILE: .claude/skills/api-generator/SKILL.md
================================================
---
name: api-generator
description: Specialized skill for developing and refactoring the DST (Don't Starve Together) Admin Go project. Use this skill whenever the user mentions DST, game server management, API refactoring, adding handlers/services, creating CRUD endpoints, or working within the dst-admin-go codebase. This skill ensures all code follows the project's architectural patterns including dependency injection, layered architecture (Handler → Service → Model), and specific naming conventions. Make sure to use this skill when the user asks to refactor existing APIs, add new features, fix bugs, or modify any part of the dst-admin-go project structure.
license: MIT
metadata:
author: DST Admin Go Team
version: "2.0.0"
domain: application
triggers: DST, dst-admin-go, Don't Starve Together, game server, refactor API, add handler, create service, GORM, Gin, cluster management, backup service, player management, mod management, CRUD generator, generate API, create endpoint, 生成API, 创建接口
role: specialist
scope: implementation
output-format: code
related-skills: golang-pro, golang-patterns
---
# DST Admin Go API Generator
Specialized skill for developing and refactoring the DST (Don't Starve Together) Admin Go project. Use this skill whenever the user mentions DST, game server management, API refactoring, adding handlers/services, creating CRUD endpoints, or working within the dst-admin-go codebase. This skill ensures all code follows the project's architectural patterns including dependency injection, layered architecture (Handler → Service → Model), and specific naming conventions.
---
## Project Context
DST Admin Go is a web-based management panel for "Don't Starve Together" game servers written in Go. It follows a three-layer architecture:
1. **Handler Layer** (`internal/api/handler/`): HTTP request handling, validation, response formatting
2. **Service Layer** (`internal/service/`): Business logic, orchestration
3. **Model Layer** (`internal/model/`): Database entities (GORM)
### Architecture Patterns
- **Dependency Injection**: All services instantiated in `internal/api/router.go`, injected via constructors
- **No Global Variables**: Pass config and dependencies through constructors
- **Platform Abstraction**: Factory patterns for Linux/Windows differences
- **Naming Conventions**:
- Files: `snake_case.go` (e.g., `backup_service.go`)
- Structs: `PascalCase` without suffix (e.g., `BackupInfo`, NOT `BackupInfoVO`)
- Constructors: `New{Name}` (e.g., `NewBackupService`)
- Interfaces: Simple names (e.g., `Config`, `Process`)
### Project Structure
```
dst-admin-go/
├── cmd/server/main.go # Entry point
├── internal/
│ ├── api/
│ │ ├── handler/ # HTTP handlers (controllers)
│ │ │ └── {entity}_handler.go
│ │ └── router.go # Route registration & DI
│ ├── model/ # GORM models
│ │ └── {entity}.go
│ ├── service/ # Business logic
│ │ └── {domain}/
│ │ ├── {domain}_service.go
│ │ ├── factory.go # (if platform-specific)
│ │ ├── linux_{domain}.go
│ │ └── window_{domain}.go
│ └── pkg/
│ ├── response/ # Standard responses
│ └── utils/ # Utilities
└── config.yml # Application config
```
---
## Your Instructions
You are an expert Go developer specializing in the DST Admin Go project. When the user requests creating a CRUD module, refactoring APIs, or adding new functionality:
### Step 1: Gather Requirements
Ask the user for the following information (use a friendly, concise tone):
1. **Entity/Model Name**: What is the entity called? (e.g., "Announcement", "ModConfig", "Player")
2. **Chinese Name**: What is the Chinese name for Swagger docs? (e.g., "公告", "模组配置")
3. **Database Fields**: What fields does this entity have?
- Field name, type (string, int, bool, time.Time, etc.)
- GORM constraints (e.g., `unique`, `not null`, `default`)
- JSON tag name (camelCase for API responses)
- Example: `Title string, required, JSON: "title"` or `IsActive bool, default true, JSON: "isActive"`
4. **Operations Needed**: Which operations should be available?
- Standard CRUD: Create, Read (Get by ID), Update, Delete, List (with pagination)
- Custom operations: Batch delete, toggle status, search, etc.
5. **Business Logic Notes**: Any special validation, processing, or relationships?
- E.g., "Must validate expiresAt is in the future", "Needs to interact with game process"
6. **Cluster Context**: Does this entity belong to a specific cluster/server?
- If yes: Endpoints will be under `/api/{clusterName}/{entity}` and need cluster middleware
### Step 2: Analyze Dependencies
Based on the requirements, determine:
- **Always needed**: `*gorm.DB` for database operations
- **Game-related**: If interacting with game state → `game.Process`
- **File operations**: If handling files/archives → `archive.PathResolver`
- **DST config**: If reading/writing DST configs → `dstConfig.Config`
- **Level config**: If working with world configs → `levelConfig.LevelConfigUtils`
- **Platform-specific**: If operations differ on Linux vs Windows → factory pattern
### Step 3: Generate Model File
Create `internal/model/{entity}.go`:
```go
package model
import (
"gorm.io/gorm"
"time"
)
// {EntityName} {中文描述}
type {EntityName} struct {
gorm.Model
// Fields based on user input
Title string `json:"title" gorm:"type:varchar(255);not null"`
Content string `json:"content" gorm:"type:text"`
IsActive bool `json:"isActive" gorm:"default:true"`
ExpiresAt *time.Time `json:"expiresAt"`
}
```
**Guidelines**:
- Always embed `gorm.Model` (provides ID, CreatedAt, UpdatedAt, DeletedAt)
- Use GORM tags for constraints: `type:`, `not null`, `unique`, `default:`
- Use JSON tags in camelCase
- Add struct comment with Chinese description
- Use pointer types for nullable fields (e.g., `*time.Time`, `*string`)
### Step 4: Generate Service File
Create `internal/service/{domain}/{domain}_service.go`:
```go
package {domain}
import (
"dst-admin-go/internal/model"
"gorm.io/gorm"
)
type {Entity}Service struct {
db *gorm.DB
// Add other dependencies as detected
}
func New{Entity}Service(db *gorm.DB /* other deps */) *{Entity}Service {
return &{Entity}Service{
db: db,
}
}
// List{Entity} 获取{中文名}列表
func (s *{Entity}Service) List{Entity}(page, pageSize int) ([]model.{Entity}, int64, error) {
var list []model.{Entity}
var total int64
offset := (page - 1) * pageSize
err := s.db.Model(&model.{Entity}{}).Count(&total).Error
if err != nil {
return nil, 0, err
}
err = s.db.Offset(offset).Limit(pageSize).Find(&list).Error
return list, total, err
}
// Get{Entity} 获取{中文名}详情
func (s *{Entity}Service) Get{Entity}(id uint) (*model.{Entity}, error) {
var entity model.{Entity}
err := s.db.First(&entity, id).Error
return &entity, err
}
// Create{Entity} 创建{中文名}
func (s *{Entity}Service) Create{Entity}(entity *model.{Entity}) error {
return s.db.Create(entity).Error
}
// Update{Entity} 更新{中文名}
func (s *{Entity}Service) Update{Entity}(entity *model.{Entity}) error {
return s.db.Save(entity).Error
}
// Delete{Entity} 删除{中文名}
func (s *{Entity}Service) Delete{Entity}(id uint) error {
return s.db.Delete(&model.{Entity}{}, id).Error
}
```
**Guidelines**:
- Constructor accepts all dependencies
- Chinese comments for all exported methods
- Use GORM for database operations
- Add pagination support for List (offset/limit)
- Return errors, don't handle them here
- Add custom business logic methods as needed
### Step 5: Generate Handler File
Create `internal/api/handler/{entity}_handler.go`:
```go
package handler
import (
"dst-admin-go/internal/model"
"dst-admin-go/internal/pkg/response"
"dst-admin-go/internal/service/{domain}"
"github.com/gin-gonic/gin"
"net/http"
"strconv"
)
type {Entity}Handler struct {
service *{domain}.{Entity}Service
}
func New{Entity}Handler(service *{domain}.{Entity}Service) *{Entity}Handler {
return &{Entity}Handler{service: service}
}
func (h *{Entity}Handler) RegisterRoute(router *gin.RouterGroup) {
router.GET("/api/{entity}/list", h.List)
router.GET("/api/{entity}/:id", h.Get)
router.POST("/api/{entity}", h.Create)
router.PUT("/api/{entity}/:id", h.Update)
router.DELETE("/api/{entity}/:id", h.Delete)
}
// List 获取{中文名}列表
// @Summary 获取{中文名}列表
// @Description 分页获取{中文名}列表
// @Tags {entity}
// @Accept json
// @Produce json
// @Param page query int false "页码" default(1)
// @Param pageSize query int false "每页数量" default(10)
// @Success 200 {object} response.Response{data=object{list=[]model.{Entity},total=int,page=int,pageSize=int}}
// @Router /api/{entity}/list [get]
func (h *{Entity}Handler) List(ctx *gin.Context) {
page, _ := strconv.Atoi(ctx.DefaultQuery("page", "1"))
pageSize, _ := strconv.Atoi(ctx.DefaultQuery("pageSize", "10"))
if page < 1 {
page = 1
}
if pageSize < 1 || pageSize > 100 {
pageSize = 10
}
list, total, err := h.service.List{Entity}(page, pageSize)
if err != nil {
response.FailWithMessage("获取列表失败: "+err.Error(), ctx)
return
}
response.OkWithData(gin.H{
"list": list,
"total": total,
"page": page,
"pageSize": pageSize,
}, ctx)
}
// Get 获取{中文名}详情
// @Summary 获取{中文名}详情
// @Description 根据ID获取{中文名}详情
// @Tags {entity}
// @Accept json
// @Produce json
// @Param id path int true "{中文名}ID"
// @Success 200 {object} response.Response{data=model.{Entity}}
// @Router /api/{entity}/{id} [get]
func (h *{Entity}Handler) Get(ctx *gin.Context) {
id, err := strconv.ParseUint(ctx.Param("id"), 10, 32)
if err != nil {
response.FailWithMessage("无效的ID", ctx)
return
}
entity, err := h.service.Get{Entity}(uint(id))
if err != nil {
response.FailWithMessage("获取详情失败: "+err.Error(), ctx)
return
}
response.OkWithData(entity, ctx)
}
// Create 创建{中文名}
// @Summary 创建{中文名}
// @Description 创建新的{中文名}
// @Tags {entity}
// @Accept json
// @Produce json
// @Param data body model.{Entity} true "{中文名}信息"
// @Success 200 {object} response.Response{data=model.{Entity}}
// @Router /api/{entity} [post]
func (h *{Entity}Handler) Create(ctx *gin.Context) {
var entity model.{Entity}
if err := ctx.ShouldBindJSON(&entity); err != nil {
response.FailWithMessage("参数错误: "+err.Error(), ctx)
return
}
if err := h.service.Create{Entity}(&entity); err != nil {
response.FailWithMessage("创建失败: "+err.Error(), ctx)
return
}
response.OkWithData(entity, ctx)
}
// Update 更新{中文名}
// @Summary 更新{中文名}
// @Description 更新{中文名}信息
// @Tags {entity}
// @Accept json
// @Produce json
// @Param id path int true "{中文名}ID"
// @Param data body model.{Entity} true "{中文名}信息"
// @Success 200 {object} response.Response{data=model.{Entity}}
// @Router /api/{entity}/{id} [put]
func (h *{Entity}Handler) Update(ctx *gin.Context) {
id, err := strconv.ParseUint(ctx.Param("id"), 10, 32)
if err != nil {
response.FailWithMessage("无效的ID", ctx)
return
}
var entity model.{Entity}
if err := ctx.ShouldBindJSON(&entity); err != nil {
response.FailWithMessage("参数错误: "+err.Error(), ctx)
return
}
entity.ID = uint(id)
if err := h.service.Update{Entity}(&entity); err != nil {
response.FailWithMessage("更新失败: "+err.Error(), ctx)
return
}
response.OkWithData(entity, ctx)
}
// Delete 删除{中文名}
// @Summary 删除{中文名}
// @Description 根据ID删除{中文名}
// @Tags {entity}
// @Accept json
// @Produce json
// @Param id path int true "{中文名}ID"
// @Success 200 {object} response.Response
// @Router /api/{entity}/{id} [delete]
func (h *{Entity}Handler) Delete(ctx *gin.Context) {
id, err := strconv.ParseUint(ctx.Param("id"), 10, 32)
if err != nil {
response.FailWithMessage("无效的ID", ctx)
return
}
if err := h.service.Delete{Entity}(uint(id)); err != nil {
response.FailWithMessage("删除失败: "+err.Error(), ctx)
return
}
response.OkWithMessage("删除成功", ctx)
}
```
**Guidelines**:
- Complete Swagger annotations for every handler
- Use `response.Response` helpers: `OkWithData`, `FailWithMessage`, `OkWithMessage`
- Validate all input parameters
- Chinese error messages
- Parse ID from path parameter as uint
- Bind JSON request body with `ShouldBindJSON`
- Return HTTP 200 even for errors (error code in response body)
### Step 6: Update Router
Read `internal/api/router.go` and make these changes:
1. **Add import** (if not exists):
```go
"{domain}" "dst-admin-go/internal/service/{domain}"
```
2. **In `Register()` function, add service initialization** (after existing services):
```go
// {entity} service
{entity}Service := {domain}.New{Entity}Service(db /* add detected dependencies */)
```
3. **Add handler initialization** (after existing handlers):
```go
{entity}Handler := handler.New{Entity}Handler({entity}Service)
```
4. **Add route registration** (after existing routes):
```go
{entity}Handler.RegisterRoute(router)
```
**Important**: Maintain the existing order and formatting. Add new code at the end of each section.
### Step 7: Handle Platform-Specific Code (if needed)
If the service needs platform-specific behavior:
1. Create `internal/service/{domain}/factory.go`:
```go
package {domain}
import (
"gorm.io/gorm"
"runtime"
)
func New{Entity}Service(db *gorm.DB) {Entity}Service {
if runtime.GOOS == "windows" {
return &Windows{Entity}Service{db: db}
}
return &Linux{Entity}Service{db: db}
}
```
2. Create interface in `{domain}_service.go`:
```go
type {Entity}Service interface {
List{Entity}(page, pageSize int) ([]model.{Entity}, int64, error)
// ... other methods
}
```
3. Create platform implementations:
- `internal/service/{domain}/linux_{domain}.go`
- `internal/service/{domain}/window_{domain}.go`
### Step 8: Verify and Test
After generating all files:
1. **Run compilation check**:
```bash
go mod tidy
go build cmd/server/main.go
```
2. **Report to user**:
- List all generated files
- Show modified files (router.go)
- Provide curl test commands
- Mention Swagger UI location
3. **Provide test commands**:
```bash
# List
curl -X GET "http://localhost:8082/api/{entity}/list?page=1&pageSize=10"
# Create
curl -X POST "http://localhost:8082/api/{entity}" \
-H "Content-Type: application/json" \
-d '{"field1": "value1", "field2": "value2"}'
# Get
curl -X GET "http://localhost:8082/api/{entity}/1"
# Update
curl -X PUT "http://localhost:8082/api/{entity}/1" \
-H "Content-Type: application/json" \
-d '{"field1": "new_value"}'
# Delete
curl -X DELETE "http://localhost:8082/api/{entity}/1"
```
4. **Remind user**: Access Swagger UI at `http://localhost:8082/swagger/index.html` (after running the server)
---
## Common Patterns Reference
### Response Helpers
Located in `internal/pkg/response/response.go`:
```go
// Success with data
response.OkWithData(data, ctx)
// Success with message
response.OkWithMessage("操作成功", ctx)
// Error with message
response.FailWithMessage("操作失败: "+err.Error(), ctx)
```
### Common Dependencies
- `*gorm.DB`: Database access
- `*gin.Context`: HTTP request context
- `dstConfig.Config`: DST configuration interface
- `game.Process`: Game process management interface
- `archive.PathResolver`: Archive path resolution
- `levelConfig.LevelConfigUtils`: Level config parsing
### Cluster-Aware Endpoints
If entity belongs to a cluster, routes should be:
```go
router.GET("/api/:cluster_name/{entity}/list", h.List)
router.GET("/api/:cluster_name/{entity}/:id", h.Get)
// etc.
```
**IMPORTANT**: Handler should use cluster context helper to get clusterName:
```go
import "dst-admin-go/internal/pkg/context"
clusterName := context.GetClusterName(ctx)
// Use clusterName in service calls
```
**DO NOT** use `ctx.Query("clusterName")` or `ctx.Param("cluster_name")` directly. Always use `context.GetClusterName(ctx)` which retrieves the clusterName set by the cluster middleware.
### Pagination Pattern
Always use this pattern for list endpoints:
```go
page, _ := strconv.Atoi(ctx.DefaultQuery("page", "1"))
pageSize, _ := strconv.Atoi(ctx.DefaultQuery("pageSize", "10"))
if page < 1 {
page = 1
}
if pageSize < 1 || pageSize > 100 {
pageSize = 10
}
offset := (page - 1) * pageSize
```
### GORM Common Tags
- `type:varchar(255)` - String with length
- `type:text` - Long text
- `not null` - Required field
- `unique` - Unique constraint
- `default:true` - Default value
- `index` - Create index
- `foreignKey:UserID` - Foreign key
---
## Examples
### Example 1: Simple Announcement System
**User Input**:
- Entity: Announcement
- Chinese: 公告
- Fields: Title (string, required), Content (text), IsActive (bool, default true), ExpiresAt (time, nullable)
- Operations: Full CRUD + List
- Cluster context: No
**Generated Files**:
- `internal/model/announce.go`
- `internal/service/announce/announce_service.go`
- `internal/api/handler/announce_handler.go`
- Modified: `internal/api/router.go`
### Example 2: Mod Management (Game-Related)
**User Input**:
- Entity: ModInfo
- Chinese: 模组信息
- Fields: ModId (string, unique), Name (string), Author (string), Version (string), IsEnabled (bool), ClusterName (string)
- Operations: Full CRUD + List + Toggle status
- Business logic: Needs to interact with game process when toggling
- Cluster context: Yes
**Generated Files**:
- `internal/model/modInfo.go`
- `internal/service/mod/mod_service.go` (with `game.Process` dependency)
- `internal/api/handler/mod_handler.go` (with cluster-aware routes)
- Modified: `internal/api/router.go`
**Additional method in service**:
```go
// ToggleModStatus 切换模组启用状态
func (s *ModService) ToggleModStatus(clusterName, modId string) error {
// Update database
// Interact with game process
return nil
}
```
---
## Critical Checklist
Before completing the task, verify:
- [ ] Model file has `gorm.Model` embedded
- [ ] All fields have both `json` and `gorm` tags
- [ ] Service has constructor accepting all dependencies
- [ ] All service methods have Chinese comments
- [ ] Handler has `RegisterRoute` method
- [ ] All handler methods have complete Swagger annotations
- [ ] Swagger tags use entity name (e.g., `@Tags announcement`)
- [ ] Chinese names used in Swagger summaries
- [ ] router.go has import added
- [ ] router.go has service initialization
- [ ] router.go has handler initialization
- [ ] router.go has route registration
- [ ] Naming follows conventions (snake_case files, PascalCase types, no VO suffix)
- [ ] Code compiles without errors
- [ ] Test commands provided to user
---
## DST-Specific Domain Knowledge
### Cluster Architecture
A **cluster** contains multiple **levels** (worlds):
- **Master** - Overworld (surface world)
- **Caves** - Underground world
Each level runs as a separate process with its own configuration.
### Important Paths
```go
// Use pathResolver service for all path operations
pathResolver.ClusterPath(clusterName) // e.g., ~/.klei/DoNotStarveTogether/MyCluster/
pathResolver.LevelPath(clusterName, "Master") // e.g., ~/.klei/DoNotStarveTogether/MyCluster/Master/
pathResolver.SavePath(clusterName, "Master") // e.g., ~/.klei/DoNotStarveTogether/MyCluster/Master/save/
```
### Configuration Files
- `cluster.ini` - Cluster settings (game mode, max players, passwords)
- `leveldataoverride.lua` - World generation settings (Lua table format)
- `modoverrides.lua` - Enabled mods configuration (Lua table format)
- `server.ini` - Server-specific settings
Use `luaUtils` package to parse/generate Lua configuration files.
### Process Management
**Linux**: Uses `screen` sessions
```go
screenName := fmt.Sprintf("dst_%s_%s", clusterName, levelName)
```
**Windows**: Uses custom CLI wrapper (`windowGameCli.go`)
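A sketch of how the screen session name might feed into start/stop commands. The `-dmS`/`-X quit` flags are standard GNU screen usage, but the dedicated-server binary name and how the project actually invokes these via `shellUtils` are assumptions:

```go
package main

import "fmt"

// screenName builds the session name used for one level's process.
func screenName(clusterName, levelName string) string {
	return fmt.Sprintf("dst_%s_%s", clusterName, levelName)
}

// startCmd composes a detached screen session running the dedicated server
// (binary name and flags are illustrative, not taken from the source).
func startCmd(clusterName, levelName string) string {
	return fmt.Sprintf(
		"screen -dmS %s ./dontstarve_dedicated_server_nullrenderer -cluster %s -shard %s",
		screenName(clusterName, levelName), clusterName, levelName)
}

// stopCmd asks the named session to quit.
func stopCmd(clusterName, levelName string) string {
	return fmt.Sprintf("screen -S %s -X quit", screenName(clusterName, levelName))
}

func main() {
	fmt.Println(startCmd("MyCluster", "Master"))
	fmt.Println(stopCmd("MyCluster", "Caves"))
}
```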
---
## Common Utilities
### File Operations
```go
import "dst-admin-go/internal/pkg/utils/fileUtils"
fileUtils.PathExists(path)
fileUtils.CreateDir(path)
fileUtils.CopyFile(src, dst)
fileUtils.Unzip(zipPath, destPath)
```
### Shell Commands
```go
import "dst-admin-go/internal/pkg/utils/shellUtils"
output, err := shellUtils.ExecuteCommand("ls", "-la")
```
### Lua Configuration
```go
import "dst-admin-go/internal/pkg/utils/luaUtils"
// Parse Lua table to map
config, err := luaUtils.ParseLuaTable(luaContent)
// Generate Lua table from map
luaContent := luaUtils.GenerateLuaTable(configMap)
```
---
## Tips
- **Be concise**: Don't over-explain. Generate code efficiently.
- **Follow patterns**: Always reference existing code in the project for consistency.
- **Chinese comments**: Use Chinese for inline comments and method descriptions, English for Swagger and exported names.
- **Dependency detection**: Ask about business logic to determine dependencies accurately.
- **Platform awareness**: Ask if operations differ on Windows vs Linux.
- **Cluster context**: Ask if entity belongs to a specific cluster/server.
- **Validation**: Add appropriate validation in handlers (required fields, format checks).
- **Error messages**: Always include the original error in Chinese messages: "操作失败: " + err.Error()
---
## When NOT to Use This Skill
Don't use this skill if:
- User is asking general Go questions (not specific to DST Admin Go)
- User wants to modify frontend code
- User is asking about Docker, deployment, or infrastructure
- Task doesn't involve creating/modifying handlers, services, or models
In those cases, handle the request normally without the skill context.
---
## Summary
This skill automates the creation of complete CRUD modules in DST Admin Go following the project's three-layer architecture. It handles model generation, service creation with dependency injection, handler implementation with Swagger docs, and router integration. Always gather complete requirements first, analyze dependencies, generate code following established patterns, and verify compilation before reporting success to the user.
================================================
FILE: .claude/skills/find-skills/SKILL.md
================================================
---
name: find-skills
description: Helps users discover and install agent skills when they ask questions like "how do I do X", "find a skill for X", "is there a skill that can...", or express interest in extending capabilities. This skill should be used when the user is looking for functionality that might exist as an installable skill.
---
# Find Skills
This skill helps you discover and install skills from the open agent skills ecosystem.
## When to Use This Skill
Use this skill when the user:
- Asks "how do I do X" where X might be a common task with an existing skill
- Says "find a skill for X" or "is there a skill for X"
- Asks "can you do X" where X is a specialized capability
- Expresses interest in extending agent capabilities
- Wants to search for tools, templates, or workflows
- Mentions they wish they had help with a specific domain (design, testing, deployment, etc.)
## What is the Skills CLI?
The Skills CLI (`npx skills`) is the package manager for the open agent skills ecosystem. Skills are modular packages that extend agent capabilities with specialized knowledge, workflows, and tools.
**Key commands:**
- `npx skills find [query]` - Search for skills interactively or by keyword
- `npx skills add <package>` - Install a skill from GitHub or other sources
- `npx skills check` - Check for skill updates
- `npx skills update` - Update all installed skills
**Browse skills at:** https://skills.sh/
## How to Help Users Find Skills
### Step 1: Understand What They Need
When a user asks for help with something, identify:
1. The domain (e.g., React, testing, design, deployment)
2. The specific task (e.g., writing tests, creating animations, reviewing PRs)
3. Whether this is a common enough task that a skill likely exists
### Step 2: Search for Skills
Run the find command with a relevant query:
```bash
npx skills find [query]
```
For example:
- User asks "how do I make my React app faster?" → `npx skills find react performance`
- User asks "can you help me with PR reviews?" → `npx skills find pr review`
- User asks "I need to create a changelog" → `npx skills find changelog`
The command will return results like:
```
Install with npx skills add <owner/repo@skill>
vercel-labs/agent-skills@vercel-react-best-practices
└ https://skills.sh/vercel-labs/agent-skills/vercel-react-best-practices
```
### Step 3: Present Options to the User
When you find relevant skills, present them to the user with:
1. The skill name and what it does
2. The install command they can run
3. A link to learn more at skills.sh
Example response:
```
I found a skill that might help! The "vercel-react-best-practices" skill provides
React and Next.js performance optimization guidelines from Vercel Engineering.
To install it:
npx skills add vercel-labs/agent-skills@vercel-react-best-practices
Learn more: https://skills.sh/vercel-labs/agent-skills/vercel-react-best-practices
```
### Step 4: Offer to Install
If the user wants to proceed, you can install the skill for them:
```bash
npx skills add <owner/repo@skill> -g -y
```
The `-g` flag installs globally (user-level) and `-y` skips confirmation prompts.
## Common Skill Categories
When searching, consider these common categories:
| Category | Example Queries |
| --------------- | ---------------------------------------- |
| Web Development | react, nextjs, typescript, css, tailwind |
| Testing | testing, jest, playwright, e2e |
| DevOps | deploy, docker, kubernetes, ci-cd |
| Documentation | docs, readme, changelog, api-docs |
| Code Quality | review, lint, refactor, best-practices |
| Design | ui, ux, design-system, accessibility |
| Productivity | workflow, automation, git |
## Tips for Effective Searches
1. **Use specific keywords**: "react testing" is better than just "testing"
2. **Try alternative terms**: If "deploy" doesn't work, try "deployment" or "ci-cd"
3. **Check popular sources**: Many skills come from `vercel-labs/agent-skills` or `ComposioHQ/awesome-claude-skills`
## When No Skills Are Found
If no relevant skills exist:
1. Acknowledge that no existing skill was found
2. Offer to help with the task directly using your general capabilities
3. Suggest the user could create their own skill with `npx skills init`
Example:
```
I searched for skills related to "xyz" but didn't find any matches.
I can still help you with this task directly! Would you like me to proceed?
If this is something you do often, you could create your own skill:
npx skills init my-xyz-skill
```
================================================
FILE: .claude/skills/go-concurrency-patterns/SKILL.md
================================================
---
name: go-concurrency-patterns
description: Master Go concurrency with goroutines, channels, sync primitives, and context. Use when building concurrent Go applications, implementing worker pools, or debugging race conditions.
---
# Go Concurrency Patterns
Production patterns for Go concurrency including goroutines, channels, synchronization primitives, and context management.
## When to Use This Skill
- Building concurrent Go applications
- Implementing worker pools and pipelines
- Managing goroutine lifecycles
- Using channels for communication
- Debugging race conditions
- Implementing graceful shutdown
## Core Concepts
### 1. Go Concurrency Primitives
| Primitive | Purpose |
| ----------------- | -------------------------------- |
| `goroutine` | Lightweight concurrent execution |
| `channel` | Communication between goroutines |
| `select` | Multiplex channel operations |
| `sync.Mutex` | Mutual exclusion |
| `sync.WaitGroup` | Wait for goroutines to complete |
| `context.Context` | Cancellation and deadlines |
### 2. Go Concurrency Mantra
```
Don't communicate by sharing memory;
share memory by communicating.
```
## Quick Start
```go
package main
import (
"context"
"fmt"
"sync"
"time"
)
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
results := make(chan string, 10)
var wg sync.WaitGroup
// Spawn workers
for i := 0; i < 3; i++ {
wg.Add(1)
go worker(ctx, i, results, &wg)
}
// Close results when done
go func() {
wg.Wait()
close(results)
}()
// Collect results
for result := range results {
fmt.Println(result)
}
}
func worker(ctx context.Context, id int, results chan<- string, wg *sync.WaitGroup) {
defer wg.Done()
select {
case <-ctx.Done():
return
case results <- fmt.Sprintf("Worker %d done", id):
}
}
```
## Patterns
### Pattern 1: Worker Pool
```go
package main
import (
"context"
"fmt"
"sync"
)
type Job struct {
ID int
Data string
}
type Result struct {
JobID int
Output string
Err error
}
func WorkerPool(ctx context.Context, numWorkers int, jobs <-chan Job) <-chan Result {
	results := make(chan Result, cap(jobs)) // buffer by the jobs channel's capacity; len reports only items currently queued
var wg sync.WaitGroup
for i := 0; i < numWorkers; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
for job := range jobs {
select {
case <-ctx.Done():
return
default:
result := processJob(job)
results <- result
}
}
}(i)
}
go func() {
wg.Wait()
close(results)
}()
return results
}
func processJob(job Job) Result {
// Simulate work
return Result{
JobID: job.ID,
Output: fmt.Sprintf("Processed: %s", job.Data),
}
}
// Usage
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
jobs := make(chan Job, 100)
// Send jobs
go func() {
for i := 0; i < 50; i++ {
jobs <- Job{ID: i, Data: fmt.Sprintf("job-%d", i)}
}
close(jobs)
}()
// Process with 5 workers
results := WorkerPool(ctx, 5, jobs)
for result := range results {
fmt.Printf("Result: %+v\n", result)
}
}
```
### Pattern 2: Fan-Out/Fan-In Pipeline
```go
package main
import (
	"context"
	"fmt"
	"sync"
)
// Stage 1: Generate numbers
func generate(ctx context.Context, nums ...int) <-chan int {
out := make(chan int)
go func() {
defer close(out)
for _, n := range nums {
select {
case <-ctx.Done():
return
case out <- n:
}
}
}()
return out
}
// Stage 2: Square numbers (can run multiple instances)
func square(ctx context.Context, in <-chan int) <-chan int {
out := make(chan int)
go func() {
defer close(out)
for n := range in {
select {
case <-ctx.Done():
return
case out <- n * n:
}
}
}()
return out
}
// Fan-in: Merge multiple channels into one
func merge(ctx context.Context, cs ...<-chan int) <-chan int {
var wg sync.WaitGroup
out := make(chan int)
// Start output goroutine for each input channel
output := func(c <-chan int) {
defer wg.Done()
for n := range c {
select {
case <-ctx.Done():
return
case out <- n:
}
}
}
wg.Add(len(cs))
for _, c := range cs {
go output(c)
}
// Close out after all inputs are done
go func() {
wg.Wait()
close(out)
}()
return out
}
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Generate input
in := generate(ctx, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
// Fan out to multiple squarers
c1 := square(ctx, in)
c2 := square(ctx, in)
c3 := square(ctx, in)
// Fan in results
for result := range merge(ctx, c1, c2, c3) {
fmt.Println(result)
}
}
```
### Pattern 3: Bounded Concurrency with Semaphore
```go
package main
import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/semaphore"
)
type RateLimitedWorker struct {
sem *semaphore.Weighted
}
func NewRateLimitedWorker(maxConcurrent int64) *RateLimitedWorker {
return &RateLimitedWorker{
sem: semaphore.NewWeighted(maxConcurrent),
}
}
func (w *RateLimitedWorker) Do(ctx context.Context, tasks []func() error) []error {
var (
wg sync.WaitGroup
mu sync.Mutex
errors []error
)
for _, task := range tasks {
// Acquire semaphore (blocks if at limit)
if err := w.sem.Acquire(ctx, 1); err != nil {
return []error{err}
}
wg.Add(1)
go func(t func() error) {
defer wg.Done()
defer w.sem.Release(1)
if err := t(); err != nil {
mu.Lock()
errors = append(errors, err)
mu.Unlock()
}
}(task)
}
wg.Wait()
return errors
}
// Alternative: Channel-based semaphore
type Semaphore chan struct{}
func NewSemaphore(n int) Semaphore {
return make(chan struct{}, n)
}
func (s Semaphore) Acquire() {
s <- struct{}{}
}
func (s Semaphore) Release() {
<-s
}
```
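A usage sketch for the channel-based semaphore above. `RunTasks` is a hypothetical driver (not part of the pattern) that also tracks the highest number of tasks observed in flight, so you can see the bound holding:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Semaphore bounds concurrency with a buffered channel (as defined above).
type Semaphore chan struct{}

func NewSemaphore(n int) Semaphore { return make(chan struct{}, n) }
func (s Semaphore) Acquire()       { s <- struct{}{} }
func (s Semaphore) Release()       { <-s }

// RunTasks runs n tasks with at most limit in flight and reports the
// highest concurrency actually observed (always <= limit).
func RunTasks(n, limit int) int32 {
	sem := NewSemaphore(limit)
	var (
		wg       sync.WaitGroup
		inFlight int32
		maxSeen  int32
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem.Acquire()
			defer sem.Release()
			cur := atomic.AddInt32(&inFlight, 1)
			// Record the peak concurrency with a CAS loop
			for {
				prev := atomic.LoadInt32(&maxSeen)
				if cur <= prev || atomic.CompareAndSwapInt32(&maxSeen, prev, cur) {
					break
				}
			}
			atomic.AddInt32(&inFlight, -1)
		}()
	}
	wg.Wait()
	return maxSeen
}

func main() {
	fmt.Println("max concurrency:", RunTasks(20, 3))
}
```

Because `inFlight` is incremented only after `Acquire` and decremented before `Release`, the observed peak can never exceed the semaphore's capacity.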
### Pattern 4: Graceful Shutdown
```go
package main
import (
"context"
"fmt"
"os"
"os/signal"
"sync"
"syscall"
"time"
)
type Server struct {
shutdown chan struct{}
wg sync.WaitGroup
}
func NewServer() *Server {
return &Server{
shutdown: make(chan struct{}),
}
}
func (s *Server) Start(ctx context.Context) {
// Start workers
for i := 0; i < 5; i++ {
s.wg.Add(1)
go s.worker(ctx, i)
}
}
func (s *Server) worker(ctx context.Context, id int) {
defer s.wg.Done()
defer fmt.Printf("Worker %d stopped\n", id)
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Cleanup
			fmt.Printf("Worker %d cleaning up...\n", id)
			time.Sleep(500 * time.Millisecond) // Simulated cleanup
			return
		case <-s.shutdown:
			// Shutdown() closed this channel
			return
		case <-ticker.C:
			fmt.Printf("Worker %d working...\n", id)
		}
	}
}
func (s *Server) Shutdown(timeout time.Duration) {
// Signal shutdown
close(s.shutdown)
// Wait with timeout
done := make(chan struct{})
go func() {
s.wg.Wait()
close(done)
}()
select {
case <-done:
fmt.Println("Clean shutdown completed")
case <-time.After(timeout):
fmt.Println("Shutdown timed out, forcing exit")
}
}
func main() {
// Setup signal handling
ctx, cancel := context.WithCancel(context.Background())
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
server := NewServer()
server.Start(ctx)
// Wait for signal
sig := <-sigCh
fmt.Printf("\nReceived signal: %v\n", sig)
// Cancel context to stop workers
cancel()
// Wait for graceful shutdown
server.Shutdown(5 * time.Second)
}
```
### Pattern 5: Error Group with Cancellation
```go
package main
import (
"context"
"fmt"
"golang.org/x/sync/errgroup"
"net/http"
)
func fetchAllURLs(ctx context.Context, urls []string) ([]string, error) {
g, ctx := errgroup.WithContext(ctx)
results := make([]string, len(urls))
for i, url := range urls {
		i, url := i, url // Capture loop variables (not needed since Go 1.22)
g.Go(func() error {
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return fmt.Errorf("creating request for %s: %w", url, err)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return fmt.Errorf("fetching %s: %w", url, err)
}
defer resp.Body.Close()
results[i] = fmt.Sprintf("%s: %d", url, resp.StatusCode)
return nil
})
}
// Wait for all goroutines to complete or one to fail
if err := g.Wait(); err != nil {
return nil, err // First error cancels all others
}
return results, nil
}
// With concurrency limit
func fetchWithLimit(ctx context.Context, urls []string, limit int) ([]string, error) {
g, ctx := errgroup.WithContext(ctx)
g.SetLimit(limit) // Max concurrent goroutines
results := make([]string, len(urls))
	for i, url := range urls {
		i, url := i, url // Capture loop variables (not needed since Go 1.22)
		g.Go(func() error {
			result, err := fetchURL(ctx, url)
			if err != nil {
				return err
			}
			// Each goroutine writes a distinct index, so no mutex is needed
			results[i] = result
			return nil
		})
	}
if err := g.Wait(); err != nil {
return nil, err
}
return results, nil
}
```
### Pattern 6: Concurrent Map with sync.Map
```go
package main
import (
"sync"
)
// For frequent reads, infrequent writes
type Cache struct {
m sync.Map
}
func (c *Cache) Get(key string) (interface{}, bool) {
return c.m.Load(key)
}
func (c *Cache) Set(key string, value interface{}) {
c.m.Store(key, value)
}
func (c *Cache) GetOrSet(key string, value interface{}) (interface{}, bool) {
return c.m.LoadOrStore(key, value)
}
func (c *Cache) Delete(key string) {
c.m.Delete(key)
}
// For write-heavy workloads, use sharded map
type ShardedMap struct {
shards []*shard
numShards int
}
type shard struct {
sync.RWMutex
data map[string]interface{}
}
func NewShardedMap(numShards int) *ShardedMap {
m := &ShardedMap{
shards: make([]*shard, numShards),
numShards: numShards,
}
for i := range m.shards {
m.shards[i] = &shard{data: make(map[string]interface{})}
}
return m
}
func (m *ShardedMap) getShard(key string) *shard {
	// FNV-1a hash; unsigned arithmetic avoids a negative index on overflow
	h := uint32(2166136261)
	for _, c := range key {
		h = (h ^ uint32(c)) * 16777619
	}
	return m.shards[h%uint32(m.numShards)]
}
}
func (m *ShardedMap) Get(key string) (interface{}, bool) {
shard := m.getShard(key)
shard.RLock()
defer shard.RUnlock()
v, ok := shard.data[key]
return v, ok
}
func (m *ShardedMap) Set(key string, value interface{}) {
shard := m.getShard(key)
shard.Lock()
defer shard.Unlock()
shard.data[key] = value
}
```
### Pattern 7: Select with Timeout and Default
```go
func selectPatterns() {
ch := make(chan int)
// Timeout pattern
select {
case v := <-ch:
fmt.Println("Received:", v)
case <-time.After(time.Second):
fmt.Println("Timeout!")
}
// Non-blocking send/receive
select {
case ch <- 42:
fmt.Println("Sent")
default:
fmt.Println("Channel full, skipping")
}
// Priority select (check high priority first)
highPriority := make(chan int)
lowPriority := make(chan int)
for {
select {
case msg := <-highPriority:
fmt.Println("High priority:", msg)
default:
select {
case msg := <-highPriority:
fmt.Println("High priority:", msg)
case msg := <-lowPriority:
fmt.Println("Low priority:", msg)
}
}
}
}
```
## Race Detection
```bash
# Run tests with race detector
go test -race ./...
# Build with race detector
go build -race .
# Run with race detector
go run -race main.go
```
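A minimal example of the kind of bug the race detector flags. `Count` is an illustrative function (not from this repository): with `safe == false`, `go test -race` reports the unsynchronized increment; with the mutex, the result is always deterministic:

```go
package main

import (
	"fmt"
	"sync"
)

// Count increments a shared counter from several goroutines.
// With safe == false this is the classic data race the detector reports;
// with the mutex, the result is always workers * n.
func Count(workers, n int, safe bool) int {
	var (
		mu    sync.Mutex
		count int
		wg    sync.WaitGroup
	)
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < n; i++ {
				if safe {
					mu.Lock()
					count++
					mu.Unlock()
				} else {
					count++ // RACE: unsynchronized read-modify-write
				}
			}
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println(Count(8, 1000, true)) // deterministic: 8000
}
```

Run it with `go run -race .` and flip `safe` to `false` to see the detector's stack traces for the conflicting accesses.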
## Best Practices
### Do's
- **Use context** - For cancellation and deadlines
- **Close channels** - From sender side only
- **Use errgroup** - For concurrent operations with errors
- **Buffer channels** - When you know the count
- **Prefer channels** - Over mutexes when possible
### Don'ts
- **Don't leak goroutines** - Always have exit path
- **Don't close from receiver** - Causes panic
- **Don't use shared memory** - Unless necessary
- **Don't ignore context cancellation** - Check ctx.Done()
- **Don't use time.Sleep for sync** - Use proper primitives
## Resources
- [Go Concurrency Patterns](https://go.dev/blog/pipelines)
- [Effective Go - Concurrency](https://go.dev/doc/effective_go#concurrency)
- [Go by Example - Goroutines](https://gobyexample.com/goroutines)
================================================
FILE: .claude/skills/golang-patterns/SKILL.md
================================================
---
name: golang-patterns
description: Idiomatic Go patterns, best practices, and conventions for building robust, efficient, and maintainable Go applications.
origin: ECC
---
# Go Development Patterns
Idiomatic Go patterns and best practices for building robust, efficient, and maintainable applications.
## When to Activate
- Writing new Go code
- Reviewing Go code
- Refactoring existing Go code
- Designing Go packages/modules
## Core Principles
### 1. Simplicity and Clarity
Go favors simplicity over cleverness. Code should be obvious and easy to read.
```go
// Good: Clear and direct
func GetUser(id string) (*User, error) {
user, err := db.FindUser(id)
if err != nil {
return nil, fmt.Errorf("get user %s: %w", id, err)
}
return user, nil
}
// Bad: Overly clever
func GetUser(id string) (*User, error) {
return func() (*User, error) {
if u, e := db.FindUser(id); e == nil {
return u, nil
} else {
return nil, e
}
}()
}
```
### 2. Make the Zero Value Useful
Design types so their zero value is immediately usable without initialization.
```go
// Good: Zero value is useful
type Counter struct {
mu sync.Mutex
count int // zero value is 0, ready to use
}
func (c *Counter) Inc() {
c.mu.Lock()
c.count++
c.mu.Unlock()
}
// Good: bytes.Buffer works with zero value
var buf bytes.Buffer
buf.WriteString("hello")
// Bad: Requires initialization
type BadCounter struct {
	counts map[string]int // writing to a nil map panics (reads are safe)
}
```
### 3. Accept Interfaces, Return Structs
Functions should accept interface parameters and return concrete types.
```go
// Good: Accepts interface, returns concrete type
func ProcessData(r io.Reader) (*Result, error) {
data, err := io.ReadAll(r)
if err != nil {
return nil, err
}
return &Result{Data: data}, nil
}
// Bad: Returns interface (hides implementation details unnecessarily)
func ProcessData(r io.Reader) (io.Reader, error) {
// ...
}
```
## Error Handling Patterns
### Error Wrapping with Context
```go
// Good: Wrap errors with context
func LoadConfig(path string) (*Config, error) {
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("load config %s: %w", path, err)
}
var cfg Config
if err := json.Unmarshal(data, &cfg); err != nil {
return nil, fmt.Errorf("parse config %s: %w", path, err)
}
return &cfg, nil
}
```
### Custom Error Types
```go
// Define domain-specific errors
type ValidationError struct {
Field string
Message string
}
func (e *ValidationError) Error() string {
return fmt.Sprintf("validation failed on %s: %s", e.Field, e.Message)
}
// Sentinel errors for common cases
var (
ErrNotFound = errors.New("resource not found")
ErrUnauthorized = errors.New("unauthorized")
ErrInvalidInput = errors.New("invalid input")
)
```
### Error Checking with errors.Is and errors.As
```go
func HandleError(err error) {
// Check for specific error
if errors.Is(err, sql.ErrNoRows) {
log.Println("No records found")
return
}
// Check for error type
var validationErr *ValidationError
if errors.As(err, &validationErr) {
log.Printf("Validation error on field %s: %s",
validationErr.Field, validationErr.Message)
return
}
// Unknown error
log.Printf("Unexpected error: %v", err)
}
```
### Never Ignore Errors
```go
// Bad: Ignoring error with blank identifier
result, _ := doSomething()
// Good: Handle or explicitly document why it's safe to ignore
result, err := doSomething()
if err != nil {
return err
}
// Acceptable: When error truly doesn't matter (rare)
_ = writer.Close() // Best-effort cleanup, error logged elsewhere
```
## Concurrency Patterns
### Worker Pool
```go
func WorkerPool(jobs <-chan Job, results chan<- Result, numWorkers int) {
var wg sync.WaitGroup
for i := 0; i < numWorkers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for job := range jobs {
results <- process(job)
}
}()
}
wg.Wait()
close(results)
}
```
### Context for Cancellation and Timeouts
```go
func FetchWithTimeout(ctx context.Context, url string) ([]byte, error) {
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return nil, fmt.Errorf("create request: %w", err)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return nil, fmt.Errorf("fetch %s: %w", url, err)
}
defer resp.Body.Close()
return io.ReadAll(resp.Body)
}
```
### Graceful Shutdown
```go
func GracefulShutdown(server *http.Server) {
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit
log.Println("Shutting down server...")
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if err := server.Shutdown(ctx); err != nil {
log.Fatalf("Server forced to shutdown: %v", err)
}
log.Println("Server exited")
}
```
### errgroup for Coordinated Goroutines
```go
import "golang.org/x/sync/errgroup"
func FetchAll(ctx context.Context, urls []string) ([][]byte, error) {
g, ctx := errgroup.WithContext(ctx)
results := make([][]byte, len(urls))
for i, url := range urls {
		i, url := i, url // Capture loop variables (not needed since Go 1.22)
g.Go(func() error {
data, err := FetchWithTimeout(ctx, url)
if err != nil {
return err
}
results[i] = data
return nil
})
}
if err := g.Wait(); err != nil {
return nil, err
}
return results, nil
}
```
### Avoiding Goroutine Leaks
```go
// Bad: Goroutine leak if context is cancelled
func leakyFetch(ctx context.Context, url string) <-chan []byte {
ch := make(chan []byte)
go func() {
data, _ := fetch(url)
ch <- data // Blocks forever if no receiver
}()
return ch
}
// Good: Properly handles cancellation
func safeFetch(ctx context.Context, url string) <-chan []byte {
ch := make(chan []byte, 1) // Buffered channel
go func() {
data, err := fetch(url)
if err != nil {
return
}
select {
case ch <- data:
case <-ctx.Done():
}
}()
return ch
}
```
## Interface Design
### Small, Focused Interfaces
```go
// Good: Single-method interfaces
type Reader interface {
Read(p []byte) (n int, err error)
}
type Writer interface {
Write(p []byte) (n int, err error)
}
type Closer interface {
Close() error
}
// Compose interfaces as needed
type ReadWriteCloser interface {
Reader
Writer
Closer
}
```
### Define Interfaces Where They're Used
```go
// In the consumer package, not the provider
package service
// UserStore defines what this service needs
type UserStore interface {
GetUser(id string) (*User, error)
SaveUser(user *User) error
}
type Service struct {
store UserStore
}
// Concrete implementation can be in another package
// It doesn't need to know about this interface
```
### Optional Behavior with Type Assertions
```go
type Flusher interface {
Flush() error
}
func WriteAndFlush(w io.Writer, data []byte) error {
if _, err := w.Write(data); err != nil {
return err
}
// Flush if supported
if f, ok := w.(Flusher); ok {
return f.Flush()
}
return nil
}
```
## Package Organization
### Standard Project Layout
```text
myproject/
├── cmd/
│ └── myapp/
│ └── main.go # Entry point
├── internal/
│ ├── handler/ # HTTP handlers
│ ├── service/ # Business logic
│ ├── repository/ # Data access
│ └── config/ # Configuration
├── pkg/
│ └── client/ # Public API client
├── api/
│ └── v1/ # API definitions (proto, OpenAPI)
├── testdata/ # Test fixtures
├── go.mod
├── go.sum
└── Makefile
```
### Package Naming
```go
// Good: Short, lowercase, no underscores
package http
package json
package user
// Bad: Verbose, mixed case, or redundant
package httpHandler
package json_parser
package userService // Redundant 'Service' suffix
```
### Avoid Package-Level State
```go
// Bad: Global mutable state
var db *sql.DB
func init() {
db, _ = sql.Open("postgres", os.Getenv("DATABASE_URL"))
}
// Good: Dependency injection
type Server struct {
db *sql.DB
}
func NewServer(db *sql.DB) *Server {
return &Server{db: db}
}
```
## Struct Design
### Functional Options Pattern
```go
type Server struct {
addr string
timeout time.Duration
logger *log.Logger
}
type Option func(*Server)
func WithTimeout(d time.Duration) Option {
return func(s *Server) {
s.timeout = d
}
}
func WithLogger(l *log.Logger) Option {
return func(s *Server) {
s.logger = l
}
}
func NewServer(addr string, opts ...Option) *Server {
s := &Server{
addr: addr,
timeout: 30 * time.Second, // default
logger: log.Default(), // default
}
for _, opt := range opts {
opt(s)
}
return s
}
// Usage
server := NewServer(":8080",
WithTimeout(60*time.Second),
WithLogger(customLogger),
)
```
### Embedding for Composition
```go
type Logger struct {
prefix string
}
func (l *Logger) Log(msg string) {
fmt.Printf("[%s] %s\n", l.prefix, msg)
}
type Server struct {
*Logger // Embedding - Server gets Log method
addr string
}
func NewServer(addr string) *Server {
return &Server{
Logger: &Logger{prefix: "SERVER"},
addr: addr,
}
}
// Usage
s := NewServer(":8080")
s.Log("Starting...") // Calls embedded Logger.Log
```
## Memory and Performance
### Preallocate Slices When Size is Known
```go
// Bad: Grows slice multiple times
func processItems(items []Item) []Result {
var results []Result
for _, item := range items {
results = append(results, process(item))
}
return results
}
// Good: Single allocation
func processItems(items []Item) []Result {
results := make([]Result, 0, len(items))
for _, item := range items {
results = append(results, process(item))
}
return results
}
```
### Use sync.Pool for Frequent Allocations
```go
var bufferPool = sync.Pool{
New: func() interface{} {
return new(bytes.Buffer)
},
}
func ProcessRequest(data []byte) []byte {
buf := bufferPool.Get().(*bytes.Buffer)
defer func() {
buf.Reset()
bufferPool.Put(buf)
}()
buf.Write(data)
// Process...
return buf.Bytes()
}
```
### Avoid String Concatenation in Loops
```go
// Bad: Creates many string allocations
func join(parts []string) string {
var result string
for _, p := range parts {
result += p + ","
}
return result
}
// Good: Single allocation with strings.Builder
func join(parts []string) string {
var sb strings.Builder
for i, p := range parts {
if i > 0 {
sb.WriteString(",")
}
sb.WriteString(p)
}
return sb.String()
}
// Best: Use standard library
func join(parts []string) string {
return strings.Join(parts, ",")
}
```
## Go Tooling Integration
### Essential Commands
```bash
# Build and run
go build ./...
go run ./cmd/myapp
# Testing
go test ./...
go test -race ./...
go test -cover ./...
# Static analysis
go vet ./...
staticcheck ./...
golangci-lint run
# Module management
go mod tidy
go mod verify
# Formatting
gofmt -w .
goimports -w .
```
### Recommended Linter Configuration (.golangci.yml)
```yaml
linters:
enable:
- errcheck
- gosimple
- govet
- ineffassign
- staticcheck
- unused
- gofmt
- goimports
- misspell
- unconvert
- unparam
linters-settings:
errcheck:
check-type-assertions: true
govet:
check-shadowing: true
issues:
exclude-use-default: false
```
## Quick Reference: Go Idioms
| Idiom | Description |
|-------|-------------|
| Accept interfaces, return structs | Functions accept interface params, return concrete types |
| Errors are values | Treat errors as first-class values, not exceptions |
| Don't communicate by sharing memory | Use channels for coordination between goroutines |
| Make the zero value useful | Types should work without explicit initialization |
| A little copying is better than a little dependency | Avoid unnecessary external dependencies |
| Clear is better than clever | Prioritize readability over cleverness |
| gofmt is no one's favorite but everyone's friend | Always format with gofmt/goimports |
| Return early | Handle errors first, keep happy path unindented |
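A minimal sketch of the "Return early" idiom from the table: guard clauses handle failures first, and the happy path stays unindented. `NormalizeEmail` is a hypothetical helper, not from this repository:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// NormalizeEmail validates and canonicalizes an address.
// Each guard clause returns immediately, so the success case
// reads straight down the left margin.
func NormalizeEmail(s string) (string, error) {
	s = strings.TrimSpace(s)
	if s == "" {
		return "", errors.New("empty email")
	}
	if !strings.Contains(s, "@") {
		return "", fmt.Errorf("invalid email %q", s)
	}
	return strings.ToLower(s), nil
}

func main() {
	out, err := NormalizeEmail("  User@Example.COM ")
	fmt.Println(out, err)
}
```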
## Anti-Patterns to Avoid
```go
// Bad: Naked returns in long functions
func process() (result int, err error) {
// ... 50 lines ...
return // What is being returned?
}
// Bad: Using panic for control flow
func GetUser(id string) *User {
user, err := db.Find(id)
if err != nil {
panic(err) // Don't do this
}
return user
}
// Bad: Passing context in struct
type Request struct {
ctx context.Context // Context should be first param
ID string
}
// Good: Context as first parameter
func ProcessRequest(ctx context.Context, id string) error {
// ...
}
// Bad: Mixing value and pointer receivers
type Counter struct{ n int }
func (c Counter) Value() int { return c.n } // Value receiver
func (c *Counter) Increment() { c.n++ } // Pointer receiver
// Pick one style and be consistent
```
**Remember**: Go code should be boring in the best way - predictable, consistent, and easy to understand. When in doubt, keep it simple.
================================================
FILE: .claude/skills/golang-pro/SKILL.md
================================================
---
name: golang-pro
description: Use when building Go applications requiring concurrent programming, microservices architecture, or high-performance systems. Invoke for goroutines, channels, Go generics, gRPC integration.
license: MIT
metadata:
author: https://github.com/Jeffallan
version: "1.0.0"
domain: language
triggers: Go, Golang, goroutines, channels, gRPC, microservices Go, Go generics, concurrent programming, Go interfaces
role: specialist
scope: implementation
output-format: code
related-skills: devops-engineer, microservices-architect, test-master
---
# Golang Pro
Senior Go developer with deep expertise in Go 1.21+, concurrent programming, and cloud-native microservices. Specializes in idiomatic patterns, performance optimization, and production-grade systems.
## Role Definition
You are a senior Go engineer with 8+ years of systems programming experience. You specialize in Go 1.21+ with generics, concurrent patterns, gRPC microservices, and cloud-native applications. You build efficient, type-safe systems following Go proverbs.
## When to Use This Skill
- Building concurrent Go applications with goroutines and channels
- Implementing microservices with gRPC or REST APIs
- Creating CLI tools and system utilities
- Optimizing Go code for performance and memory efficiency
- Designing interfaces and using Go generics
- Setting up testing with table-driven tests and benchmarks
## Core Workflow
1. **Analyze architecture** - Review module structure, interfaces, concurrency patterns
2. **Design interfaces** - Create small, focused interfaces with composition
3. **Implement** - Write idiomatic Go with proper error handling and context propagation
4. **Optimize** - Profile with pprof, write benchmarks, eliminate allocations
5. **Test** - Table-driven tests, race detector, fuzzing, 80%+ coverage
## Reference Guide
Load detailed guidance based on context:
| Topic | Reference | Load When |
|-------|-----------|-----------|
| Concurrency | `references/concurrency.md` | Goroutines, channels, select, sync primitives |
| Interfaces | `references/interfaces.md` | Interface design, io.Reader/Writer, composition |
| Generics | `references/generics.md` | Type parameters, constraints, generic patterns |
| Testing | `references/testing.md` | Table-driven tests, benchmarks, fuzzing |
| Project Structure | `references/project-structure.md` | Module layout, internal packages, go.mod |
## Constraints
### MUST DO
- Use gofmt and golangci-lint on all code
- Add context.Context to all blocking operations
- Handle all errors explicitly; avoid naked returns in long functions
- Write table-driven tests with subtests
- Document all exported functions, types, and packages
- Use `X | Y` union constraints for generics (Go 1.18+)
- Propagate errors with fmt.Errorf("%w", err)
- Run race detector on tests (-race flag)
### MUST NOT DO
- Ignore errors (avoid _ assignment without justification)
- Use panic for normal error handling
- Create goroutines without clear lifecycle management
- Skip context cancellation handling
- Use reflection without performance justification
- Mix sync and async patterns carelessly
- Hardcode configuration (use functional options or env vars)
## Output Templates
When implementing Go features, provide:
1. Interface definitions (contracts first)
2. Implementation files with proper package structure
3. Test file with table-driven tests
4. Brief explanation of concurrency patterns used
## Knowledge Reference
Go 1.21+, goroutines, channels, select, sync package, generics, type parameters, constraints, io.Reader/Writer, gRPC, context, error wrapping, pprof profiling, benchmarks, table-driven tests, fuzzing, go.mod, internal packages, functional options
================================================
FILE: .claude/skills/golang-pro/references/concurrency.md
================================================
# Concurrency Patterns
## Goroutine Lifecycle Management
```go
package main
import (
"context"
"fmt"
"sync"
"time"
)
// Worker pool with bounded concurrency
type WorkerPool struct {
workers int
tasks chan func()
wg sync.WaitGroup
}
func NewWorkerPool(workers int) *WorkerPool {
wp := &WorkerPool{
workers: workers,
tasks: make(chan func(), workers*2), // Buffered channel
}
wp.start()
return wp
}
func (wp *WorkerPool) start() {
for i := 0; i < wp.workers; i++ {
wp.wg.Add(1)
go func() {
defer wp.wg.Done()
for task := range wp.tasks {
task()
}
}()
}
}
func (wp *WorkerPool) Submit(task func()) {
wp.tasks <- task
}
func (wp *WorkerPool) Shutdown() {
close(wp.tasks)
wp.wg.Wait()
}
```
## Channel Patterns
```go
// Generator pattern
func generateNumbers(ctx context.Context, max int) <-chan int {
out := make(chan int)
go func() {
defer close(out)
for i := 0; i < max; i++ {
select {
case out <- i:
case <-ctx.Done():
return
}
}
}()
return out
}
// Fan-out, fan-in pattern
func fanOut(ctx context.Context, input <-chan int, workers int) []<-chan int {
channels := make([]<-chan int, workers)
for i := 0; i < workers; i++ {
channels[i] = process(ctx, input)
}
return channels
}
func process(ctx context.Context, input <-chan int) <-chan int {
out := make(chan int)
go func() {
defer close(out)
for val := range input {
select {
case out <- val * 2:
case <-ctx.Done():
return
}
}
}()
return out
}
func fanIn(ctx context.Context, channels ...<-chan int) <-chan int {
out := make(chan int)
var wg sync.WaitGroup
for _, ch := range channels {
wg.Add(1)
go func(c <-chan int) {
defer wg.Done()
for val := range c {
select {
case out <- val:
case <-ctx.Done():
return
}
}
}(ch)
}
go func() {
wg.Wait()
close(out)
}()
return out
}
```
## Select Statement Patterns
```go
// Timeout pattern
func fetchWithTimeout(ctx context.Context, url string) (string, error) {
result := make(chan string, 1)
errCh := make(chan error, 1)
go func() {
		// Simulate a network call slower than the 50ms timeout below,
		// so this example always exercises the timeout branch
		time.Sleep(100 * time.Millisecond)
result <- "data from " + url
}()
select {
case res := <-result:
return res, nil
case err := <-errCh:
return "", err
case <-time.After(50 * time.Millisecond):
return "", fmt.Errorf("timeout")
case <-ctx.Done():
return "", ctx.Err()
}
}
// Done channel pattern for graceful shutdown
type Server struct {
done chan struct{}
}
func (s *Server) Shutdown() {
close(s.done)
}
func (s *Server) Run(ctx context.Context) {
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:
fmt.Println("tick")
case <-s.done:
fmt.Println("shutting down")
return
case <-ctx.Done():
fmt.Println("context cancelled")
return
}
}
}
```
## Sync Primitives
```go
import "sync"
// Mutex for protecting shared state
type Counter struct {
mu sync.Mutex
count int
}
func (c *Counter) Increment() {
c.mu.Lock()
defer c.mu.Unlock()
c.count++
}
func (c *Counter) Value() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.count
}
// RWMutex for read-heavy workloads
type Cache struct {
mu sync.RWMutex
items map[string]string
}
func (c *Cache) Get(key string) (string, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
val, ok := c.items[key]
return val, ok
}
func (c *Cache) Set(key, value string) {
c.mu.Lock()
defer c.mu.Unlock()
c.items[key] = value
}
// sync.Once for initialization
type Service struct {
once sync.Once
config *Config
}
func (s *Service) getConfig() *Config {
s.once.Do(func() {
s.config = loadConfig() // Only called once
})
return s.config
}
```
## Rate Limiting and Backpressure
```go
import "golang.org/x/time/rate"
// Token bucket rate limiter
type RateLimiter struct {
limiter *rate.Limiter
}
func NewRateLimiter(rps int) *RateLimiter {
return &RateLimiter{
limiter: rate.NewLimiter(rate.Limit(rps), rps),
}
}
func (rl *RateLimiter) Process(ctx context.Context, item string) error {
if err := rl.limiter.Wait(ctx); err != nil {
return err
}
// Process item
return nil
}
// Semaphore pattern for limiting concurrency
type Semaphore struct {
slots chan struct{}
}
func NewSemaphore(n int) *Semaphore {
return &Semaphore{
slots: make(chan struct{}, n),
}
}
func (s *Semaphore) Acquire() {
s.slots <- struct{}{}
}
func (s *Semaphore) Release() {
<-s.slots
}
func (s *Semaphore) Do(fn func()) {
s.Acquire()
defer s.Release()
fn()
}
```
## Pipeline Pattern
```go
// Stage-based processing pipeline
func pipeline(ctx context.Context, input <-chan int) <-chan int {
// Stage 1: Square numbers
stage1 := make(chan int)
go func() {
defer close(stage1)
for num := range input {
select {
case stage1 <- num * num:
case <-ctx.Done():
return
}
}
}()
// Stage 2: Filter even numbers
stage2 := make(chan int)
go func() {
defer close(stage2)
for num := range stage1 {
if num%2 == 0 {
select {
case stage2 <- num:
case <-ctx.Done():
return
}
}
}
}()
return stage2
}
```
## Quick Reference
| Pattern | Use Case | Key Points |
|---------|----------|------------|
| Worker Pool | Bounded concurrency | Limit goroutines, reuse workers |
| Fan-out/Fan-in | Parallel processing | Distribute work, merge results |
| Pipeline | Stream processing | Chain transformations |
| Rate Limiter | API throttling | Control request rate |
| Semaphore | Resource limits | Cap concurrent operations |
| Done Channel | Graceful shutdown | Signal completion |
================================================
FILE: .claude/skills/golang-pro/references/generics.md
================================================
# Generics and Type Parameters
## Basic Type Parameters
```go
package main

import (
	"fmt"

	"golang.org/x/exp/constraints"
)

// Generic function with type parameter
func Max[T constraints.Ordered](a, b T) T {
if a > b {
return a
}
return b
}
// Multiple type parameters
func Map[T, U any](slice []T, fn func(T) U) []U {
result := make([]U, len(slice))
for i, v := range slice {
result[i] = fn(v)
}
return result
}
// Usage
func main() {
	maxInt := Max(10, 20)          // T = int
	maxFloat := Max(3.14, 2.71)    // T = float64
	maxString := Max("abc", "xyz") // T = string
	fmt.Println(maxInt, maxFloat, maxString)
	nums := []int{1, 2, 3}
	doubled := Map(nums, func(n int) int { return n * 2 })
	strs := Map(nums, func(n int) string { return fmt.Sprintf("%d", n) }) // don't shadow the strings package
	fmt.Println(doubled, strs)
}
```
## Type Constraints
```go
import "golang.org/x/exp/constraints" // there is no stdlib "constraints" package
// Built-in constraints
type Number interface {
constraints.Integer | constraints.Float
}
func Sum[T Number](numbers []T) T {
var total T
for _, n := range numbers {
total += n
}
return total
}
// Custom constraints with methods
type Stringer interface {
String() string
}
func PrintAll[T Stringer](items []T) {
for _, item := range items {
fmt.Println(item.String())
}
}
// Approximate constraint using ~
type Integer interface {
~int | ~int8 | ~int16 | ~int32 | ~int64
}
type MyInt int
func Double[T Integer](n T) T {
return n * 2
}
// Works with both int and MyInt
func main() {
fmt.Println(Double(5)) // int
fmt.Println(Double(MyInt(5))) // MyInt
}
```
## Generic Data Structures
```go
// Generic Stack
type Stack[T any] struct {
items []T
}
func NewStack[T any]() *Stack[T] {
return &Stack[T]{
items: make([]T, 0),
}
}
func (s *Stack[T]) Push(item T) {
s.items = append(s.items, item)
}
func (s *Stack[T]) Pop() (T, bool) {
if len(s.items) == 0 {
var zero T
return zero, false
}
item := s.items[len(s.items)-1]
s.items = s.items[:len(s.items)-1]
return item, true
}
func (s *Stack[T]) IsEmpty() bool {
return len(s.items) == 0
}
// Usage
intStack := NewStack[int]()
intStack.Push(1)
intStack.Push(2)
stringStack := NewStack[string]()
stringStack.Push("hello")
stringStack.Push("world")
```
## Generic Map Operations
```go
// Filter with generics
func Filter[T any](slice []T, predicate func(T) bool) []T {
result := make([]T, 0, len(slice))
for _, v := range slice {
if predicate(v) {
result = append(result, v)
}
}
return result
}
// Reduce/Fold
func Reduce[T, U any](slice []T, initial U, fn func(U, T) U) U {
acc := initial
for _, v := range slice {
acc = fn(acc, v)
}
return acc
}
// Keys from map
func Keys[K comparable, V any](m map[K]V) []K {
keys := make([]K, 0, len(m))
for k := range m {
keys = append(keys, k)
}
return keys
}
// Values from map
func Values[K comparable, V any](m map[K]V) []V {
values := make([]V, 0, len(m))
for _, v := range m {
values = append(values, v)
}
return values
}
// Usage
numbers := []int{1, 2, 3, 4, 5, 6}
evens := Filter(numbers, func(n int) bool { return n%2 == 0 })
sum := Reduce(numbers, 0, func(acc, n int) int { return acc + n })
m := map[string]int{"a": 1, "b": 2}
keys := Keys(m) // []string{"a", "b"}
values := Values(m) // []int{1, 2}
```
## Generic Pairs and Tuples
```go
// Generic Pair
type Pair[T, U any] struct {
First T
Second U
}
func NewPair[T, U any](first T, second U) Pair[T, U] {
return Pair[T, U]{First: first, Second: second}
}
func (p Pair[T, U]) Swap() Pair[U, T] {
return Pair[U, T]{First: p.Second, Second: p.First}
}
// Usage
pair := NewPair("name", 42)
swapped := pair.Swap() // Pair[int, string]
// Generic Result type (like Rust's Result<T, E>)
type Result[T any] struct {
value T
err error
}
func Ok[T any](value T) Result[T] {
return Result[T]{value: value}
}
func Err[T any](err error) Result[T] {
return Result[T]{err: err}
}
func (r Result[T]) IsOk() bool {
return r.err == nil
}
func (r Result[T]) Unwrap() (T, error) {
return r.value, r.err
}
func (r Result[T]) UnwrapOr(defaultValue T) T {
if r.err != nil {
return defaultValue
}
return r.value
}
```
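A quick self-contained demo of `Result` in practice. The `parseAge` helper is hypothetical, wrapping `strconv.Atoi` just to produce both the Ok and Err paths:

```go
package main

import (
	"fmt"
	"strconv"
)

type Result[T any] struct {
	value T
	err   error
}

func Ok[T any](value T) Result[T]    { return Result[T]{value: value} }
func Err[T any](err error) Result[T] { return Result[T]{err: err} }

func (r Result[T]) IsOk() bool { return r.err == nil }

func (r Result[T]) UnwrapOr(defaultValue T) T {
	if r.err != nil {
		return defaultValue
	}
	return r.value
}

// parseAge is a hypothetical helper returning a Result instead of (int, error).
func parseAge(s string) Result[int] {
	n, err := strconv.Atoi(s)
	if err != nil {
		return Err[int](err) // T cannot be inferred from an error alone
	}
	return Ok(n)
}

func main() {
	fmt.Println(parseAge("42").UnwrapOr(-1))   // 42
	fmt.Println(parseAge("oops").UnwrapOr(-1)) // -1
}
```

Note that `Err` needs an explicit type argument (`Err[int]`) because nothing in its arguments pins down `T`.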
## Comparable Constraint
```go
// Find using comparable
func Find[T comparable](slice []T, target T) (int, bool) {
for i, v := range slice {
if v == target {
return i, true
}
}
return -1, false
}
// Contains
func Contains[T comparable](slice []T, target T) bool {
_, found := Find(slice, target)
return found
}
// Unique elements
func Unique[T comparable](slice []T) []T {
seen := make(map[T]struct{})
result := make([]T, 0, len(slice))
for _, v := range slice {
if _, exists := seen[v]; !exists {
seen[v] = struct{}{}
result = append(result, v)
}
}
return result
}
// Usage
nums := []int{1, 2, 2, 3, 3, 4}
unique := Unique(nums) // []int{1, 2, 3, 4}
idx, found := Find([]string{"a", "b", "c"}, "b") // 1, true
```
## Generic Interfaces
```go
// Generic interface
type Container[T any] interface {
Add(item T)
Remove() (T, bool)
Size() int
}
// Implementation
type Queue[T any] struct {
items []T
}
func (q *Queue[T]) Add(item T) {
q.items = append(q.items, item)
}
func (q *Queue[T]) Remove() (T, bool) {
if len(q.items) == 0 {
var zero T
return zero, false
}
item := q.items[0]
q.items = q.items[1:]
return item, true
}
func (q *Queue[T]) Size() int {
return len(q.items)
}
// Function accepting generic interface
func ProcessContainer[T any](c Container[T], item T) {
c.Add(item)
fmt.Printf("Container size: %d\n", c.Size())
}
```
## Type Inference
```go
// Type inference works in most cases
func Identity[T any](x T) T {
return x
}
// No need to specify type
result := Identity(42) // T inferred as int
str := Identity("hello") // T inferred as string
// Type inference with constraints
func Min[T constraints.Ordered](a, b T) T {
if a < b {
return a
}
return b
}
// Inferred from arguments
minVal := Min(10, 20) // T = int
minFloat := Min(1.5, 2.5) // T = float64
// Explicit type when needed
mapped := Map[int, string]([]int{1, 2}, func(n int) string {
return fmt.Sprintf("%d", n)
})
```
## Generic Channels
```go
// Generic channel operations
func Merge[T any](channels ...<-chan T) <-chan T {
out := make(chan T)
var wg sync.WaitGroup
for _, ch := range channels {
wg.Add(1)
go func(c <-chan T) {
defer wg.Done()
for v := range c {
out <- v
}
}(ch)
}
go func() {
wg.Wait()
close(out)
}()
return out
}
// Generic pipeline stage
func Stage[T, U any](in <-chan T, fn func(T) U) <-chan U {
out := make(chan U)
go func() {
defer close(out)
for v := range in {
out <- fn(v)
}
}()
return out
}
// Usage
ch1 := make(chan int)
ch2 := make(chan int)
merged := Merge(ch1, ch2)
numbers := make(chan int)
doubled := Stage(numbers, func(n int) int { return n * 2 })
strs := Stage(doubled, func(n int) string { return fmt.Sprintf("%d", n) }) // avoid shadowing the strings package
```
## Union Constraints
```go
// Union of types
type StringOrInt interface {
string | int
}
func Process[T StringOrInt](val T) string {
return fmt.Sprintf("%v", val)
}
// More complex unions
type Numeric interface {
int | int8 | int16 | int32 | int64 |
uint | uint8 | uint16 | uint32 | uint64 |
float32 | float64
}
func Abs[T Numeric](n T) T {
if n < 0 {
return -n
}
return n
}
// Union with methods
type Serializable interface {
string | []byte
}
func Serialize[T Serializable](data T) []byte {
switch v := any(data).(type) {
case string:
return []byte(v)
case []byte:
return v
default:
panic("unreachable")
}
}
```
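A small runnable check of `Serialize` (definitions repeated). One subtlety worth noting: because the union is written without `~`, a named type such as `type Header string` would not satisfy `Serializable`:

```go
package main

import "fmt"

type Serializable interface {
	string | []byte
}

func Serialize[T Serializable](data T) []byte {
	switch v := any(data).(type) {
	case string:
		return []byte(v)
	case []byte:
		return v
	default:
		panic("unreachable")
	}
}

func main() {
	fmt.Println(string(Serialize("hi")))         // hi
	fmt.Println(string(Serialize([]byte("hi")))) // hi
}
```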
## Quick Reference
| Feature | Syntax | Use Case |
|---------|--------|----------|
| Basic generic | `func F[T any]()` | Any type |
| Constraint | `func F[T Constraint]()` | Restricted types |
| Multiple params | `func F[T, U any]()` | Multiple type variables |
| Comparable | `func F[T comparable]()` | Types supporting == and != |
| Ordered | `func F[T constraints.Ordered]()` | Types supporting <, >, <=, >= |
| Union | `T interface{int \| string}` | Either type |
| Approximate | `~int` | Include type aliases |
================================================
FILE: .claude/skills/golang-pro/references/interfaces.md
================================================
# Interface Design and Composition
## Small, Focused Interfaces
```go
// Single-method interfaces (idiomatic Go)
type Reader interface {
Read(p []byte) (n int, err error)
}
type Writer interface {
Write(p []byte) (n int, err error)
}
type Closer interface {
Close() error
}
// Interface composition
type ReadCloser interface {
Reader
Closer
}
type WriteCloser interface {
Writer
Closer
}
type ReadWriteCloser interface {
Reader
Writer
Closer
}
```
## Accept Interfaces, Return Structs
```go
package storage
import "io"
// Storage is the concrete type (struct)
type Storage struct {
baseDir string
}
// NewStorage returns a concrete type
func NewStorage(baseDir string) *Storage {
return &Storage{baseDir: baseDir}
}
// SaveFile accepts an interface for flexibility
func (s *Storage) SaveFile(filename string, data io.Reader) error {
// Implementation can work with any Reader
// (file, network, buffer, etc.)
return nil
}
// Usage allows dependency injection
type Uploader interface {
SaveFile(filename string, data io.Reader) error
}
type Service struct {
uploader Uploader // Accept interface
}
// NewService accepts interface for testing flexibility
func NewService(uploader Uploader) *Service {
return &Service{uploader: uploader}
}
```
## io.Reader and io.Writer Patterns
```go
import (
"io"
"strings"
)
// Chain readers with io.MultiReader
func combineReaders() io.Reader {
r1 := strings.NewReader("Hello ")
r2 := strings.NewReader("World")
return io.MultiReader(r1, r2)
}
// Tee reader for duplicating reads
func duplicateRead(r io.Reader, w io.Writer) io.Reader {
return io.TeeReader(r, w) // Writes to w while reading from r
}
// Limit reader to prevent reading too much
func limitedRead(r io.Reader, n int64) io.Reader {
return io.LimitReader(r, n)
}
// Custom Reader implementation
type UppercaseReader struct {
src io.Reader
}
func (u *UppercaseReader) Read(p []byte) (n int, err error) {
n, err = u.src.Read(p)
for i := 0; i < n; i++ {
if p[i] >= 'a' && p[i] <= 'z' {
p[i] = p[i] - 32
}
}
return n, err
}
// Custom Writer implementation
type CountingWriter struct {
w io.Writer
count int64
}
func (cw *CountingWriter) Write(p []byte) (n int, err error) {
n, err = cw.w.Write(p)
cw.count += int64(n)
return n, err
}
func (cw *CountingWriter) BytesWritten() int64 {
return cw.count
}
```
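The two wrappers compose naturally with `io.Copy`. This self-contained sketch (repeating both types, with an illustrative `upperCount` helper) uppercases a string while counting the bytes written:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

type UppercaseReader struct {
	src io.Reader
}

func (u *UppercaseReader) Read(p []byte) (n int, err error) {
	n, err = u.src.Read(p)
	for i := 0; i < n; i++ {
		if p[i] >= 'a' && p[i] <= 'z' {
			p[i] -= 32 // ASCII lowercase -> uppercase
		}
	}
	return n, err
}

type CountingWriter struct {
	w     io.Writer
	count int64
}

func (cw *CountingWriter) Write(p []byte) (n int, err error) {
	n, err = cw.w.Write(p)
	cw.count += int64(n)
	return n, err
}

// upperCount copies s through both wrappers and returns the result and byte count.
func upperCount(s string) (string, int64) {
	var sb strings.Builder
	cw := &CountingWriter{w: &sb}
	up := &UppercaseReader{src: strings.NewReader(s)}
	if _, err := io.Copy(cw, up); err != nil {
		panic(err)
	}
	return sb.String(), cw.count
}

func main() {
	out, n := upperCount("hello world")
	fmt.Println(out, n) // HELLO WORLD 11
}
```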
## Embedding for Composition
```go
import "sync"
// Embed to extend behavior
type SafeCounter struct {
	mu sync.Mutex
	m  map[string]int
}
func NewSafeCounter() *SafeCounter {
	// Initialize the map: Inc on a zero-value SafeCounter would panic.
	return &SafeCounter{m: make(map[string]int)}
}
func (sc *SafeCounter) Inc(key string) {
	sc.mu.Lock()
	defer sc.mu.Unlock()
	sc.m[key]++
}
// Embed interface to add default behavior
type Logger interface {
Log(msg string)
}
type NoOpLogger struct{}
func (NoOpLogger) Log(msg string) {}
type Service struct {
Logger // Embedded interface (default implementation can be provided)
}
func NewService(logger Logger) *Service {
if logger == nil {
logger = NoOpLogger{} // Provide default
}
return &Service{Logger: logger}
}
// Now Service.Log() is available
```
## Interface Satisfaction Verification
```go
import "io"
// Compile-time interface verification
var _ io.Reader = (*MyReader)(nil)
var _ io.Writer = (*MyWriter)(nil)
var _ io.Closer = (*MyCloser)(nil)
type MyReader struct{}
func (m *MyReader) Read(p []byte) (n int, err error) {
return 0, nil
}
type MyWriter struct{}
func (m *MyWriter) Write(p []byte) (n int, err error) {
return len(p), nil
}
type MyCloser struct{}
func (m *MyCloser) Close() error {
return nil
}
```
## Functional Options Pattern
```go
package server
import "time"
type Server struct {
host string
port int
timeout time.Duration
maxConns int
enableLogger bool
}
// Option is a functional option for configuring Server
type Option func(*Server)
func WithHost(host string) Option {
return func(s *Server) {
s.host = host
}
}
func WithPort(port int) Option {
return func(s *Server) {
s.port = port
}
}
func WithTimeout(timeout time.Duration) Option {
return func(s *Server) {
s.timeout = timeout
}
}
func WithMaxConnections(max int) Option {
return func(s *Server) {
s.maxConns = max
}
}
func WithLogger(enabled bool) Option {
return func(s *Server) {
s.enableLogger = enabled
}
}
// NewServer creates a server with functional options
func NewServer(opts ...Option) *Server {
// Defaults
s := &Server{
host: "localhost",
port: 8080,
timeout: 30 * time.Second,
maxConns: 100,
}
// Apply options
for _, opt := range opts {
opt(s)
}
return s
}
// Usage:
// server := NewServer(
// WithHost("0.0.0.0"),
// WithPort(9000),
// WithTimeout(60 * time.Second),
// WithLogger(true),
// )
```
## Interface Segregation
```go
// Bad: Fat interface
type BadRepository interface {
Create(item Item) error
Read(id string) (Item, error)
Update(item Item) error
Delete(id string) error
List() ([]Item, error)
Search(query string) ([]Item, error)
Count() (int, error)
}
// Good: Segregated interfaces
type Creator interface {
Create(item Item) error
}
type Reader interface {
Read(id string) (Item, error)
}
type Updater interface {
Update(item Item) error
}
type Deleter interface {
Delete(id string) error
}
type Lister interface {
List() ([]Item, error)
}
// Compose only what you need
type ReadWriter interface {
Reader
Creator
}
type FullRepository interface {
Creator
Reader
Updater
Deleter
Lister
}
```
## Type Assertions and Type Switches
```go
import (
	"fmt"
	"io"
)
// Safe type assertion
func processValue(v interface{}) {
// Two-value assertion (safe)
if str, ok := v.(string); ok {
fmt.Println("String:", str)
return
}
// Type switch
switch val := v.(type) {
case int:
fmt.Println("Int:", val)
case string:
fmt.Println("String:", val)
case bool:
fmt.Println("Bool:", val)
default:
fmt.Println("Unknown type")
}
}
// Check for optional interface methods
type Flusher interface {
Flush() error
}
func writeAndFlush(w io.Writer, data []byte) error {
if _, err := w.Write(data); err != nil {
return err
}
// Check if Writer also implements Flusher
if flusher, ok := w.(Flusher); ok {
return flusher.Flush()
}
return nil
}
```
## Dependency Injection via Interfaces
```go
package app
import "context"
// Define interfaces for dependencies
type UserRepository interface {
GetUser(ctx context.Context, id string) (*User, error)
SaveUser(ctx context.Context, user *User) error
}
type EmailSender interface {
SendEmail(ctx context.Context, to, subject, body string) error
}
// Service depends on interfaces
type UserService struct {
repo UserRepository
mailer EmailSender
}
func NewUserService(repo UserRepository, mailer EmailSender) *UserService {
return &UserService{
repo: repo,
mailer: mailer,
}
}
func (s *UserService) RegisterUser(ctx context.Context, email string) error {
user := &User{Email: email}
if err := s.repo.SaveUser(ctx, user); err != nil {
return err
}
return s.mailer.SendEmail(ctx, email, "Welcome", "Thanks for registering!")
}
// Easy to mock in tests
type MockUserRepository struct{}
func (m *MockUserRepository) GetUser(ctx context.Context, id string) (*User, error) {
return &User{ID: id}, nil
}
func (m *MockUserRepository) SaveUser(ctx context.Context, user *User) error {
return nil
}
```
## Quick Reference
| Pattern | Use Case | Key Principle |
|---------|----------|---------------|
| Small interfaces | Flexibility | Single-method interfaces |
| Accept interfaces | Testability | Depend on abstractions |
| Return structs | Clarity | Concrete return types |
| io.Reader/Writer | I/O operations | Standard library integration |
| Embedding | Composition | Extend behavior without inheritance |
| Functional options | Configuration | Flexible constructors |
| Type assertions | Runtime checks | Safe downcasting |
================================================
FILE: .claude/skills/golang-pro/references/project-structure.md
================================================
# Project Structure and Module Management
## Standard Project Layout
```
myproject/
├── cmd/ # Main applications
│ ├── server/
│ │ └── main.go # Entry point for server
│ └── cli/
│ └── main.go # Entry point for CLI tool
├── internal/ # Private application code
│ ├── api/ # API handlers
│ ├── service/ # Business logic
│ └── repository/ # Data access layer
├── pkg/ # Public library code
│ └── models/ # Shared models
├── api/ # API definitions
│ ├── openapi.yaml # OpenAPI spec
│ └── proto/ # Protocol buffers
├── web/ # Web assets
│ ├── static/
│ └── templates/
├── scripts/ # Build and install scripts
├── configs/ # Configuration files
├── deployments/ # Docker, K8s configs
├── test/ # Additional test data
├── docs/ # Documentation
├── go.mod # Module definition
├── go.sum # Dependency checksums
├── Makefile # Build automation
└── README.md
```
## go.mod Basics
```go
// Initialize module
// go mod init github.com/user/project
module github.com/user/myproject
go 1.21
require (
github.com/gin-gonic/gin v1.9.1
github.com/lib/pq v1.10.9
go.uber.org/zap v1.26.0
)
require (
// Indirect dependencies (automatically managed)
github.com/bytedance/sonic v1.9.1 // indirect
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
)
// Replace directive for local development
replace github.com/user/mylib => ../mylib
// Retract directive to mark bad versions
retract v1.0.1 // Contains critical bug
```
## Module Commands
```bash
# Initialize module
go mod init github.com/user/project
# Add missing dependencies
go mod tidy
# Download dependencies
go mod download
# Verify dependencies
go mod verify
# Show module graph
go mod graph
# Show why package is needed
go mod why github.com/user/package
# Vendor dependencies (copy to vendor/)
go mod vendor
# Update dependency
go get -u github.com/user/package
# Update to specific version
go get github.com/user/package@v1.2.3
# Update all dependencies
go get -u ./...
# Remove unused dependencies
go mod tidy
```
## Internal Packages
```go
// Packages under internal/ can only be imported by code rooted at internal/'s parent directory
myproject/
├── internal/
│ ├── auth/ # Can only be imported by myproject
│ │ └── jwt.go
│ └── database/
│ └── postgres.go
└── pkg/
└── models/ # Can be imported by anyone
└── user.go
// This works (same project):
import "github.com/user/myproject/internal/auth"
// This fails (different project):
import "github.com/other/project/internal/auth" // Error!
// Internal subdirectories
myproject/
└── api/
└── internal/ # Can only be imported by code in api/
└── helpers.go
```
## Package Organization
```go
// user/user.go - Domain package
package user
import (
"context"
"time"
)
// User represents a user entity
type User struct {
ID string
Email string
CreatedAt time.Time
}
// Repository defines data access interface
type Repository interface {
Create(ctx context.Context, user *User) error
GetByID(ctx context.Context, id string) (*User, error)
Update(ctx context.Context, user *User) error
Delete(ctx context.Context, id string) error
}
// Service handles business logic
type Service struct {
repo Repository
}
// NewService creates a new user service
func NewService(repo Repository) *Service {
return &Service{repo: repo}
}
func (s *Service) RegisterUser(ctx context.Context, email string) (*User, error) {
user := &User{
ID: generateID(),
Email: email,
CreatedAt: time.Now(),
}
return user, s.repo.Create(ctx, user)
}
```
## Multi-Module Repository (Monorepo)
```
monorepo/
├── go.work # Workspace file
├── services/
│ ├── api/
│ │ ├── go.mod
│ │ └── main.go
│ └── worker/
│ ├── go.mod
│ └── main.go
└── shared/
└── models/
├── go.mod
└── user.go
// go.work
go 1.21
use (
./services/api
./services/worker
./shared/models
)
// Commands:
// go work init ./services/api ./services/worker
// go work use ./shared/models
// go work sync
```
## Build Tags and Constraints
```go
// integration_test.go
//go:build integration
// +build integration

package myapp
import "testing"
func TestIntegration(t *testing.T) {
	// Integration test code
}
// Build: go test -tags=integration
// (a blank line must separate build constraints from the package clause)
// File-level build constraints (Go 1.17+)
//go:build linux && amd64
package myapp
// Multiple constraints (a file may contain only one //go:build line)
//go:build (linux || darwin) && amd64
// Negation
//go:build !windows
// Common tags:
// linux, darwin, windows, freebsd
// amd64, arm64, 386, arm
// cgo, !cgo
```
## Makefile Example
```makefile
# Makefile
.PHONY: build test lint clean run
# Variables
BINARY_NAME=myapp
BUILD_DIR=bin
GO=go
GOFLAGS=-v
# Build the application
build:
$(GO) build $(GOFLAGS) -o $(BUILD_DIR)/$(BINARY_NAME) ./cmd/server
# Run tests
test:
$(GO) test -v -race -coverprofile=coverage.out ./...
# Run tests with coverage report
test-coverage: test
$(GO) tool cover -html=coverage.out
# Run linters
lint:
golangci-lint run ./...
# Format code
fmt:
$(GO) fmt ./...
goimports -w .
# Run the application
run:
$(GO) run ./cmd/server
# Clean build artifacts
clean:
rm -rf $(BUILD_DIR)
rm -f coverage.out
# Install dependencies
deps:
$(GO) mod download
$(GO) mod tidy
# Build for multiple platforms
build-all:
GOOS=linux GOARCH=amd64 $(GO) build -o $(BUILD_DIR)/$(BINARY_NAME)-linux-amd64 ./cmd/server
GOOS=darwin GOARCH=amd64 $(GO) build -o $(BUILD_DIR)/$(BINARY_NAME)-darwin-amd64 ./cmd/server
GOOS=windows GOARCH=amd64 $(GO) build -o $(BUILD_DIR)/$(BINARY_NAME)-windows-amd64.exe ./cmd/server
# Run with race detector
run-race:
$(GO) run -race ./cmd/server
# Generate code
generate:
$(GO) generate ./...
# Docker build
docker-build:
docker build -t $(BINARY_NAME):latest .
# Help
help:
@echo "Available targets:"
@echo " build - Build the application"
@echo " test - Run tests"
@echo " test-coverage - Run tests with coverage report"
@echo " lint - Run linters"
@echo " fmt - Format code"
@echo " run - Run the application"
@echo " clean - Clean build artifacts"
@echo " deps - Install dependencies"
```
## Dockerfile Multi-Stage Build
```dockerfile
# Build stage
FROM golang:1.21-alpine AS builder
WORKDIR /app
# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download
# Copy source code
COPY . .
# Build static binary (CGO disabled; -a/-installsuffix are obsolete with modules)
RUN CGO_ENABLED=0 GOOS=linux go build -o server ./cmd/server
# Final stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy binary from builder
COPY --from=builder /app/server .
# Copy config files if needed
COPY --from=builder /app/configs ./configs
EXPOSE 8080
CMD ["./server"]
```
## Version Information
```go
// version/version.go
package version
import "runtime"
var (
// Set via ldflags during build
Version = "dev"
GitCommit = "none"
BuildTime = "unknown"
)
// Info returns version information
func Info() map[string]string {
return map[string]string{
"version": Version,
"git_commit": GitCommit,
"build_time": BuildTime,
"go_version": runtime.Version(),
"os": runtime.GOOS,
"arch": runtime.GOARCH,
}
}
// Build with version info:
// go build -ldflags "-X github.com/user/project/version.Version=1.0.0 \
// -X github.com/user/project/version.GitCommit=$(git rev-parse HEAD) \
// -X github.com/user/project/version.BuildTime=$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```
## Go Generate
```go
// models/user.go
//go:generate mockgen -source=user.go -destination=../mocks/user_mock.go -package=mocks
package models
type UserRepository interface {
GetUser(id string) (*User, error)
SaveUser(user *User) error
}
// tools.go - Track tool dependencies
//go:build tools

package tools
import (
_ "github.com/golang/mock/mockgen"
_ "golang.org/x/tools/cmd/stringer"
)
// Install tools:
// go install github.com/golang/mock/mockgen@latest
// Run generate:
// go generate ./...
```
## Configuration Management
```go
// config/config.go
package config
import (
"os"
"time"
"github.com/kelseyhightower/envconfig"
)
type Config struct {
Server ServerConfig
Database DatabaseConfig
Redis RedisConfig
}
type ServerConfig struct {
Host string `envconfig:"SERVER_HOST" default:"0.0.0.0"`
Port int `envconfig:"SERVER_PORT" default:"8080"`
ReadTimeout time.Duration `envconfig:"SERVER_READ_TIMEOUT" default:"10s"`
WriteTimeout time.Duration `envconfig:"SERVER_WRITE_TIMEOUT" default:"10s"`
}
type DatabaseConfig struct {
URL string `envconfig:"DATABASE_URL" required:"true"`
MaxOpenConns int `envconfig:"DB_MAX_OPEN_CONNS" default:"25"`
MaxIdleConns int `envconfig:"DB_MAX_IDLE_CONNS" default:"5"`
}
type RedisConfig struct {
Addr string `envconfig:"REDIS_ADDR" default:"localhost:6379"`
Password string `envconfig:"REDIS_PASSWORD"`
DB int `envconfig:"REDIS_DB" default:"0"`
}
// Load loads configuration from environment
func Load() (*Config, error) {
var cfg Config
if err := envconfig.Process("", &cfg); err != nil {
return nil, err
}
return &cfg, nil
}
```
## Quick Reference
| Command | Description |
|---------|-------------|
| `go mod init` | Initialize module |
| `go mod tidy` | Add/remove dependencies |
| `go mod download` | Download dependencies |
| `go get package@version` | Add/update dependency |
| `go build -ldflags "-X ..."` | Set version info |
| `go generate ./...` | Run code generation |
| `GOOS=linux go build` | Cross-compile |
| `go work init` | Initialize workspace |
================================================
FILE: .claude/skills/golang-pro/references/testing.md
================================================
# Testing and Benchmarking
## Table-Driven Tests
```go
package math
import "testing"
func Add(a, b int) int {
return a + b
}
func TestAdd(t *testing.T) {
tests := []struct {
name string
a, b int
expected int
}{
{"positive numbers", 2, 3, 5},
{"negative numbers", -2, -3, -5},
{"mixed signs", -2, 3, 1},
{"zeros", 0, 0, 0},
{"large numbers", 1000000, 2000000, 3000000},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := Add(tt.a, tt.b)
if result != tt.expected {
t.Errorf("Add(%d, %d) = %d; want %d", tt.a, tt.b, result, tt.expected)
}
})
}
}
```
## Subtests and Parallel Execution
```go
func TestParallel(t *testing.T) {
tests := []struct {
name string
input string
want string
}{
{"lowercase", "hello", "HELLO"},
{"uppercase", "WORLD", "WORLD"},
{"mixed", "HeLLo", "HELLO"},
}
for _, tt := range tests {
tt := tt // Capture range variable for parallel tests
t.Run(tt.name, func(t *testing.T) {
t.Parallel() // Run subtests in parallel
result := strings.ToUpper(tt.input)
if result != tt.want {
t.Errorf("got %q, want %q", result, tt.want)
}
})
}
}
```
## Test Helpers and Setup/Teardown
```go
func TestWithSetup(t *testing.T) {
// Setup
db := setupTestDB(t)
defer cleanupTestDB(t, db)
tests := []struct {
name string
user User
}{
{"valid user", User{Name: "John", Email: "john@example.com"}},
{"empty name", User{Name: "", Email: "test@example.com"}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := db.SaveUser(tt.user)
if err != nil {
t.Fatalf("SaveUser failed: %v", err)
}
})
}
}
// Helper function (doesn't show in stack trace)
func setupTestDB(t *testing.T) *DB {
t.Helper()
db, err := NewDB(":memory:")
if err != nil {
t.Fatalf("failed to create test DB: %v", err)
}
return db
}
func cleanupTestDB(t *testing.T, db *DB) {
t.Helper()
if err := db.Close(); err != nil {
t.Errorf("failed to close DB: %v", err)
}
}
```
## Mocking with Interfaces
```go
// Interface to mock
type EmailSender interface {
Send(to, subject, body string) error
}
// Mock implementation
type MockEmailSender struct {
SentEmails []Email
ShouldFail bool
}
type Email struct {
To, Subject, Body string
}
func (m *MockEmailSender) Send(to, subject, body string) error {
if m.ShouldFail {
return fmt.Errorf("failed to send email")
}
m.SentEmails = append(m.SentEmails, Email{to, subject, body})
return nil
}
// Test using mock
func TestUserService_Register(t *testing.T) {
mockSender := &MockEmailSender{}
service := NewUserService(mockSender)
err := service.Register("user@example.com")
if err != nil {
t.Fatalf("Register failed: %v", err)
}
if len(mockSender.SentEmails) != 1 {
t.Errorf("expected 1 email sent, got %d", len(mockSender.SentEmails))
}
email := mockSender.SentEmails[0]
if email.To != "user@example.com" {
t.Errorf("expected email to user@example.com, got %s", email.To)
}
}
```
## Benchmarking
```go
func BenchmarkAdd(b *testing.B) {
for i := 0; i < b.N; i++ {
Add(100, 200)
}
}
// Benchmark with subtests
func BenchmarkStringOperations(b *testing.B) {
benchmarks := []struct {
name string
input string
}{
{"short", "hello"},
{"medium", strings.Repeat("hello", 10)},
{"long", strings.Repeat("hello", 100)},
}
for _, bm := range benchmarks {
b.Run(bm.name, func(b *testing.B) {
for i := 0; i < b.N; i++ {
_ = strings.ToUpper(bm.input)
}
})
}
}
// Benchmark with setup
func BenchmarkMapOperations(b *testing.B) {
m := make(map[string]int)
for i := 0; i < 1000; i++ {
m[fmt.Sprintf("key%d", i)] = i
}
b.ResetTimer() // Don't count setup time
for i := 0; i < b.N; i++ {
_ = m["key500"]
}
}
// Parallel benchmark
func BenchmarkConcurrentAccess(b *testing.B) {
var counter int64
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
atomic.AddInt64(&counter, 1)
}
})
}
// Memory allocation benchmark
func BenchmarkAllocation(b *testing.B) {
b.ReportAllocs() // Report allocations
for i := 0; i < b.N; i++ {
s := make([]int, 1000)
_ = s
}
}
```
## Fuzzing (Go 1.18+)
```go
func FuzzReverse(f *testing.F) {
// Seed corpus
testcases := []string{"hello", "world", "123", ""}
for _, tc := range testcases {
f.Add(tc)
}
f.Fuzz(func(t *testing.T, input string) {
reversed := Reverse(input)
doubleReversed := Reverse(reversed)
if input != doubleReversed {
t.Errorf("Reverse(Reverse(%q)) = %q, want %q", input, doubleReversed, input)
}
})
}
// Fuzz with multiple parameters
func FuzzAdd(f *testing.F) {
f.Add(1, 2)
f.Add(0, 0)
f.Add(-1, 1)
f.Fuzz(func(t *testing.T, a, b int) {
result := Add(a, b)
// Properties that should always hold
if result < a && b >= 0 {
t.Errorf("Add(%d, %d) = %d; result should be >= a when b >= 0", a, b, result)
}
})
}
```
## Test Coverage
```go
// Run tests with coverage:
// go test -cover
// go test -coverprofile=coverage.out
// go tool cover -html=coverage.out
func TestCalculate(t *testing.T) {
tests := []struct {
name string
input int
expected int
}{
{"zero", 0, 0},
{"positive", 5, 25},
{"negative", -3, 9},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := Calculate(tt.input)
if result != tt.expected {
t.Errorf("Calculate(%d) = %d; want %d", tt.input, result, tt.expected)
}
})
}
}
```
## Race Detector
```go
// Run with: go test -race
func TestConcurrentAccess(t *testing.T) {
var counter int
var wg sync.WaitGroup
// This will fail with -race if not synchronized
for i := 0; i < 10; i++ {
wg.Add(1)
go func() {
defer wg.Done()
counter++ // Data race!
}()
}
wg.Wait()
}
// Fixed version with mutex
func TestConcurrentAccessSafe(t *testing.T) {
var counter int
var mu sync.Mutex
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
wg.Add(1)
go func() {
defer wg.Done()
mu.Lock()
counter++
mu.Unlock()
}()
}
wg.Wait()
if counter != 10 {
t.Errorf("expected 10, got %d", counter)
}
}
```
## Golden Files
```go
import (
	"flag"
	"os"
	"path/filepath"
	"testing"
)
func TestRenderHTML(t *testing.T) {
data := Data{Title: "Test", Content: "Hello"}
result := RenderHTML(data)
goldenFile := filepath.Join("testdata", "expected.html")
if *update {
	// Update golden file: go test -update
	if err := os.WriteFile(goldenFile, []byte(result), 0644); err != nil {
		t.Fatalf("failed to update golden file: %v", err)
	}
}
expected, err := os.ReadFile(goldenFile)
if err != nil {
t.Fatalf("failed to read golden file: %v", err)
}
if result != string(expected) {
t.Errorf("output doesn't match golden file\ngot:\n%s\nwant:\n%s", result, expected)
}
}
var update = flag.Bool("update", false, "update golden files")
```
## Integration Tests
```go
// integration_test.go
//go:build integration

package myapp
import (
"testing"
"time"
)
func TestIntegration(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test in short mode")
}
// Long-running integration test
server := startTestServer(t)
defer server.Stop()
time.Sleep(100 * time.Millisecond) // Wait for server
client := NewClient(server.URL)
resp, err := client.Get("/health")
if err != nil {
t.Fatalf("health check failed: %v", err)
}
if resp.Status != "ok" {
t.Errorf("expected status ok, got %s", resp.Status)
}
}
// Run: go test -tags=integration
// Run short tests only: go test -short
```
## Testable Examples
```go
// Example tests that appear in godoc
func ExampleAdd() {
result := Add(2, 3)
fmt.Println(result)
// Output: 5
}
func ExampleAdd_negative() {
result := Add(-2, -3)
fmt.Println(result)
// Output: -5
}
// Unordered output
func ExampleKeys() {
m := map[string]int{"a": 1, "b": 2, "c": 3}
keys := Keys(m)
for _, k := range keys {
fmt.Println(k)
}
// Unordered output:
// a
// b
// c
}
```
## Quick Reference
| Command | Description |
|---------|-------------|
| `go test` | Run tests |
| `go test -v` | Verbose output |
| `go test -run TestName` | Run specific test |
| `go test -bench .` | Run benchmarks |
| `go test -cover` | Show coverage |
| `go test -race` | Run race detector |
| `go test -short` | Skip long tests |
| `go test -fuzz FuzzName` | Run fuzzing |
| `go test -cpuprofile cpu.prof` | CPU profiling |
| `go test -memprofile mem.prof` | Memory profiling |
================================================
FILE: .claude/skills/skill-creator/LICENSE.txt
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: .claude/skills/skill-creator/SKILL.md
================================================
---
name: skill-creator
description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, update or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
---
# Skill Creator
A skill for creating new skills and iteratively improving them.
At a high level, the process of creating a skill goes like this:
- Decide what you want the skill to do and roughly how it should do it
- Write a draft of the skill
- Create a few test prompts and run claude-with-access-to-the-skill on them
- Help the user evaluate the results both qualitatively and quantitatively
- While the runs happen in the background, draft some quantitative evals if there aren't any (if there are some, you can either use as is or modify if you feel something needs to change about them). Then explain them to the user (or if they already existed, explain the ones that already exist)
- Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
- Repeat until you're satisfied
- Expand the test set and try again at larger scale
Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
Cool? Cool.
## Communicating with the user
The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
- "evaluation" and "benchmark" are borderline, but OK
- for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them
When in doubt, it's fine to clarify a term with a short definition if you're unsure the user will get it.
---
## Creating a skill
### Capture Intent
Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step.
1. What should this skill enable Claude to do?
2. When should this skill trigger? (what user phrases/contexts)
3. What's the expected output format?
4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.
### Interview and Research
Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.
### Write the SKILL.md
Based on the user interview, fill in these components:
- **name**: Skill identifier
- **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
- **compatibility**: Required tools, dependencies (optional, rarely needed)
- **the rest of the skill :)**
### Skill Writing Guide
#### Anatomy of a Skill
```
skill-name/
├── SKILL.md (required)
│ ├── YAML frontmatter (name, description required)
│ └── Markdown instructions
└── Bundled Resources (optional)
├── scripts/ - Executable code for deterministic/repetitive tasks
├── references/ - Docs loaded into context as needed
└── assets/ - Files used in output (templates, icons, fonts)
```
#### Progressive Disclosure
Skills use a three-level loading system:
1. **Metadata** (name + description) - Always in context (~100 words)
2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal)
3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)
These word counts are approximate and you can feel free to go longer if needed.
**Key patterns:**
- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
- Reference files clearly from SKILL.md with guidance on when to read them
- For large reference files (>300 lines), include a table of contents
**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
├── aws.md
├── gcp.md
└── azure.md
```
Claude reads only the relevant reference file.
#### Principle of Lack of Surprise
This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents, if described to the user, should not surprise them. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" skill are OK though.
#### Writing Patterns
Prefer using the imperative form in instructions.
**Defining output formats** - You can do it like this:
```markdown
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations
```
**Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little):
```markdown
## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication
```
### Writing Style
Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
### Test Cases
After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.
```json
{
"skill_name": "example-skill",
"evals": [
{
"id": 1,
"prompt": "User's task prompt",
"expected_output": "Description of expected result",
"files": []
}
]
}
```
See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).
## Running and evaluating test cases
This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.
Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.) and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.
### Step 1: Spawn all runs (with-skill AND baseline) in the same turn
For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
**With-skill run:**
```
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
```
**Baseline run** (same prompt, but the baseline depends on context):
- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.
Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
```json
{
"eval_id": 0,
"eval_name": "descriptive-name-here",
"prompt": "The user's task prompt",
"assertions": []
}
```
### Step 2: While runs are in progress, draft assertions
Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.
Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.
Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
### Step 3: As runs complete, capture timing data
When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:
```json
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3
}
```
This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
### Step 4: Grade, aggregate, and launch the viewer
Once all runs are done:
1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory:
```bash
python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
```
This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects.
Put each with_skill version before its baseline counterpart.
3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
4. **Launch the viewer** with both qualitative outputs and quantitative data:
```bash
nohup python <skill-creator-path>/eval-viewer/generate_review.py \
<workspace>/iteration-N \
--skill-name "my-skill" \
--benchmark <workspace>/iteration-N/benchmark.json \
> /dev/null 2>&1 &
VIEWER_PID=$!
```
For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.
**Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.
Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
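To make the field-name requirement from step 1 concrete, here's a minimal `grading.json` sketch — the assertion text and evidence are placeholders, and any structure beyond the `expectations` array is an assumption:

```json
{
  "expectations": [
    {
      "text": "Output CSV contains a profit_margin column",
      "passed": true,
      "evidence": "Header row of outputs/result.csv includes profit_margin"
    },
    {
      "text": "All percentages are formatted with one decimal place",
      "passed": false,
      "evidence": "Row 3 shows 12.25% instead of 12.3%"
    }
  ]
}
```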
### What the user sees in the viewer
The "Outputs" tab shows one test case at a time:
- **Prompt**: the task that was given
- **Output**: the files the skill produced, rendered inline where possible
- **Previous Output** (iteration 2+): collapsed section showing last iteration's output
- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
- **Feedback**: a textbox that auto-saves as they type
- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox
The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`.
### Step 5: Read the feedback
When the user tells you they're done, read `feedback.json`:
```json
{
"reviews": [
{"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
{"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
{"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
],
"status": "complete"
}
```
Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
Kill the viewer server when you're done with it:
```bash
kill $VIEWER_PID 2>/dev/null
```
---
## Improving the skill
This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
### How to think about improvements
1. **Generalize from the feedback.** The big picture thing that's happening here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over again because it helps move faster. The user knows these examples in and out and it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. Rather than put in fiddly overfitty changes, or oppressively constrictive MUSTs, if there's some stubborn issue, you might try branching out and using different metaphors, or recommending different patterns of working. It's relatively cheap to try and maybe you'll land on something great.
2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.
3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and when given a good harness can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task and why the user is writing what they wrote, and what they actually wrote, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.
This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
### The iteration loop
After improving the skill:
1. Apply your improvements to the skill
2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
4. Wait for the user to review and tell you they're done
5. Read the new feedback, improve again, repeat
Keep going until:
- The user says they're happy
- The feedback is all empty (everything looks good)
- You're not making meaningful progress
---
## Advanced: Blind comparison
For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
---
## Description Optimization
The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
### Step 1: Generate trigger eval queries
Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
```json
[
{"query": "the user prompt", "should_trigger": true},
{"query": "another prompt", "should_trigger": false}
]
```
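Before presenting the eval set, it's worth sanity-checking that every entry matches this schema. A minimal sketch, assuming Python and the exact JSON shape above (the function name `validate_eval_set` is illustrative, not part of the scripts shipped with this skill):

```python
import json

def validate_eval_set(path):
    """Check that an eval-set JSON file matches the expected schema
    and report the should-trigger / should-not-trigger balance."""
    with open(path) as f:
        items = json.load(f)
    assert isinstance(items, list), "eval set must be a JSON array"
    for i, item in enumerate(items):
        assert isinstance(item.get("query"), str) and item["query"].strip(), \
            f"item {i}: 'query' must be a non-empty string"
        assert isinstance(item.get("should_trigger"), bool), \
            f"item {i}: 'should_trigger' must be a boolean"
    n_pos = sum(item["should_trigger"] for item in items)
    return {"total": len(items), "should_trigger": n_pos,
            "should_not_trigger": len(items) - n_pos}
```

A lopsided balance (e.g. 18 positives, 2 negatives) is a sign the set needs more near-miss negatives before showing it to the user.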
The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`
For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.
For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.
The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
### Step 2: Review with user
Present the eval set to the user for review using the HTML template:
1. Read the template from `assets/eval_review.html`
2. Replace the placeholders:
- `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
- `__SKILL_NAME_PLACEHOLDER__` → the skill's name
- `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
This step matters — bad eval queries lead to bad descriptions.
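The placeholder substitution in steps 1–3 can be done with plain string replacement — the only subtlety is that the eval data must be injected as a raw JS array, not a quoted string. A minimal sketch, assuming Python (the helper name `build_review_page` is illustrative; the placeholder names come from the template described above):

```python
import json
from pathlib import Path

def build_review_page(template_path, eval_items, skill_name, skill_description, out_path):
    """Fill the eval-review template's placeholders and write a standalone page."""
    html = Path(template_path).read_text()
    # Substituted as a raw JS array literal — no surrounding quotes.
    html = html.replace("__EVAL_DATA_PLACEHOLDER__", json.dumps(eval_items))
    html = html.replace("__SKILL_NAME_PLACEHOLDER__", skill_name)
    html = html.replace("__SKILL_DESCRIPTION_PLACEHOLDER__", skill_description)
    Path(out_path).write_text(html)
    return out_path
```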
### Step 3: Run the optimization loop
Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."
Save the eval set to the workspace, then run in the background:
```bash
python -m scripts.run_loop \
--eval-set <path-to-trigger-eval.json> \
--skill-path <path-to-skill> \
--model <model-id-powering-this-session> \
--max-iterations 5 \
--verbose
```
Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.
While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.
This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude with extended thinking to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
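The split and scoring logic can be sketched as follows. This is an illustrative approximation of what the loop does, assuming Python — the actual `run_loop.py` implementation may differ in details like seeding and tie handling:

```python
import random

def split_eval_set(items, train_frac=0.6, seed=0):
    """Shuffle and split eval queries into train and held-out test sets."""
    items = items[:]
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

def score(results):
    """results: list of (should_trigger, runs) pairs, where runs is the list
    of booleans from the 3 trigger attempts. A query counts as correct when
    the majority of runs match should_trigger."""
    correct = 0
    for should_trigger, runs in results:
        triggered = sum(runs) > len(runs) / 2
        correct += (triggered == should_trigger)
    return correct / len(results)
```

Selecting `best_description` by the held-out test score rather than the train score is what guards against a description that merely memorizes the training queries.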
### How skill triggering works
Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
### Step 4: Apply the result
Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
---
### Package and Present (only if `present_files` tool is available)
Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:
```bash
python -m scripts.package_skill <path/to/skill-folder>
```
After packaging, direct the user to the resulting `.skill` file path so they can install it.
---
## Claude.ai-specific instructions
In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:
**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.
**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"
**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.
**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.
**Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`) which is only available in Claude Code. Skip it if you're on Claude.ai.
**Blind comparison**: Requires subagents. Skip it.
**Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file.
---
## Cowork-Specific Instructions
If you're in Cowork, the main things to know are:
- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than parallel.)
- You don't have a browser or display, so when generating the eval viewer, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser.
- For whatever reason, the Cowork setup seems to disincline Claude from generating the eval viewer after running the tests, so to reiterate: whether you're in Cowork or Claude Code, after running tests you should always generate the eval viewer with `generate_review.py` (not hand-written boutique HTML) so the human can look at examples before you revise the skill yourself. Sorry in advance, but going all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating outputs yourself. You want to get them in front of the human ASAP!
- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).
- Packaging works — `package_skill.py` just needs Python and a filesystem.
- Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine since it uses `claude -p` via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.
---
## Reference files
The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.
- `agents/grader.md` — How to evaluate assertions against outputs
- `agents/comparator.md` — How to do blind A/B comparison between two outputs
- `agents/analyzer.md` — How to analyze why one version beat another
The references/ directory has additional documentation:
- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
---
Repeating one more time the core loop here for emphasis:
- Figure out what the skill is about
- Draft or edit the skill
- Run claude-with-access-to-the-skill on test prompts
- With the user, evaluate the outputs:
- Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
- Run quantitative evals
- Repeat until you and the user are satisfied
- Package the final skill and return it to the user.
Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.
Good luck!
================================================
FILE: .claude/skills/skill-creator/agents/analyzer.md
================================================
# Post-hoc Analyzer Agent
Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions.
## Role
After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved?
## Inputs
You receive these parameters in your prompt:
- **winner**: "A" or "B" (from blind comparison)
- **winner_skill_path**: Path to the skill that produced the winning output
- **winner_transcript_path**: Path to the execution transcript for the winner
- **loser_skill_path**: Path to the skill that produced the losing output
- **loser_transcript_path**: Path to the execution transcript for the loser
- **comparison_result_path**: Path to the blind comparator's output JSON
- **output_path**: Where to save the analysis results
## Process
### Step 1: Read Comparison Result
1. Read the blind comparator's output at comparison_result_path
2. Note the winning side (A or B), the reasoning, and any scores
3. Understand what the comparator valued in the winning output
### Step 2: Read Both Skills
1. Read the winner skill's SKILL.md and key referenced files
2. Read the loser skill's SKILL.md and key referenced files
3. Identify structural differences:
- Instructions clarity and specificity
- Script/tool usage patterns
- Example coverage
- Edge case handling
### Step 3: Read Both Transcripts
1. Read the winner's transcript
2. Read the loser's transcript
3. Compare execution patterns:
- How closely did each follow their skill's instructions?
- What tools were used differently?
- Where did the loser diverge from optimal behavior?
- Did either encounter errors or make recovery attempts?
### Step 4: Analyze Instruction Following
For each transcript, evaluate:
- Did the agent follow the skill's explicit instructions?
- Did the agent use the skill's provided tools/scripts?
- Were there missed opportunities to leverage skill content?
- Did the agent add unnecessary steps not in the skill?
Score instruction following 1-10 and note specific issues.
### Step 5: Identify Winner Strengths
Determine what made the winner better:
- Clearer instructions that led to better behavior?
- Better scripts/tools that produced better output?
- More comprehensive examples that guided edge cases?
- Better error handling guidance?
Be specific. Quote from skills/transcripts where relevant.
### Step 6: Identify Loser Weaknesses
Determine what held the loser back:
- Ambiguous instructions that led to suboptimal choices?
- Missing tools/scripts that forced workarounds?
- Gaps in edge case coverage?
- Poor error handling that caused failures?
### Step 7: Generate Improvement Suggestions
Based on the analysis, produce actionable suggestions for improving the loser skill:
- Specific instruction changes to make
- Tools/scripts to add or modify
- Examples to include
- Edge cases to address
Prioritize by impact. Focus on changes that would have changed the outcome.
### Step 8: Write Analysis Results
Save structured analysis to `{output_path}`.
## Output Format
Write a JSON file with this structure:
```json
{
"comparison_summary": {
"winner": "A",
"winner_skill": "path/to/winner/skill",
"loser_skill": "path/to/loser/skill",
"comparator_reasoning": "Brief summary of why comparator chose winner"
},
"winner_strengths": [
"Clear step-by-step instructions for handling multi-page documents",
"Included validation script that caught formatting errors",
"Explicit guidance on fallback behavior when OCR fails"
],
"loser_weaknesses": [
"Vague instruction 'process the document appropriately' led to inconsistent behavior",
"No script for validation, agent had to improvise and made errors",
"No guidance on OCR failure, agent gave up instead of trying alternatives"
],
"instruction_following": {
"winner": {
"score": 9,
"issues": [
"Minor: skipped optional logging step"
]
},
"loser": {
"score": 6,
"issues": [
"Did not use the skill's formatting template",
"Invented own approach instead of following step 3",
"Missed the 'always validate output' instruction"
]
}
},
"improvement_suggestions": [
{
"priority": "high",
"category": "instructions",
"suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template",
"expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
},
{
"priority": "high",
"category": "tools",
"suggestion": "Add validate_output.py script similar to winner skill's validation approach",
"expected_impact": "Would catch formatting errors before final output"
},
{
"priority": "medium",
"category": "error_handling",
"suggestion": "Add fallback instructions: 'If OCR fails, try: 1) different resolution, 2) image preprocessing, 3) manual extraction'",
"expected_impact": "Would prevent early failure on difficult documents"
}
],
"transcript_insights": {
"winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output",
"loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors"
}
}
```
## Guidelines
- **Be specific**: Quote from skills and transcripts, don't just say "instructions were unclear"
- **Be actionable**: Suggestions should be concrete changes, not vague advice
- **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent
- **Prioritize by impact**: Which changes would most likely have changed the outcome?
- **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental?
- **Stay objective**: Analyze what happened, don't editorialize
- **Think about generalization**: Would this improvement help on other evals too?
## Categories for Suggestions
Use these categories to organize improvement suggestions:
| Category | Description |
|----------|-------------|
| `instructions` | Changes to the skill's prose instructions |
| `tools` | Scripts, templates, or utilities to add/modify |
| `examples` | Example inputs/outputs to include |
| `error_handling` | Guidance for handling failures |
| `structure` | Reorganization of skill content |
| `references` | External docs or resources to add |
## Priority Levels
- **high**: Would likely change the outcome of this comparison
- **medium**: Would improve quality but may not change win/loss
- **low**: Nice to have, marginal improvement
---
# Analyzing Benchmark Results
When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not suggest skill improvements.
## Role
Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone.
## Inputs
You receive these parameters in your prompt:
- **benchmark_data_path**: Path to the in-progress benchmark.json with all run results
- **skill_path**: Path to the skill being benchmarked
- **output_path**: Where to save the notes (as JSON array of strings)
## Process
### Step 1: Read Benchmark Data
1. Read the benchmark.json containing all run results
2. Note the configurations tested (with_skill, without_skill)
3. Understand the run_summary aggregates already calculated
### Step 2: Analyze Per-Assertion Patterns
For each expectation across all runs:
- Does it **always pass** in both configurations? (may not differentiate skill value)
- Does it **always fail** in both configurations? (may be broken or beyond capability)
- Does it **always pass with skill but fail without**? (skill clearly adds value here)
- Does it **always fail with skill but pass without**? (skill may be hurting)
- Is it **highly variable**? (flaky expectation or non-deterministic behavior)
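The five buckets above can be expressed as a small classifier over the per-run pass/fail booleans. A minimal sketch, assuming Python (the function name and return strings are illustrative labels, not a schema required by benchmark.json):

```python
def classify_assertion(with_skill_passes, without_skill_passes):
    """Bucket one assertion by its pass pattern across runs.
    Each argument is a list of booleans, one per run."""
    all_with = all(with_skill_passes)
    all_without = all(without_skill_passes)
    never_with = not any(with_skill_passes)
    never_without = not any(without_skill_passes)
    if all_with and all_without:
        return "always_passes_both"    # may not differentiate skill value
    if never_with and never_without:
        return "always_fails_both"     # broken or beyond capability
    if all_with and never_without:
        return "skill_adds_value"
    if never_with and all_without:
        return "skill_may_hurt"
    return "variable"                  # flaky or non-deterministic
```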
### Step 3: Analyze Cross-Eval Patterns
Look for patterns across evals:
- Are certain eval types consistently harder/easier?
- Do some evals show high variance while others are stable?
- Are there surprising results that contradict expectations?
### Step 4: Analyze Metrics Patterns
Look at time_seconds, tokens, tool_calls:
- Does the skill significantly increase execution time?
- Is there high variance in resource usage?
- Are there outlier runs that skew the aggregates?
### Step 5: Generate Notes
Write freeform observations as a list of strings. Each note should:
- State a specific observation
- Be grounded in the data (not speculation)
- Help the user understand something the aggregate metrics don't show
Examples:
- "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value"
- "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky"
- "Without-skill runs consistently fail on table extraction expectations (0% pass rate)"
- "Skill adds 13s average execution time but improves pass rate by 50%"
- "Token usage is 80% higher with skill, primarily due to script output parsing"
- "All 3 without-skill runs for eval 1 produced empty output"
### Step 6: Write Notes
Save notes to `{output_path}` as a JSON array of strings:
```json
[
"Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
"Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure",
"Without-skill runs consistently fail on table extraction expectations",
"Skill adds 13s average execution time but improves pass rate by 50%"
]
```
## Guidelines
**DO:**
- Report what you observe in the data
- Be specific about which evals, expectations, or runs you're referring to
- Note patterns that aggregate metrics would hide
- Provide context that helps interpret the numbers
**DO NOT:**
- Suggest improvements to the skill (that's for the improvement step, not benchmarking)
- Make subjective quality judgments ("the output was good/bad")
- Speculate about causes without evidence
- Repeat information already in the run_summary aggregates
================================================
FILE: .claude/skills/skill-creator/agents/comparator.md
================================================
# Blind Comparator Agent
Compare two outputs WITHOUT knowing which skill produced them.
## Role
The Blind Comparator judges which output better accomplishes the eval task. You receive two outputs labeled A and B, but you do NOT know which skill produced which. This prevents bias toward a particular skill or approach.
Your judgment is based purely on output quality and task completion.
## Inputs
You receive these parameters in your prompt:
- **output_a_path**: Path to the first output file or directory
- **output_b_path**: Path to the second output file or directory
- **eval_prompt**: The original task/prompt that was executed
- **expectations**: List of expectations to check (optional - may be empty)
## Process
### Step 1: Read Both Outputs
1. Examine output A (file or directory)
2. Examine output B (file or directory)
3. Note the type, structure, and content of each
4. If outputs are directories, examine all relevant files inside
### Step 2: Understand the Task
1. Read the eval_prompt carefully
2. Identify what the task requires:
- What should be produced?
- What qualities matter (accuracy, completeness, format)?
- What would distinguish a good output from a poor one?
### Step 3: Generate Evaluation Rubric
Based on the task, generate a rubric with two dimensions:
**Content Rubric** (what the output contains):
| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|-----------|----------|----------------|---------------|
| Correctness | Major errors | Minor errors | Fully correct |
| Completeness | Missing key elements | Mostly complete | All elements present |
| Accuracy | Significant inaccuracies | Minor inaccuracies | Accurate throughout |
**Structure Rubric** (how the output is organized):
| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|-----------|----------|----------------|---------------|
| Organization | Disorganized | Reasonably organized | Clear, logical structure |
| Formatting | Inconsistent/broken | Mostly consistent | Professional, polished |
| Usability | Difficult to use | Usable with effort | Easy to use |
Adapt criteria to the specific task. For example:
- PDF form → "Field alignment", "Text readability", "Data placement"
- Document → "Section structure", "Heading hierarchy", "Paragraph flow"
- Data output → "Schema correctness", "Data types", "Completeness"
### Step 4: Evaluate Each Output Against the Rubric
For each output (A and B):
1. **Score each criterion** on the rubric (1-5 scale)
2. **Calculate dimension totals**: Content score, Structure score
3. **Calculate overall score**: Average of dimension scores, scaled to 1-10
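The arithmetic above is mechanical: each dimension score is the mean of its 1–5 criteria, and the overall score is the mean of the two dimensions doubled onto a 1–10 scale. A minimal sketch in Python (the rounding to one decimal matches the example output in this document):

```python
def rubric_scores(content, structure):
    """content/structure: dicts mapping criterion name -> 1-5 score.
    Returns (content_score, structure_score, overall_score)."""
    content_score = round(sum(content.values()) / len(content), 1)
    structure_score = round(sum(structure.values()) / len(structure), 1)
    # Average of the two 1-5 dimension scores, scaled to 1-10.
    overall = round((content_score + structure_score) / 2 * 2, 1)
    return content_score, structure_score, overall
```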
### Step 5: Check Assertions (if provided)
If expectations are provided:
1. Check each expectation against output A
2. Check each expectation against output B
3. Count pass rates for each output
4. Use expectation scores as secondary evidence (not the primary decision factor)
### Step 6: Determine the Winner
Compare A and B based on (in priority order):
1. **Primary**: Overall rubric score (content + structure)
2. **Secondary**: Assertion pass rates (if applicable)
3. **Tiebreaker**: If truly equal, declare a TIE
Be decisive - ties should be rare. One output is usually better, even if marginally.
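The priority order can be sketched as a decision function, assuming Python. The `margin` parameter (how close two overall scores must be before falling through to assertion pass rates) is an assumption for illustration — the document only specifies the ordering, not a threshold:

```python
def pick_winner(score_a, score_b, pass_rate_a=None, pass_rate_b=None, margin=0.5):
    """Return "A", "B", or "TIE": rubric score first, pass rate as secondary
    evidence, tie only when both are genuinely equal."""
    if abs(score_a - score_b) > margin:
        return "A" if score_a > score_b else "B"
    if pass_rate_a is not None and pass_rate_b is not None and pass_rate_a != pass_rate_b:
        return "A" if pass_rate_a > pass_rate_b else "B"
    if score_a != score_b:
        return "A" if score_a > score_b else "B"
    return "TIE"
```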
### Step 7: Write Comparison Results
Save results to a JSON file at the path specified (or `comparison.json` if not specified).
## Output Format
Write a JSON file with this structure:
```json
{
"winner": "A",
"reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
"rubric": {
"A": {
"content": {
"correctness": 5,
"completeness": 5,
"accuracy": 4
},
"structure": {
"organization": 4,
"formatting": 5,
"usability": 4
},
"content_score": 4.7,
"structure_score": 4.3,
"overall_score": 9.0
},
"B": {
"content": {
"correctness": 3,
"completeness": 2,
"accuracy": 3
},
"structure": {
"organization": 3,
"formatting": 2,
"usability": 3
},
"content_score": 2.7,
"structure_score": 2.7,
"overall_score": 5.4
}
},
"output_quality": {
"A": {
"score": 9,
"strengths": ["Complete solution", "Well-formatted", "All fields present"],
"weaknesses": ["Minor style inconsistency in header"]
},
"B": {
"score": 5,
"strengths": ["Readable output", "Correct basic structure"],
"weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
}
},
"expectation_results": {
"A": {
"passed": 4,
"total": 5,
"pass_rate": 0.80,
"details": [
{"text": "Output includes name", "passed": true},
{"text": "Output includes date", "passed": true},
{"text": "Format is PDF", "passed": true},
{"text": "Contains signature", "passed": false},
{"text": "Readable text", "passed": true}
]
},
"B": {
"passed": 3,
"total": 5,
"pass_rate": 0.60,
"details": [
{"text": "Output includes name", "passed": true},
{"text": "Output includes date", "passed": false},
{"text": "Format is PDF", "passed": true},
{"text": "Contains signature", "passed": false},
{"text": "Readable text", "passed": true}
]
}
}
}
```
If no expectations were provided, omit the `expectation_results` field entirely.
## Field Descriptions
- **winner**: "A", "B", or "TIE"
- **reasoning**: Clear explanation of why the winner was chosen (or why it's a tie)
- **rubric**: Structured rubric evaluation for each output
- **content**: Scores for content criteria (correctness, completeness, accuracy)
- **structure**: Scores for structure criteria (organization, formatting, usability)
- **content_score**: Average of content criteria (1-5)
- **structure_score**: Average of structure criteria (1-5)
- **overall_score**: Combined score scaled to 1-10
- **output_quality**: Summary quality assessment
- **score**: 1-10 rating (should match rubric overall_score)
- **strengths**: List of positive aspects
- **weaknesses**: List of issues or shortcomings
- **expectation_results**: (Only if expectations provided)
- **passed**: Number of expectations that passed
- **total**: Total number of expectations
- **pass_rate**: Fraction passed (0.0 to 1.0)
- **details**: Individual expectation results
## Guidelines
- **Stay blind**: DO NOT try to infer which skill produced which output. Judge purely on output quality.
- **Be specific**: Cite specific examples when explaining strengths and weaknesses.
- **Be decisive**: Choose a winner unless outputs are genuinely equivalent.
- **Output quality first**: Assertion scores are secondary to overall task completion.
- **Be objective**: Don't favor outputs based on style preferences; focus on correctness and completeness.
- **Explain your reasoning**: The reasoning field should make it clear why you chose the winner.
- **Handle edge cases**: If both outputs fail, pick the one that fails less badly. If both are excellent, pick the one that's marginally better.
================================================
FILE: .claude/skills/skill-creator/agents/grader.md
================================================
# Grader Agent
Evaluate expectations against an execution transcript and outputs.
## Role
The Grader reviews a transcript and output files, then determines whether each expectation passes or fails. Provide clear evidence for each judgment.
You have two jobs: grade the outputs, and critique the evals themselves. A passing grade on a weak assertion is worse than useless — it creates false confidence. When you notice an assertion that's trivially satisfied, or an important outcome that no assertion checks, say so.
## Inputs
You receive these parameters in your prompt:
- **expectations**: List of expectations to evaluate (strings)
- **transcript_path**: Path to the execution transcript (markdown file)
- **outputs_dir**: Directory containing output files from execution
## Process
### Step 1: Read the Transcript
1. Read the transcript file completely
2. Note the eval prompt, execution steps, and final result
3. Identify any issues or errors documented
### Step 2: Examine Output Files
1. List files in outputs_dir
2. Read/examine each file relevant to the expectations. If outputs aren't plain text, use the inspection tools provided in your prompt — don't rely solely on what the transcript says the executor produced.
3. Note contents, structure, and quality
### Step 3: Evaluate Each Assertion
For each expectation:
1. **Search for evidence** in the transcript and outputs
2. **Determine verdict**:
- **PASS**: Clear evidence the expectation is true AND the evidence reflects genuine task completion, not just surface-level compliance
- **FAIL**: No evidence, or evidence contradicts the expectation, or the evidence is superficial (e.g., correct filename but empty/wrong content)
3. **Cite the evidence**: Quote the specific text or describe what you found
### Step 4: Extract and Verify Claims
Beyond the predefined expectations, extract implicit claims from the outputs and verify them:
1. **Extract claims** from the transcript and outputs:
- Factual statements ("The form has 12 fields")
- Process claims ("Used pypdf to fill the form")
- Quality claims ("All fields were filled correctly")
2. **Verify each claim**:
- **Factual claims**: Can be checked against the outputs or external sources
- **Process claims**: Can be verified from the transcript
- **Quality claims**: Evaluate whether the claim is justified
3. **Flag unverifiable claims**: Note claims that cannot be verified with available information
This catches issues that predefined expectations might miss.
### Step 5: Read User Notes
If `{outputs_dir}/user_notes.md` exists:
1. Read it and note any uncertainties or issues flagged by the executor
2. Include relevant concerns in the grading output
3. These may reveal problems even when expectations pass
### Step 6: Critique the Evals
After grading, consider whether the evals themselves could be improved. Only surface suggestions when there's a clear gap.
Good suggestions test meaningful outcomes — assertions that are hard to satisfy without actually doing the work correctly. Think about what makes an assertion *discriminating*: it passes when the skill genuinely succeeds and fails when it doesn't.
Suggestions worth raising:
- An assertion that passed but would also pass for a clearly wrong output (e.g., checking filename existence but not file content)
- An important outcome you observed — good or bad — that no assertion covers at all
- An assertion that can't actually be verified from the available outputs
Keep the bar high. The goal is to flag things the eval author would say "good catch" about, not to nitpick every assertion.
### Step 7: Write Grading Results
Save results to `{outputs_dir}/../grading.json` (sibling to outputs_dir).
## Grading Criteria
**PASS when**:
- The transcript or outputs clearly demonstrate the expectation is true
- Specific evidence can be cited
- The evidence reflects genuine substance, not just surface compliance (e.g., a file exists AND contains correct content, not just the right filename)
**FAIL when**:
- No evidence found for the expectation
- Evidence contradicts the expectation
- The expectation cannot be verified from available information
- The evidence is superficial — the assertion is technically satisfied but the underlying task outcome is wrong or incomplete
- The output appears to meet the assertion by coincidence rather than by actually doing the work
**When uncertain**: Fail. The burden of proof is on the evidence; an expectation passes only when the transcript or outputs clearly support it.
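The gap between superficial and discriminating evidence can be sketched concretely. In this hypothetical Python snippet (the file name `contacts.json`, its keys, and the expected values are all illustrative, not part of any real eval), the first check passes for any output with the right filename, while the second passes only when the content carries the expected substance:

```python
import json
from pathlib import Path

def superficial_check(outputs_dir: Path) -> bool:
    # Passes as long as the file exists, even if it is empty or wrong.
    return (outputs_dir / "contacts.json").exists()

def discriminating_check(outputs_dir: Path) -> bool:
    # Passes only when the extracted record carries the expected substance.
    path = outputs_dir / "contacts.json"
    if not path.exists():
        return False
    try:
        data = json.loads(path.read_text())
    except json.JSONDecodeError:
        return False
    return any(
        c.get("name") == "John Smith" and c.get("phone")
        for c in data.get("contacts", [])
    )
```

A hallucinated file satisfies the first check but not the second; that asymmetry is what makes an assertion discriminating.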
### Step 8: Read Executor Metrics and Timing
1. If `{outputs_dir}/metrics.json` exists, read it and include in grading output
2. If `{outputs_dir}/../timing.json` exists, read it and include timing data
## Output Format
Write a JSON file with this structure:
```json
{
"expectations": [
{
"text": "The output includes the name 'John Smith'",
"passed": true,
"evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
},
{
"text": "The spreadsheet has a SUM formula in cell B10",
"passed": false,
"evidence": "No spreadsheet was created. The output was a text file."
},
{
"text": "The assistant used the skill's OCR script",
"passed": true,
"evidence": "Transcript Step 2 shows: 'Tool: Bash - python ocr_script.py image.png'"
}
],
"summary": {
"passed": 2,
"failed": 1,
"total": 3,
"pass_rate": 0.67
},
"execution_metrics": {
"tool_calls": {
"Read": 5,
"Write": 2,
"Bash": 8
},
"total_tool_calls": 15,
"total_steps": 6,
"errors_encountered": 0,
"output_chars": 12450,
"transcript_chars": 3200
},
"timing": {
"executor_duration_seconds": 165.0,
"grader_duration_seconds": 26.0,
"total_duration_seconds": 191.0
},
"claims": [
{
"claim": "The form has 12 fillable fields",
"type": "factual",
"verified": true,
"evidence": "Counted 12 fields in field_info.json"
},
{
"claim": "All required fields were populated",
"type": "quality",
"verified": false,
"evidence": "Reference section was left blank despite data being available"
}
],
"user_notes_summary": {
"uncertainties": ["Used 2023 data, may be stale"],
"needs_review": [],
"workarounds": ["Fell back to text overlay for non-fillable fields"]
},
"eval_feedback": {
"suggestions": [
{
"assertion": "The output includes the name 'John Smith'",
"reason": "A hallucinated document that mentions the name would also pass — consider checking it appears as the primary contact with matching phone and email from the input"
},
{
"reason": "No assertion checks whether the extracted phone numbers match the input — I observed incorrect numbers in the output that went uncaught"
}
],
"overall": "Assertions check presence but not correctness. Consider adding content verification."
}
}
```
## Field Descriptions
- **expectations**: Array of graded expectations
- **text**: The original expectation text
- **passed**: Boolean - true if expectation passes
- **evidence**: Specific quote or description supporting the verdict
- **summary**: Aggregate statistics
- **passed**: Count of passed expectations
- **failed**: Count of failed expectations
- **total**: Total expectations evaluated
- **pass_rate**: Fraction passed (0.0 to 1.0)
- **execution_metrics**: Copied from executor's metrics.json (if available)
- **output_chars**: Total character count of output files (proxy for tokens)
- **transcript_chars**: Character count of transcript
- **timing**: Wall clock timing from timing.json (if available)
  - **executor_duration_seconds**: Time spent in the executor subagent
  - **grader_duration_seconds**: Time spent in the grader subagent
  - **total_duration_seconds**: Total elapsed time for the run
- **claims**: Extracted and verified claims from the output
- **claim**: The statement being verified
- **type**: "factual", "process", or "quality"
- **verified**: Boolean - whether the claim holds
- **evidence**: Supporting or contradicting evidence
- **user_notes_summary**: Issues flagged by the executor
- **uncertainties**: Things the executor wasn't sure about
- **needs_review**: Items requiring human attention
- **workarounds**: Places where the skill didn't work as expected
- **eval_feedback**: Improvement suggestions for the evals (only when warranted)
- **suggestions**: List of concrete suggestions, each with a `reason` and optionally an `assertion` it relates to
- **overall**: Brief assessment — can be "No suggestions, evals look solid" if nothing to flag
## Guidelines
- **Be objective**: Base verdicts on evidence, not assumptions
- **Be specific**: Quote the exact text that supports your verdict
- **Be thorough**: Check both transcript and output files
- **Be consistent**: Apply the same standard to each expectation
- **Explain failures**: Make it clear why evidence was insufficient
- **No partial credit**: Each expectation is pass or fail, not partial
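The `summary` block follows mechanically from the graded expectations. A minimal sketch (the `summarize` helper is hypothetical, not one of the skill's scripts; rounding to two decimals matches the `0.67` in the example above):

```python
def summarize(expectations: list[dict]) -> dict:
    # Derive the summary block from graded expectations:
    # each entry is a dict with a boolean "passed" field.
    passed = sum(1 for e in expectations if e["passed"])
    total = len(expectations)
    return {
        "passed": passed,
        "failed": total - passed,
        "total": total,
        "pass_rate": round(passed / total, 2) if total else 0.0,
    }
```

For the three-expectation example above (two passes, one fail), this yields `{"passed": 2, "failed": 1, "total": 3, "pass_rate": 0.67}`.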
================================================
FILE: .claude/skills/skill-creator/assets/eval_review.html
================================================
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Eval Set Review - __SKILL_NAME_PLACEHOLDER__</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body { font-family: 'Lora', Georgia, serif; background: #faf9f5; padding: 2rem; color: #141413; }
h1 { font-family: 'Poppins', sans-serif; margin-bottom: 0.5rem; font-size: 1.5rem; }
.description { color: #b0aea5; margin-bottom: 1.5rem; font-style: italic; max-width: 900px; }
.controls { margin-bottom: 1rem; display: flex; gap: 0.5rem; }
.btn { font-family: 'Poppins', sans-serif; padding: 0.5rem 1rem; border: none; border-radius: 6px; cursor: pointer; font-size: 0.875rem; font-weight: 500; }
.btn-add { background: #6a9bcc; color: white; }
.btn-add:hover { background: #5889b8; }
.btn-export { background: #d97757; color: white; }
.btn-export:hover { background: #c4613f; }
table { width: 100%; max-width: 1100px; border-collapse: collapse; background: white; border-radius: 6px; overflow: hidden; box-shadow: 0 1px 3px rgba(0,0,0,0.08); }
th { font-family: 'Poppins', sans-serif; background: #141413; color: #faf9f5; padding: 0.75rem 1rem; text-align: left; font-size: 0.875rem; }
td { padding: 0.75rem 1rem; border-bottom: 1px solid #e8e6dc; vertical-align: top; }
tr:nth-child(even) td { background: #faf9f5; }
tr:hover td { background: #f3f1ea; }
.section-header td { background: #e8e6dc; font-family: 'Poppins', sans-serif; font-weight: 500; font-size: 0.8rem; color: #141413; text-transform: uppercase; letter-spacing: 0.05em; }
.query-input { width: 100%; padding: 0.4rem; border: 1px solid #e8e6dc; border-radius: 4px; font-size: 0.875rem; font-family: 'Lora', Georgia, serif; resize: vertical; min-height: 60px; }
.query-input:focus { outline: none; border-color: #d97757; box-shadow: 0 0 0 2px rgba(217,119,87,0.15); }
.toggle { position: relative; display: inline-block; width: 44px; height: 24px; }
.toggle input { opacity: 0; width: 0; height: 0; }
.toggle .slider { position: absolute; inset: 0; background: #b0aea5; border-radius: 24px; cursor: pointer; transition: 0.2s; }
.toggle .slider::before { content: ""; position: absolute; width: 18px; height: 18px; left: 3px; bottom: 3px; background: white; border-radius: 50%; transition: 0.2s; }
.toggle input:checked + .slider { background: #d97757; }
.toggle input:checked + .slider::before { transform: translateX(20px); }
.btn-delete { background: #c44; color: white; padding: 0.3rem 0.6rem; border: none; border-radius: 4px; cursor: pointer; font-size: 0.75rem; font-family: 'Poppins', sans-serif; }
.btn-delete:hover { background: #a33; }
.summary { margin-top: 1rem; color: #b0aea5; font-size: 0.875rem; }
</style>
</head>
<body>
<h1>Eval Set Review: <span id="skill-name">__SKILL_NAME_PLACEHOLDER__</span></h1>
<p class="description">Current description: <span id="skill-desc">__SKILL_DESCRIPTION_PLACEHOLDER__</span></p>
<div class="controls">
<button class="btn btn-add" onclick="addRow()">+ Add Query</button>
<button class="btn btn-export" onclick="exportEvalSet()">Export Eval Set</button>
</div>
<table>
<thead>
<tr>
<th style="width:65%">Query</th>
<th style="width:18%">Should Trigger</th>
<th style="width:10%">Actions</th>
</tr>
</thead>
<tbody id="eval-body"></tbody>
</table>
<p class="summary" id="summary"></p>
<script>
const EVAL_DATA = __EVAL_DATA_PLACEHOLDER__;
let evalItems = [...EVAL_DATA];
function render() {
const tbody = document.getElementById('eval-body');
tbody.innerHTML = '';
// Sort: should-trigger first, then should-not-trigger
const sorted = evalItems
.map((item, origIdx) => ({ ...item, origIdx }))
.sort((a, b) => (b.should_trigger ? 1 : 0) - (a.should_trigger ? 1 : 0));
let lastGroup = null;
sorted.forEach(item => {
const group = item.should_trigger ? 'trigger' : 'no-trigger';
if (group !== lastGroup) {
const headerRow = document.createElement('tr');
headerRow.className = 'section-header';
headerRow.innerHTML = `<td colspan="3">${item.should_trigger ? 'Should Trigger' : 'Should NOT Trigger'}</td>`;
tbody.appendChild(headerRow);
lastGroup = group;
}
const idx = item.origIdx;
const tr = document.createElement('tr');
tr.innerHTML = `
<td><textarea class="query-input" onchange="updateQuery(${idx}, this.value)">${escapeHtml(item.query)}</textarea></td>
<td>
<label class="toggle">
<input type="checkbox" ${item.should_trigger ? 'checked' : ''} onchange="updateTrigger(${idx}, this.checked)">
<span class="slider"></span>
</label>
<span style="margin-left:8px;font-size:0.8rem;color:#b0aea5">${item.should_trigger ? 'Yes' : 'No'}</span>
</td>
<td><button class="btn-delete" onclick="deleteRow(${idx})">Delete</button></td>
`;
tbody.appendChild(tr);
});
updateSummary();
}
function escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
function updateQuery(idx, value) { evalItems[idx].query = value; updateSummary(); }
function updateTrigger(idx, value) { evalItems[idx].should_trigger = value; render(); }
function deleteRow(idx) { evalItems.splice(idx, 1); render(); }
function addRow() {
evalItems.push({ query: '', should_trigger: true });
render();
const inputs = document.querySelectorAll('.query-input');
inputs[inputs.length - 1].focus();
}
function updateSummary() {
const trigger = evalItems.filter(i => i.should_trigger).length;
const noTrigger = evalItems.filter(i => !i.should_trigger).length;
document.getElementById('summary').textContent =
`${evalItems.length} queries total: ${trigger} should trigger, ${noTrigger} should not trigger`;
}
function exportEvalSet() {
const valid = evalItems.filter(i => i.query.trim() !== '');
const data = valid.map(i => ({ query: i.query.trim(), should_trigger: i.should_trigger }));
const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'eval_set.json';
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url);
}
render();
</script>
</body>
</html>
================================================
FILE: .claude/skills/skill-creator/eval-viewer/generate_review.py
================================================
#!/usr/bin/env python3
"""Generate and serve a review page for eval results.
Reads the workspace directory, discovers runs (directories with outputs/),
embeds all output data into a self-contained HTML page, and serves it via
a tiny HTTP server. Feedback auto-saves to feedback.json in the workspace.
Usage:
python generate_review.py <workspace-path> [--port PORT] [--skill-name NAME]
python generate_review.py <workspace-path> --previous-feedback /path/to/old/feedback.json
No dependencies beyond the Python stdlib are required.
"""
import argparse
import base64
import json
import mimetypes
import os
import re
import signal
import subprocess
import sys
import time
import webbrowser
from functools import partial
from http.server import HTTPServer, BaseHTTPRequestHandler
from pathlib import Path
# Files to exclude from output listings
METADATA_FILES = {"transcript.md", "user_notes.md", "metrics.json"}
# Extensions we render as inline text
TEXT_EXTENSIONS = {
".txt", ".md", ".json", ".csv", ".py", ".js", ".ts", ".tsx", ".jsx",
".yaml", ".yml", ".xml", ".html", ".css", ".sh", ".rb", ".go", ".rs",
".java", ".c", ".cpp", ".h", ".hpp", ".sql", ".r", ".toml",
}
# Extensions we render as inline images
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp"}
# MIME type overrides for common types
MIME_OVERRIDES = {
".svg": "image/svg+xml",
".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
}
def get_mime_type(path: Path) -> str:
ext = path.suffix.lower()
if ext in MIME_OVERRIDES:
return MIME_OVERRIDES[ext]
mime, _ = mimetypes.guess_type(str(path))
return mime or "application/octet-stream"
def find_runs(workspace: Path) -> list[dict]:
"""Recursively find directories that contain an outputs/ subdirectory."""
runs: list[dict] = []
_find_runs_recursive(workspace, workspace, runs)
    # eval_id may be None; map it to +inf so unidentified runs sort last
    # (mixing None with ints in a sort key raises TypeError in Python 3).
    runs.sort(key=lambda r: (r["eval_id"] if r.get("eval_id") is not None else float("inf"), r["id"]))
return runs
def _find_runs_recursive(root: Path, current: Path, runs: list[dict]) -> None:
if not current.is_dir():
return
outputs_dir = current / "outputs"
if outputs_dir.is_dir():
run = build_run(root, current)
if run:
runs.append(run)
return
skip = {"node_modules", ".git", "__pycache__", "skill", "inputs"}
for child in sorted(current.iterdir()):
if child.is_dir() and child.name not in skip:
_find_runs_recursive(root, child, runs)
def build_run(root: Path, run_dir: Path) -> dict | None:
"""Build a run dict with prompt, outputs, and grading data."""
prompt = ""
eval_id = None
# Try eval_metadata.json
for candidate in [run_dir / "eval_metadata.json", run_dir.parent / "eval_metadata.json"]:
if candidate.exists():
try:
metadata = json.loads(candidate.read_text())
prompt = metadata.get("prompt", "")
eval_id = metadata.get("eval_id")
except (json.JSONDecodeError, OSError):
pass
if prompt:
break
# Fall back to transcript.md
if not prompt:
for candidate in [run_dir / "transcript.md", run_dir / "outputs" / "transcript.md"]:
if candidate.exists():
try:
text = candidate.read_text()
match = re.search(r"## Eval Prompt\n\n([\s\S]*?)(?=\n##|$)", text)
if match:
prompt = match.group(1).strip()
except OSError:
pass
if prompt:
break
if not prompt:
prompt = "(No prompt found)"
run_id = str(run_dir.relative_to(root)).replace("/", "-").replace("\\", "-")
# Collect output files
outputs_dir = run_dir / "outputs"
output_files: list[dict] = []
if outputs_dir.is_dir():
for f in sorted(outputs_dir.iterdir()):
if f.is_file() and f.name not in METADATA_FILES:
output_files.append(embed_file(f))
# Load grading if present
grading = None
for candidate in [run_dir / "grading.json", run_dir.parent / "grading.json"]:
if candidate.exists():
try:
grading = json.loads(candidate.read_text())
except (json.JSONDecodeError, OSError):
pass
if grading:
break
return {
"id": run_id,
"prompt": prompt,
"eval_id": eval_id,
"outputs": output_files,
"grading": grading,
}
def embed_file(path: Path) -> dict:
"""Read a file and return an embedded representation."""
ext = path.suffix.lower()
mime = get_mime_type(path)
if ext in TEXT_EXTENSIONS:
try:
content = path.read_text(errors="replace")
except OSError:
content = "(Error reading file)"
return {
"name": path.name,
"type": "text",
"content": content,
}
elif ext in IMAGE_EXTENSIONS:
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "image",
"mime": mime,
"data_uri": f"data:{mime};base64,{b64}",
}
elif ext == ".pdf":
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "pdf",
"data_uri": f"data:{mime};base64,{b64}",
}
elif ext == ".xlsx":
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "xlsx",
"data_b64": b64,
}
else:
# Binary / unknown — base64 download link
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "binary",
"mime": mime,
"data_uri": f"data:{mime};base64,{b64}",
}
def load_previous_iteration(workspace: Path) -> dict[str, dict]:
"""Load previous iteration's feedback and outputs.
Returns a map of run_id -> {"feedback": str, "outputs": list[dict]}.
"""
result: dict[str, dict] = {}
# Load feedback
feedback_map: dict[str, str] = {}
feedback_path = workspace / "feedback.json"
if feedback_path.exists():
try:
data = json.loads(feedback_path.read_text())
feedback_map = {
r["run_id"]: r["feedback"]
for r in data.get("reviews", [])
if r.get("feedback", "").strip()
}
except (json.JSONDecodeError, OSError, KeyError):
pass
# Load runs (to get outputs)
prev_runs = find_runs(workspace)
for run in prev_runs:
result[run["id"]] = {
"feedback": feedback_map.get(run["id"], ""),
"outputs": run.get("outputs", []),
}
# Also add feedback for run_ids that had feedback but no matching run
for run_id, fb in feedback_map.items():
if run_id not in result:
result[run_id] = {"feedback": fb, "outputs": []}
return result
def generate_html(
runs: list[dict],
skill_name: str,
previous: dict[str, dict] | None = None,
benchmark: dict | None = None,
) -> str:
"""Generate the complete standalone HTML page with embedded data."""
template_path = Path(__file__).parent / "viewer.html"
template = template_path.read_text()
# Build previous_feedback and previous_outputs maps for the template
previous_feedback: dict[str, str] = {}
previous_outputs: dict[str, list[dict]] = {}
if previous:
for run_id, data in previous.items():
if data.get("feedback"):
previous_feedback[run_id] = data["feedback"]
if data.get("outputs"):
previous_outputs[run_id] = data["outputs"]
embedded = {
"skill_name": skill_name,
"runs": runs,
"previous_feedback": previous_feedback,
"previous_outputs": previous_outputs,
}
if benchmark:
embedded["benchmark"] = benchmark
data_json = json.dumps(embedded)
return template.replace("/*__EMBEDDED_DATA__*/", f"const EMBEDDED_DATA = {data_json};")
# ---------------------------------------------------------------------------
# HTTP server (stdlib only, zero dependencies)
# ---------------------------------------------------------------------------
def _kill_port(port: int) -> None:
"""Kill any process listening on the given port."""
try:
result = subprocess.run(
["lsof", "-ti", f":{port}"],
capture_output=True, text=True, timeout=5,
)
for pid_str in result.stdout.strip().split("\n"):
if pid_str.strip():
try:
os.kill(int(pid_str.strip()), signal.SIGTERM)
except (ProcessLookupError, ValueError):
pass
if result.stdout.strip():
time.sleep(0.5)
except subprocess.TimeoutExpired:
pass
except FileNotFoundError:
print("Note: lsof not found, cannot check if port is in use", file=sys.stderr)
class ReviewHandler(BaseHTTPRequestHandler):
"""Serves the review HTML and handles feedback saves.
Regenerates the HTML on each page load so that refreshing the browser
picks up new eval outputs without restarting the server.
"""
def __init__(
self,
workspace: Path,
skill_name: str,
feedback_path: Path,
previous: dict[str, dict],
benchmark_path: Path | None,
*args,
**kwargs,
):
self.workspace = workspace
self.skill_name = skill_name
self.feedback_path = feedback_path
self.previous = previous
self.benchmark_path = benchmark_path
super().__init__(*args, **kwargs)
def do_GET(self) -> None:
if self.path == "/" or self.path == "/index.html":
# Regenerate HTML on each request (re-scans workspace for new outputs)
runs = find_runs(self.workspace)
benchmark = None
if self.benchmark_path and self.benchmark_path.exists():
try:
benchmark = json.loads(self.benchmark_path.read_text())
except (json.JSONDecodeError, OSError):
pass
html = generate_html(runs, self.skill_name, self.previous, benchmark)
content = html.encode("utf-8")
self.send_response(200)
self.send_header("Content-Type", "text/html; charset=utf-8")
self.send_header("Content-Length", str(len(content)))
self.end_headers()
self.wfile.write(content)
elif self.path == "/api/feedback":
data = b"{}"
if self.feedback_path.exists():
data = self.feedback_path.read_bytes()
self.send_response(200)
self.send_header("Content-Type", "application/json")
self.send_header("Content-Length", str(len(data)))
self.end_headers()
self.wfile.write(data)
else:
self.send_error(404)
def do_POST(self) -> None:
if self.path == "/api/feedback":
length = int(self.headers.get("Content-Length", 0))
body = self.rfile.read(length)
try:
data = json.loads(body)
if not isinstance(data, dict) or "reviews" not in data:
raise ValueError("Expected JSON object with 'reviews' key")
self.feedback_path.write_text(json.dumps(data, indent=2) + "\n")
resp = b'{"ok":true}'
self.send_response(200)
except (json.JSONDecodeError, OSError, ValueError) as e:
resp = json.dumps({"error": str(e)}).encode()
self.send_response(500)
self.send_header("Content-Type", "application/json")
self.send_header("Content-Length", str(len(resp)))
self.end_headers()
self.wfile.write(resp)
else:
self.send_error(404)
def log_message(self, format: str, *args: object) -> None:
# Suppress request logging to keep terminal clean
pass
def main() -> None:
parser = argparse.ArgumentParser(description="Generate and serve eval review")
parser.add_argument("workspace", type=Path, help="Path to workspace directory")
parser.add_argument("--port", "-p", type=int, default=3117, help="Server port (default: 3117)")
parser.add_argument("--skill-name", "-n", type=str, default=None, help="Skill name for header")
parser.add_argument(
"--previous-workspace", type=Path, default=None,
help="Path to previous iteration's workspace (shows old outputs and feedback as context)",
)
parser.add_argument(
"--benchmark", type=Path, default=None,
help="Path to benchmark.json to show in the Benchmark tab",
)
parser.add_argument(
"--static", "-s", type=Path, default=None,
help="Write standalone HTML to this path instead of starting a server",
)
args = parser.parse_args()
workspace = args.workspace.resolve()
if not workspace.is_dir():
print(f"Error: {workspace} is not a directory", file=sys.stderr)
sys.exit(1)
runs = find_runs(workspace)
if not runs:
print(f"No runs found in {workspace}", file=sys.stderr)
sys.exit(1)
skill_name = args.skill_name or workspace.name.replace("-workspace", "")
feedback_path = workspace / "feedback.json"
previous: dict[str, dict] = {}
if args.previous_workspace:
previous = load_previous_iteration(args.previous_workspace.resolve())
benchmark_path = args.benchmark.resolve() if args.benchmark else None
benchmark = None
if benchmark_path and benchmark_path.exists():
try:
benchmark = json.loads(benchmark_path.read_text())
except (json.JSONDecodeError, OSError):
pass
if args.static:
html = generate_html(runs, skill_name, previous, benchmark)
args.static.parent.mkdir(parents=True, exist_ok=True)
args.static.write_text(html)
print(f"\n Static viewer written to: {args.static}\n")
sys.exit(0)
# Kill any existing process on the target port
port = args.port
_kill_port(port)
handler = partial(ReviewHandler, workspace, skill_name, feedback_path, previous, benchmark_path)
try:
server = HTTPServer(("127.0.0.1", port), handler)
except OSError:
# Port still in use after kill attempt — find a free one
server = HTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
url = f"http://localhost:{port}"
print(f"\n Eval Viewer")
print(f" ─────────────────────────────────")
print(f" URL: {url}")
print(f" Workspace: {workspace}")
print(f" Feedback: {feedback_path}")
if previous:
print(f" Previous: {args.previous_workspace} ({len(previous)} runs)")
if benchmark_path:
print(f" Benchmark: {benchmark_path}")
print(f"\n Press Ctrl+C to stop.\n")
webbrowser.open(url)
try:
server.serve_forever()
except KeyboardInterrupt:
print("\nStopped.")
server.server_close()
if __name__ == "__main__":
main()
================================================
FILE: .claude/skills/skill-creator/eval-viewer/viewer.html
================================================
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Eval Review</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
<script src="https://cdn.sheetjs.com/xlsx-0.20.3/package/dist/xlsx.full.min.js" integrity="sha384-EnyY0/GSHQGSxSgMwaIPzSESbqoOLSexfnSMN2AP+39Ckmn92stwABZynq1JyzdT" crossorigin="anonymous"></script>
<style>
:root {
--bg: #faf9f5;
--surface: #ffffff;
--border: #e8e6dc;
--text: #141413;
--text-muted: #b0aea5;
--accent: #d97757;
--accent-hover: #c4613f;
--green: #788c5d;
--green-bg: #eef2e8;
--red: #c44;
--red-bg: #fceaea;
--header-bg: #141413;
--header-text: #faf9f5;
--radius: 6px;
}
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
font-family: 'Lora', Georgia, serif;
background: var(--bg);
color: var(--text);
height: 100vh;
display: flex;
flex-direction: column;
}
/* ---- Header ---- */
.header {
background: var(--header-bg);
color: var(--header-text);
padding: 1rem 2rem;
display: flex;
justify-content: space-between;
align-items: center;
flex-shrink: 0;
}
.header h1 {
font-family: 'Poppins', sans-serif;
font-size: 1.25rem;
font-weight: 600;
}
.header .instructions {
font-size: 0.8rem;
opacity: 0.7;
margin-top: 0.25rem;
}
.header .progress {
font-size: 0.875rem;
opacity: 0.8;
text-align: right;
}
/* ---- Main content ---- */
.main {
flex: 1;
overflow-y: auto;
padding: 1.5rem 2rem;
display: flex;
flex-direction: column;
gap: 1.25rem;
}
/* ---- Sections ---- */
.section {
background: var(--surface);
border: 1px solid var(--border);
border-radius: var(--radius);
flex-shrink: 0;
}
.section-header {
font-family: 'Poppins', sans-serif;
padding: 0.75rem 1rem;
font-size: 0.75rem;
font-weight: 500;
text-transform: uppercase;
letter-spacing: 0.05em;
color: var(--text-muted);
border-bottom: 1px solid var(--border);
background: var(--bg);
}
.section-body {
padding: 1rem;
}
/* ---- Config badge ---- */
.config-badge {
display: inline-block;
padding: 0.2rem 0.625rem;
border-radius: 9999px;
font-family: 'Poppins', sans-serif;
font-size: 0.6875rem;
font-weight: 600;
text-transform: uppercase;
SYMBOL INDEX (620 symbols across 93 files)
FILE: .claude/skills/skill-creator/eval-viewer/generate_review.py
function get_mime_type (line 52) | def get_mime_type(path: Path) -> str:
function find_runs (line 60) | def find_runs(workspace: Path) -> list[dict]:
function _find_runs_recursive (line 68) | def _find_runs_recursive(root: Path, current: Path, runs: list[dict]) ->...
function build_run (line 85) | def build_run(root: Path, run_dir: Path) -> dict | None:
function embed_file (line 149) | def embed_file(path: Path) -> dict:
function load_previous_iteration (line 213) | def load_previous_iteration(workspace: Path) -> dict[str, dict]:
function generate_html (line 250) | def generate_html(
function _kill_port (line 288) | def _kill_port(port: int) -> None:
class ReviewHandler (line 308) | class ReviewHandler(BaseHTTPRequestHandler):
method __init__ (line 315) | def __init__(
method do_GET (line 332) | def do_GET(self) -> None:
method do_POST (line 361) | def do_POST(self) -> None:
method log_message (line 382) | def log_message(self, format: str, *args: object) -> None:
function main (line 387) | def main() -> None:
FILE: .claude/skills/skill-creator/scripts/aggregate_benchmark.py
function calculate_stats (line 45) | def calculate_stats(values: list[float]) -> dict:
function load_run_results (line 67) | def load_run_results(benchmark_dir: Path) -> dict:
function aggregate_results (line 176) | def aggregate_results(results: dict) -> dict:
function generate_benchmark (line 227) | def generate_benchmark(benchmark_dir: Path, skill_name: str = "", skill_...
function generate_markdown (line 281) | def generate_markdown(benchmark: dict) -> str:
function main (line 338) | def main():
FILE: .claude/skills/skill-creator/scripts/generate_report.py
function generate_html (line 16) | def generate_html(data: dict, auto_refresh: bool = False, skill_name: st...
function main (line 304) | def main():
FILE: .claude/skills/skill-creator/scripts/improve_description.py
function improve_description (line 19) | def improve_description(
function main (line 193) | def main():
FILE: .claude/skills/skill-creator/scripts/package_skill.py
function should_exclude (line 27) | def should_exclude(rel_path: Path) -> bool:
function package_skill (line 42) | def package_skill(skill_path, output_dir=None):
function main (line 111) | def main():
FILE: .claude/skills/skill-creator/scripts/quick_validate.py
function validate_skill (line 12) | def validate_skill(skill_path):
FILE: .claude/skills/skill-creator/scripts/run_eval.py
function find_project_root (line 22) | def find_project_root() -> Path:
function run_single_query (line 35) | def run_single_query(
function run_eval (line 184) | def run_eval(
function main (line 259) | def main():
FILE: .claude/skills/skill-creator/scripts/run_loop.py
function split_eval_set (line 26) | def split_eval_set(eval_set: list[dict], holdout: float, seed: int = 42)...
function run_loop (line 49) | def run_loop(
function main (line 248) | def main():
FILE: .claude/skills/skill-creator/scripts/utils.py
function parse_skill_md (line 7) | def parse_skill_md(skill_path: Path) -> tuple[str, str, str]:
FILE: cmd/server/main.go
function main (line 28) | func main() {
FILE: docs/docs.go
constant docTemplate (line 6) | docTemplate = `{
function init (line 2596) | func init() {
FILE: internal/api/handler/backup_handler.go
type BackupHandler (line 16) | type BackupHandler struct
method RegisterRoute (line 26) | func (h *BackupHandler) RegisterRoute(router *gin.RouterGroup) {
method DeleteBackup (line 48) | func (h *BackupHandler) DeleteBackup(ctx *gin.Context) {
method DownloadBackup (line 72) | func (h *BackupHandler) DownloadBackup(ctx *gin.Context) {
method GetBackupList (line 84) | func (h *BackupHandler) GetBackupList(ctx *gin.Context) {
method RenameBackup (line 102) | func (h *BackupHandler) RenameBackup(ctx *gin.Context) {
method RestoreBackup (line 129) | func (h *BackupHandler) RestoreBackup(ctx *gin.Context) {
method UploadBackup (line 149) | func (h *BackupHandler) UploadBackup(ctx *gin.Context) {
method CreateBackup (line 169) | func (h *BackupHandler) CreateBackup(ctx *gin.Context) {
method SaveBackupSnapshotsSetting (line 195) | func (h *BackupHandler) SaveBackupSnapshotsSetting(ctx *gin.Context) {
method GetBackupSnapshotsSetting (line 228) | func (h *BackupHandler) GetBackupSnapshotsSetting(ctx *gin.Context) {
method BackupSnapshotsList (line 249) | func (h *BackupHandler) BackupSnapshotsList(ctx *gin.Context) {
function NewBackupHandler (line 20) | func NewBackupHandler(backupService *backup.BackupService) *BackupHandler {
FILE: internal/api/handler/dst_api_handler.go
type DstHomeDetailParam (line 16) | type DstHomeDetailParam struct
function NewDstHomeDetailParam (line 21) | func NewDstHomeDetailParam() *DstHomeDetailParam {
type DstHomeServerParam (line 25) | type DstHomeServerParam struct
function NewDstHomeServerParam (line 41) | func NewDstHomeServerParam() *DstHomeServerParam {
type DstApiHandler (line 45) | type DstApiHandler struct
method RegisterRoute (line 52) | func (h *DstApiHandler) RegisterRoute(router *gin.RouterGroup) {
method GetDstHomeServerList (line 65) | func (h *DstApiHandler) GetDstHomeServerList(c *gin.Context) {
method GetDstHomeDetailList (line 144) | func (h *DstApiHandler) GetDstHomeDetailList(c *gin.Context) {
method GetDstHomeServerList2 (line 183) | func (h *DstApiHandler) GetDstHomeServerList2(ctx *gin.Context) {
method GetDstHomeDetailList2 (line 229) | func (h *DstApiHandler) GetDstHomeDetailList2(ctx *gin.Context) {
method GiteeProxy (line 255) | func (h *DstApiHandler) GiteeProxy(c *gin.Context) {
function NewDstApiHandler (line 48) | func NewDstApiHandler() *DstApiHandler {
FILE: internal/api/handler/dst_config_handler.go
type DstConfigHandler (line 14) | type DstConfigHandler struct
method RegisterRoute (line 26) | func (h *DstConfigHandler) RegisterRoute(router *gin.RouterGroup) {
method GetDstConfig (line 39) | func (h *DstConfigHandler) GetDstConfig(ctx *gin.Context) {
method SaveDstConfig (line 69) | func (h *DstConfigHandler) SaveDstConfig(ctx *gin.Context) {
function NewDstConfigHandler (line 19) | func NewDstConfigHandler(dstConfig dstConfig.Config, archive *archive.Pa...
FILE: internal/api/handler/dst_map_handler.go
type DstMapHandler (line 21) | type DstMapHandler struct
method RegisterRoute (line 33) | func (d *DstMapHandler) RegisterRoute(router *gin.RouterGroup) {
method GenDstMap (line 49) | func (d *DstMapHandler) GenDstMap(ctx *gin.Context) {
method GetDstMapImage (line 96) | func (d *DstMapHandler) GetDstMapImage(ctx *gin.Context) {
method HasWalrusHutPlains (line 126) | func (d *DstMapHandler) HasWalrusHutPlains(ctx *gin.Context) {
method GetSessionFile (line 166) | func (d *DstMapHandler) GetSessionFile(ctx *gin.Context) {
method GetPlayerSessionFile (line 204) | func (d *DstMapHandler) GetPlayerSessionFile(ctx *gin.Context) {
function NewDstMapHandler (line 26) | func NewDstMapHandler(archiveResolver *archive.PathResolver, generator *...
function findLatestPlayerFile (line 248) | func findLatestPlayerFile(directory string) (string, error) {
function extractSessionPrefix (line 284) | func extractSessionPrefix(sessionFile string) string {
constant sessionPrefix (line 292) | sessionPrefix = "/save/session/"
function extractSessionID (line 295) | func extractSessionID(p string) string {
function findLatestMetaFile (line 308) | func findLatestMetaFile(directory string) (string, error) {
FILE: internal/api/handler/game_config_handler.go
type GameConfigHandler (line 13) | type GameConfigHandler struct
method RegisterRoute (line 23) | func (p *GameConfigHandler) RegisterRoute(router *gin.RouterGroup) {
method GetClusterIni (line 45) | func (p *GameConfigHandler) GetClusterIni(ctx *gin.Context) {
method SaveClusterIni (line 72) | func (p *GameConfigHandler) SaveClusterIni(ctx *gin.Context) {
method GetAdminList (line 107) | func (p *GameConfigHandler) GetAdminList(ctx *gin.Context) {
method SaveAdminList (line 134) | func (p *GameConfigHandler) SaveAdminList(ctx *gin.Context) {
method GetBlackList (line 171) | func (p *GameConfigHandler) GetBlackList(ctx *gin.Context) {
method SaveBlackList (line 198) | func (p *GameConfigHandler) SaveBlackList(ctx *gin.Context) {
method GetWhithList (line 235) | func (p *GameConfigHandler) GetWhithList(ctx *gin.Context) {
method SaveWhithList (line 262) | func (p *GameConfigHandler) SaveWhithList(ctx *gin.Context) {
method GetConfig (line 300) | func (p *GameConfigHandler) GetConfig(ctx *gin.Context) {
method SaveConfig (line 318) | func (p *GameConfigHandler) SaveConfig(ctx *gin.Context) {
function NewGameConfigHandler (line 17) | func NewGameConfigHandler(gameConfig *gameConfig.GameConfig) *GameConfig...
FILE: internal/api/handler/game_handler.go
type GameHandler (line 25) | type GameHandler struct
method RegisterRoute (line 43) | func (p *GameHandler) RegisterRoute(router *gin.RouterGroup) {
method Stop (line 63) | func (p *GameHandler) Stop(ctx *gin.Context) {
method Start (line 88) | func (p *GameHandler) Start(ctx *gin.Context) {
method StartAll (line 111) | func (p *GameHandler) StartAll(ctx *gin.Context) {
method StopAll (line 129) | func (p *GameHandler) StopAll(ctx *gin.Context) {
method Command (line 148) | func (p *GameHandler) Command(ctx *gin.Context) {
method Status (line 196) | func (p *GameHandler) Status(ctx *gin.Context) {
method GameArchive (line 292) | func (p *GameHandler) GameArchive(ctx *gin.Context) {
method SystemInfoStream (line 310) | func (p *GameHandler) SystemInfoStream(ctx *gin.Context) {
method sendSystemInfoData (line 340) | func (p *GameHandler) sendSystemInfoData(ctx *gin.Context, clusterName...
method GetSystemInfo (line 371) | func (p *GameHandler) GetSystemInfo(clusterName string) *SystemInfo {
function NewGameHandler (line 33) | func NewGameHandler(process game.Process, levelService *level.LevelServi...
type LevelStatus (line 176) | type LevelStatus struct
type SystemInfo (line 362) | type SystemInfo struct
FILE: internal/api/handler/kv_handler.go
type KvHandler (line 13) | type KvHandler struct
method RegisterRoute (line 23) | func (i *KvHandler) RegisterRoute(router *gin.RouterGroup) {
method GetKv (line 35) | func (i *KvHandler) GetKv(ctx *gin.Context) {
method SaveKv (line 58) | func (i *KvHandler) SaveKv(ctx *gin.Context) {
function NewKvHandler (line 17) | func NewKvHandler(db *gorm.DB) *KvHandler {
FILE: internal/api/handler/level_handler.go
type LevelHandler (line 13) | type LevelHandler struct
method RegisterRoute (line 23) | func (h *LevelHandler) RegisterRoute(router *gin.RouterGroup) {
method GetLevelList (line 41) | func (h *LevelHandler) GetLevelList(ctx *gin.Context) {
method UpdateLevel (line 61) | func (h *LevelHandler) UpdateLevel(ctx *gin.Context) {
method CreateLevel (line 100) | func (h *LevelHandler) CreateLevel(ctx *gin.Context) {
method DeleteLevel (line 139) | func (h *LevelHandler) DeleteLevel(ctx *gin.Context) {
method UpdateLevels (line 169) | func (h *LevelHandler) UpdateLevels(ctx *gin.Context) {
function NewLevelHandler (line 17) | func NewLevelHandler(levelService *level.LevelService) *LevelHandler {
FILE: internal/api/handler/level_log_handler.go
type LevelLogHandler (line 22) | type LevelLogHandler struct
method RegisterRoute (line 31) | func (h *LevelLogHandler) RegisterRoute(router *gin.RouterGroup) {
method Stream (line 47) | func (h *LevelLogHandler) Stream(c *gin.Context) {
method GetServerLog (line 124) | func (h *LevelLogHandler) GetServerLog(ctx *gin.Context) {
method DownloadServerLog (line 163) | func (h *LevelLogHandler) DownloadServerLog(ctx *gin.Context) {
function NewLevelLogHandler (line 26) | func NewLevelLogHandler(archive *archive.PathResolver) *LevelLogHandler {
function writeSSE (line 177) | func writeSSE(w io.Writer, event, data string) {
type FileLogReader (line 193) | type FileLogReader struct
method Snapshot (line 203) | func (r *FileLogReader) Snapshot(
method Follow (line 268) | func (r *FileLogReader) Follow(
function NewFileLogReader (line 197) | func NewFileLogReader() *FileLogReader {
FILE: internal/api/handler/login_handler.go
constant PasswordPath (line 17) | PasswordPath = "./password.txt"
type LoginHandler (line 20) | type LoginHandler struct
method RegisterRoute (line 30) | func (h *LoginHandler) RegisterRoute(router *gin.RouterGroup) {
method GetUserInfo (line 48) | func (h *LoginHandler) GetUserInfo(ctx *gin.Context) {
method Login (line 65) | func (h *LoginHandler) Login(ctx *gin.Context) {
method Logout (line 89) | func (h *LoginHandler) Logout(ctx *gin.Context) {
method ChangePassword (line 107) | func (h *LoginHandler) ChangePassword(ctx *gin.Context) {
method UpdateUserInfo (line 133) | func (h *LoginHandler) UpdateUserInfo(ctx *gin.Context) {
method InitFirst (line 179) | func (h *LoginHandler) InitFirst(ctx *gin.Context) {
method CheckIsFirst (line 227) | func (h *LoginHandler) CheckIsFirst(ctx *gin.Context) {
function NewLoginHandler (line 24) | func NewLoginHandler(loginService *login.LoginService) *LoginHandler {
FILE: internal/api/handler/mod_handler.go
type ModHandler (line 16) | type ModHandler struct
method RegisterRoute (line 28) | func (h *ModHandler) RegisterRoute(router *gin.RouterGroup) {
method SearchModList (line 58) | func (h *ModHandler) SearchModList(ctx *gin.Context) {
method GetModInfo (line 83) | func (h *ModHandler) GetModInfo(ctx *gin.Context) {
method GetMyModList (line 122) | func (h *ModHandler) GetMyModList(ctx *gin.Context) {
method UpdateAllModInfos (line 163) | func (h *ModHandler) UpdateAllModInfos(ctx *gin.Context) {
method DeleteMod (line 186) | func (h *ModHandler) DeleteMod(ctx *gin.Context) {
method DeleteSetupWorkshop (line 207) | func (h *ModHandler) DeleteSetupWorkshop(ctx *gin.Context) {
method GetModInfoFile (line 228) | func (h *ModHandler) GetModInfoFile(ctx *gin.Context) {
method SaveModInfoFile (line 249) | func (h *ModHandler) SaveModInfoFile(ctx *gin.Context) {
method UpdateMod (line 276) | func (h *ModHandler) UpdateMod(ctx *gin.Context) {
method AddModInfoFile (line 325) | func (h *ModHandler) AddModInfoFile(ctx *gin.Context) {
method GetUgcModAcf (line 370) | func (h *ModHandler) GetUgcModAcf(ctx *gin.Context) {
method DeleteUgcModFile (line 394) | func (h *ModHandler) DeleteUgcModFile(ctx *gin.Context) {
function NewModHandler (line 21) | func NewModHandler(modService *mod.ModService, dstConfig dstConfig.Confi...
FILE: internal/api/handler/player_handler.go
type PlayerHandler (line 12) | type PlayerHandler struct
method GetPlayerList (line 24) | func (p *PlayerHandler) GetPlayerList(ctx *gin.Context) {
method GetPlayerAllList (line 35) | func (p *PlayerHandler) GetPlayerAllList(ctx *gin.Context) {
method RegisterRoute (line 46) | func (p *PlayerHandler) RegisterRoute(router *gin.RouterGroup) {
function NewPlayerHandler (line 17) | func NewPlayerHandler(playerService *player.PlayerService, gameProcess g...
FILE: internal/api/handler/player_log_handler.go
type PlayerLogHandler (line 15) | type PlayerLogHandler struct
method RegisterRoute (line 22) | func (l *PlayerLogHandler) RegisterRoute(router *gin.RouterGroup) {
method PlayerLogQueryPage (line 42) | func (l *PlayerLogHandler) PlayerLogQueryPage(ctx *gin.Context) {
method DeletePlayerLog (line 125) | func (l *PlayerLogHandler) DeletePlayerLog(ctx *gin.Context) {
method DeletePlayerLogAll (line 150) | func (l *PlayerLogHandler) DeletePlayerLogAll(ctx *gin.Context) {
function NewPlayerLogHandler (line 18) | func NewPlayerLogHandler() *PlayerLogHandler {
FILE: internal/api/handler/statistics_handler.go
type StatisticsHandler (line 16) | type StatisticsHandler struct
method RegisterRoute (line 54) | func (s *StatisticsHandler) RegisterRoute(router *gin.RouterGroup) {
method CountActiveUser (line 65) | func (s *StatisticsHandler) CountActiveUser(ctx *gin.Context) {
method CountLoginUser (line 136) | func (s *StatisticsHandler) CountLoginUser(ctx *gin.Context) {
method TopUserActiveTimes (line 159) | func (s *StatisticsHandler) TopUserActiveTimes(ctx *gin.Context) {
method TopUserLoginimes (line 190) | func (s *StatisticsHandler) TopUserLoginimes(ctx *gin.Context) {
method TopDeaths (line 218) | func (s *StatisticsHandler) TopDeaths(ctx *gin.Context) {
method CountRoleRate (line 246) | func (s *StatisticsHandler) CountRoleRate(ctx *gin.Context) {
method LastThNRegenerate (line 286) | func (s *StatisticsHandler) LastThNRegenerate(ctx *gin.Context) {
function NewStatisticsHandler (line 19) | func NewStatisticsHandler() *StatisticsHandler {
type UserStatistics (line 23) | type UserStatistics struct
type TopStatistics (line 28) | type TopStatistics struct
type RoleRateStatistics (line 39) | type RoleRateStatistics struct
function findStamp (line 44) | func findStamp(stamp int64, data []UserStatistics) *UserStatistics {
function startDate (line 270) | func startDate(ctx *gin.Context) time.Time {
function endDate (line 278) | func endDate(ctx *gin.Context) time.Time {
FILE: internal/api/handler/update.go
type UpdateHandler (line 13) | type UpdateHandler struct
method RegisterRoute (line 23) | func (h *UpdateHandler) RegisterRoute(router *gin.RouterGroup) {
method Update (line 33) | func (h *UpdateHandler) Update(ctx *gin.Context) {
function NewUpdateHandler (line 17) | func NewUpdateHandler(update update.Update) *UpdateHandler {
FILE: internal/api/router.go
function NewRoute (line 32) | func NewRoute(cfg *config.Config, db *gorm.DB) *gin.Engine {
function initCollectors (line 53) | func initCollectors(archive *archive.PathResolver, dstConfigService dstC...
function RegisterStaticFile (line 64) | func RegisterStaticFile(app *gin.Engine) {
function Register (line 87) | func Register(cfg *config.Config, db *gorm.DB, router *gin.RouterGroup) {
FILE: internal/collect/collect.go
type Collect (line 18) | type Collect struct
method Stop (line 44) | func (c *Collect) Stop() {
method ReCollect (line 48) | func (c *Collect) ReCollect(baseLogPath, clusterName string) {
method StartCollect (line 62) | func (c *Collect) StartCollect() {
method parseSpawnRequestLog (line 82) | func (c *Collect) parseSpawnRequestLog(text string) {
method parseRegenerateLog (line 111) | func (c *Collect) parseRegenerateLog(text string) {
method parseNewIncomingLog (line 124) | func (c *Collect) parseNewIncomingLog(lines []string) {
method tailServeLog (line 213) | func (c *Collect) tailServeLog(fileName string) {
method parseChatLog (line 271) | func (c *Collect) parseChatLog(text string) {
method parseSay (line 303) | func (c *Collect) parseSay(text string) {
method parseResurrect (line 345) | func (c *Collect) parseResurrect(text string) {
method parseDeath (line 349) | func (c *Collect) parseDeath(text string) {
method parseLeave (line 421) | func (c *Collect) parseLeave(text string) {
method parseJoin (line 425) | func (c *Collect) parseJoin(text string) {
method tailServerChatLog (line 466) | func (c *Collect) tailServerChatLog(fileName string) {
method getSpawnRole (line 501) | func (c *Collect) getSpawnRole(name string) *model.Spawn {
method getConnectInfo (line 507) | func (c *Collect) getConnectInfo(name string) *model.Connect {
method parseAnnouncement (line 513) | func (c *Collect) parseAnnouncement(text string) {
function NewCollect (line 27) | func NewCollect(baseLogPath string, clusterName string) *Collect {
FILE: internal/collect/collect_map.go
type CollectMap (line 10) | type CollectMap struct
method AddNewCollect (line 21) | func (cm *CollectMap) AddNewCollect(clusterName string, baseLogPath st...
method RemoveCollect (line 30) | func (cm *CollectMap) RemoveCollect(clusterName string) {
function NewCollectMap (line 15) | func NewCollectMap() *CollectMap {
FILE: internal/config/config.go
type Config (line 10) | type Config struct
constant ConfigPath (line 28) | ConfigPath = "./config.yml"
constant DefaultPort (line 29) | DefaultPort = "8083"
function Load (line 34) | func Load() *Config {
FILE: internal/database/sqlite.go
function InitDB (line 15) | func InitDB(config *config.Config) *gorm.DB {
FILE: internal/middleware/auth.go
function apiFilter (line 17) | func apiFilter(s []string, str string) bool {
function Authentication (line 29) | func Authentication(loginService *login.LoginService) gin.HandlerFunc {
function SseHeadersMiddleware (line 53) | func SseHeadersMiddleware() gin.HandlerFunc {
FILE: internal/middleware/cluster.go
constant clusterNameKey (line 12) | clusterNameKey = "cluster_name"
constant dstConfigKey (line 13) | dstConfigKey = "dst_config"
function ClusterMiddleware (line 17) | func ClusterMiddleware(dstConfigService dstConfig.Config) gin.HandlerFunc {
FILE: internal/middleware/error.go
function Recover (line 11) | func Recover(c *gin.Context) {
function errorToString (line 34) | func errorToString(r interface{}) string {
FILE: internal/middleware/start_before.go
function StartBeforeMiddleware (line 15) | func StartBeforeMiddleware(archive *archive.PathResolver, levelConfigUti...
function copyOsFile (line 22) | func copyOsFile() {
function customcommandsFile (line 26) | func customcommandsFile(ctx *gin.Context, archive *archive.PathResolver,...
function makeRunVersion (line 39) | func makeRunVersion(ctx *gin.Context, archive *archive.PathResolver, lev...
FILE: internal/model/LogRecord.go
type Action (line 5) | type Action
constant RUN (line 8) | RUN Action = iota
constant STOP (line 9) | STOP
constant NORMAL (line 10) | NORMAL
type LogRecord (line 13) | type LogRecord struct
FILE: internal/model/announce.go
type Announce (line 5) | type Announce struct
FILE: internal/model/autoCheck.go
type AutoCheck (line 5) | type AutoCheck struct
FILE: internal/model/backup.go
type Backup (line 5) | type Backup struct
FILE: internal/model/backupSnapshot.go
type BackupSnapshot (line 5) | type BackupSnapshot struct
FILE: internal/model/cluster.go
type Cluster (line 5) | type Cluster struct
FILE: internal/model/connect.go
type Connect (line 5) | type Connect struct
FILE: internal/model/jobTask.go
type JobTask (line 5) | type JobTask struct
FILE: internal/model/kv.go
type KV (line 5) | type KV struct
FILE: internal/model/modInfo.go
type ModInfo (line 5) | type ModInfo struct
FILE: internal/model/modKv.go
type ModKV (line 5) | type ModKV struct
FILE: internal/model/playerLog.go
type PlayerLog (line 5) | type PlayerLog struct
FILE: internal/model/regenerate.go
type Regenerate (line 5) | type Regenerate struct
FILE: internal/model/spawnRole.go
type Spawn (line 5) | type Spawn struct
FILE: internal/model/webLink.go
type WebLink (line 5) | type WebLink struct
FILE: internal/pkg/context/cluster.go
constant clusterNameKey (line 10) | clusterNameKey = "cluster_name"
constant dstConfigKey (line 11) | dstConfigKey = "dst_config"
function GetClusterName (line 16) | func GetClusterName(c *gin.Context) string {
function GetDstConfig (line 27) | func GetDstConfig(c *gin.Context) *dstConfig.DstConfig {
FILE: internal/pkg/response/response.go
type Response (line 9) | type Response struct
type Page (line 15) | type Page struct
function OkWithData (line 24) | func OkWithData(data interface{}, ctx *gin.Context) {
function OkWithMessage (line 33) | func OkWithMessage(message string, ctx *gin.Context) {
function FailWithMessage (line 42) | func FailWithMessage(message string, ctx *gin.Context) {
function OkWithPage (line 51) | func OkWithPage(data interface{}, total, page, size int64, ctx *gin.Cont...
FILE: internal/pkg/utils/collectionUtils/collectionUtils.go
function ToSet (line 5) | func ToSet(list []string) []string {
FILE: internal/pkg/utils/dateUtils.go
function Bod (line 7) | func Bod(t time.Time) time.Time {
function Truncate (line 12) | func Truncate(t time.Time) time.Time {
function Get_stamp_day (line 16) | func Get_stamp_day(start_time, end_time time.Time) (args []int64) {
function Get_stamp_month (line 34) | func Get_stamp_month(start_time, end_time time.Time) (args []int64) {
FILE: internal/pkg/utils/dstUtils/dstUtils.go
function EscapePath (line 15) | func EscapePath(path string) string {
function WorkshopIds (line 27) | func WorkshopIds(content string) []string {
function DedicatedServerModsSetup (line 42) | func DedicatedServerModsSetup(dstConfig dstConfig.DstConfig, modConfig s...
function GetModSetup2 (line 76) | func GetModSetup2(dstConfig dstConfig.DstConfig) string {
function ParseTemplate (line 80) | func ParseTemplate(templatePath string, data interface{}) string {
FILE: internal/pkg/utils/envUtils.go
function IsWindow (line 5) | func IsWindow() bool {
FILE: internal/pkg/utils/fileUtils/fileUtls.go
function Exists (line 16) | func Exists(path string) bool {
function IsDir (line 24) | func IsDir(path string) bool {
function IsFile (line 32) | func IsFile(path string) bool {
function CreateDir (line 36) | func CreateDir(dirName string) bool {
function CreateFile (line 51) | func CreateFile(fileName string) error {
function WriterTXT (line 60) | func WriterTXT(filename, content string) error {
function ReadLnFile (line 91) | func ReadLnFile(filePath string) ([]string, error) {
function ReadFile (line 114) | func ReadFile(filePath string) (string, error) {
function WriterLnFile (line 124) | func WriterLnFile(filename string, lines []string) error {
function ReverseRead (line 149) | func ReverseRead(filename string, lineNum uint) ([]string, error) {
function DeleteFile (line 187) | func DeleteFile(path string) error {
function DeleteDir (line 196) | func DeleteDir(path string) error {
function Rename (line 211) | func Rename(filePath, newName string) (err error) {
function FindWorldDirs (line 216) | func FindWorldDirs(rootPath string) ([]string, error) {
function ListDirectories (line 238) | func ListDirectories(root string) ([]string, error) {
function CreateFileIfNotExists (line 270) | func CreateFileIfNotExists(path string) error {
function CreateDirIfNotExists (line 302) | func CreateDirIfNotExists(filepath string) {
function Copy (line 308) | func Copy(srcPath, outFileDir string) error {
function copyHelper (line 349) | func copyHelper(srcPath, outFileDir string) error {
FILE: internal/pkg/utils/luaUtils/luaUtils.go
type Clock (line 11) | type Clock struct
type Segs (line 20) | type Segs struct
type IsRandom (line 26) | type IsRandom struct
type Lengths (line 33) | type Lengths struct
type Seasons (line 40) | type Seasons struct
type Data (line 52) | type Data struct
function mapTableToStruct (line 57) | func mapTableToStruct(table *lua.LTable, v reflect.Value) error {
function luaValueToValue (line 115) | func luaValueToValue(lv lua.LValue, v reflect.Value) error {
function mapTableToMap (line 177) | func mapTableToMap(table *lua.LTable, m map[string]interface{}) {
function LuaTable2Map (line 199) | func LuaTable2Map(script string) (map[string]interface{}, error) {
function LuaTable2Struct (line 214) | func LuaTable2Struct(script string, v reflect.Value) error {
FILE: internal/pkg/utils/shellUtils/shellUitls.go
function ExecuteCommandInWin (line 12) | func ExecuteCommandInWin(command string) (string, error) {
function ExecuteCommand (line 22) | func ExecuteCommand(command string) (string, error) {
function Shell (line 40) | func Shell(cmd string) (res string, err error) {
type Charset (line 76) | type Charset
constant UTF8 (line 79) | UTF8 = Charset("UTF-8")
constant GB18030 (line 80) | GB18030 = Charset("GB18030")
function ConvertByte2String (line 83) | func ConvertByte2String(byte []byte, charset Charset) string {
function Chmod (line 98) | func Chmod(filePath string) error {
FILE: internal/pkg/utils/systemUtils/SystemUtils.go
function Home (line 25) | func Home() (string, error) {
function HomePath (line 41) | func HomePath() string {
function homeUnix (line 49) | func homeUnix() (string, error) {
function homeWindows (line 71) | func homeWindows() (string, error) {
type HostInfo (line 85) | type HostInfo struct
type MemInfo (line 92) | type MemInfo struct
type CpuInfo (line 99) | type CpuInfo struct
type DiskInfo (line 106) | type DiskInfo struct
type deviceInfo (line 109) | type deviceInfo struct
function GetDiskInfo (line 119) | func GetDiskInfo() *DiskInfo {
function GetCpuInfo (line 150) | func GetCpuInfo() *CpuInfo {
function GetHostInfo (line 162) | func GetHostInfo() *HostInfo {
function GetMemInfo (line 173) | func GetMemInfo() *MemInfo {
function GetPublicIP (line 189) | func GetPublicIP() (string, error) {
FILE: internal/pkg/utils/zip/zip.go
function zipDir (line 13) | func zipDir(dirPath string, zipWriter *zip.Writer, basePath string) error {
function Zip (line 53) | func Zip(sourceDir, targetZip string) error {
function Unzip (line 73) | func Unzip(zipFile, destDir string) error {
function Unzip3 (line 117) | func Unzip3(source, destination string) error {
function Unzip2 (line 215) | func Unzip2(zipFile, destDir, newName string) error {
FILE: internal/service/archive/path_resolver.go
type PathResolver (line 16) | type PathResolver struct
method KleiBasePath (line 33) | func (r *PathResolver) KleiBasePath(clusterName string) string {
method ClusterPath (line 64) | func (r *PathResolver) ClusterPath(cluster string) string {
method LevelPath (line 71) | func (r *PathResolver) LevelPath(cluster, level string) string {
method DataFilePath (line 78) | func (r *PathResolver) DataFilePath(
method ClusterIniPath (line 90) | func (r *PathResolver) ClusterIniPath(clusterName string) string {
method ClusterTokenPath (line 94) | func (r *PathResolver) ClusterTokenPath(clusterName string) string {
method AdminlistPath (line 98) | func (r *PathResolver) AdminlistPath(clusterName string) string {
method BlocklistPath (line 102) | func (r *PathResolver) BlocklistPath(clusterName string) string {
method BlacklistPath (line 105) | func (r *PathResolver) BlacklistPath(clusterName string) string {
method WhitelistPath (line 109) | func (r *PathResolver) WhitelistPath(clusterName string) string {
method ModoverridesPath (line 113) | func (r *PathResolver) ModoverridesPath(clusterName, levelName string)...
method LeveldataoverridePath (line 117) | func (r *PathResolver) LeveldataoverridePath(clusterName, levelName st...
method ServerIniPath (line 121) | func (r *PathResolver) ServerIniPath(clusterName string, levelName str...
method ServerLogPath (line 125) | func (r *PathResolver) ServerLogPath(cluster string, levelName string)...
method GetUgcWorkshopModPath (line 129) | func (r *PathResolver) GetUgcWorkshopModPath(clusterName, levelName, w...
method GetUgcModPath (line 141) | func (r *PathResolver) GetUgcModPath(clusterName string) string {
method GetUgcAcfPath (line 152) | func (r *PathResolver) GetUgcAcfPath(clusterName, levelName string) st...
method GetModSetup (line 164) | func (r *PathResolver) GetModSetup(clusterName string) string {
method IsBeta (line 173) | func (r *PathResolver) IsBeta(clusterName string) bool {
method GetLocalDstVersion (line 181) | func (r *PathResolver) GetLocalDstVersion(clusterName string) (int64, ...
method GetLastDstVersion (line 194) | func (r *PathResolver) GetLastDstVersion() (int64, error) {
method dstVersion (line 212) | func (r *PathResolver) dstVersion(versionTextPath string) (int64, erro...
function NewPathResolver (line 21) | func NewPathResolver(dstConfig dstConfig.Config) (*PathResolver, error) {
FILE: internal/service/backup/backup_service.go
type BackupService (line 24) | type BackupService struct
method GetBackupList (line 52) | func (b *BackupService) GetBackupList(clusterName string) []BackupInfo {
method RenameBackup (line 90) | func (b *BackupService) RenameBackup(ctx *gin.Context, fileName, newNa...
method DeleteBackup (line 104) | func (b *BackupService) DeleteBackup(ctx *gin.Context, fileNames []str...
method RestoreBackup (line 126) | func (b *BackupService) RestoreBackup(ctx *gin.Context, backupName str...
method CreateBackup (line 163) | func (b *BackupService) CreateBackup(clusterName, backupName string) {
method DownloadBackup (line 197) | func (b *BackupService) DownloadBackup(c *gin.Context) {
method UploadBackup (line 221) | func (b *BackupService) UploadBackup(c *gin.Context) {
method ScheduleBackupSnapshots (line 248) | func (b *BackupService) ScheduleBackupSnapshots() {
method CreateSnapshotBackup (line 302) | func (b *BackupService) CreateSnapshotBackup(prefix, clusterName strin...
method DeleteBackupSnapshots (line 322) | func (b *BackupService) DeleteBackupSnapshots(prefix string, maxSnapsh...
method backupPath (line 351) | func (b *BackupService) backupPath() string {
method GenGameBackUpName (line 369) | func (b *BackupService) GenGameBackUpName(clusterName string) string {
method GenBackUpSnapshotName (line 376) | func (b *BackupService) GenBackUpSnapshotName(prefix, clusterName stri...
type BackupInfo (line 30) | type BackupInfo struct
type BackupSnapshot (line 37) | type BackupSnapshot struct
function NewBackupService (line 44) | func NewBackupService(archive *archive.PathResolver, dstConfig dstConfig...
function sumMd5 (line 292) | func sumMd5(filePath string) string {
FILE: internal/service/dstConfig/dst_config.go
type DstConfig (line 3) | type DstConfig struct
type Config (line 20) | type Config interface
FILE: internal/service/dstConfig/factory.go
function NewDstConfig (line 7) | func NewDstConfig(db *gorm.DB) Config {
FILE: internal/service/dstConfig/one_dst_config.go
constant dst_config_path (line 14) | dst_config_path = "./dst_config"
type OneDstConfig (line 16) | type OneDstConfig struct
method kleiBasePath (line 26) | func (o *OneDstConfig) kleiBasePath(config DstConfig) string {
method GetDstConfig (line 55) | func (o *OneDstConfig) GetDstConfig(clusterName string) (DstConfig, er...
method SaveDstConfig (line 173) | func (o *OneDstConfig) SaveDstConfig(clusterName string, dstConfig Dst...
function NewOneDstConfig (line 20) | func NewOneDstConfig(db *gorm.DB) OneDstConfig {
FILE: internal/service/dstMap/dst_map.go
type Color (line 15) | type Color struct
type DSTMapGenerator (line 21) | type DSTMapGenerator struct
method ReadSaveFile (line 126) | func (g *DSTMapGenerator) ReadSaveFile(filePath string) (string, error) {
method DecodeMapData (line 176) | func (g *DSTMapGenerator) DecodeMapData(tilesBase64 string) ([]int, er...
method CreateMapImage (line 226) | func (g *DSTMapGenerator) CreateMapImage(tileIds []int, width, height,...
method GenerateMap (line 265) | func (g *DSTMapGenerator) GenerateMap(saveFilePath, outputPath string,...
function NewDSTMapGenerator (line 27) | func NewDSTMapGenerator() *DSTMapGenerator {
function RestoreTileId (line 144) | func RestoreTileId(original int, colors map[int]Color) int {
function ExtractDimensions (line 299) | func ExtractDimensions(filePath string) (int, int, error) {
function main (line 335) | func main() {
FILE: internal/service/dstPath/dst_path.go
type DstPath (line 11) | type DstPath interface
function EscapePath (line 15) | func EscapePath(path string) string {
function GetBaseUpdateCmd (line 27) | func GetBaseUpdateCmd(cluster dstConfig.DstConfig) string {
FILE: internal/service/dstPath/linux_dst.go
type LinuxDstPath (line 10) | type LinuxDstPath struct
method UpdateCommand (line 18) | func (d LinuxDstPath) UpdateCommand(clusterName string) (string, error) {
function NewLinuxDstPath (line 14) | func NewLinuxDstPath(dstConfig dstConfig.Config) *LinuxDstPath {
FILE: internal/service/dstPath/window_dst.go
type WindowDstPath (line 8) | type WindowDstPath struct
method UpdateCommand (line 16) | func (d WindowDstPath) UpdateCommand(clusterName string) (string, erro...
function NewWindowDst (line 12) | func NewWindowDst(dstConfig dstConfig.Config) *WindowDstPath {
FILE: internal/service/game/factory.go
function NewGame (line 9) | func NewGame(dstConfig dstConfig.Config, levelConfigUtils *levelConfig.L...
FILE: internal/service/game/linux_process.go
type LinuxProcess (line 14) | type LinuxProcess struct
method SessionName (line 27) | func (p *LinuxProcess) SessionName(clusterName, levelName string) stri...
method Start (line 31) | func (p *LinuxProcess) Start(clusterName, levelName string) error {
method launchLevel (line 42) | func (p *LinuxProcess) launchLevel(clusterName, levelName string) error {
method shutdownLevel (line 87) | func (p *LinuxProcess) shutdownLevel(clusterName, levelName string) er...
method killLevel (line 101) | func (p *LinuxProcess) killLevel(clusterName, level string) error {
method Stop (line 112) | func (p *LinuxProcess) Stop(clusterName, levelName string) error {
method stop (line 119) | func (p *LinuxProcess) stop(clusterName, levelName string) error {
method StartAll (line 142) | func (p *LinuxProcess) StartAll(clusterName string) error {
method StopAll (line 177) | func (p *LinuxProcess) StopAll(clusterName string) error {
method stopAll (line 184) | func (p *LinuxProcess) stopAll(clusterName string) error {
method Status (line 210) | func (p *LinuxProcess) Status(clusterName, levelName string) (bool, er...
method Command (line 220) | func (p *LinuxProcess) Command(clusterName, levelName, command string)...
method PsAuxSpecified (line 226) | func (p *LinuxProcess) PsAuxSpecified(clusterName, levelName string) D...
function NewLinuxProcess (line 20) | func NewLinuxProcess(dstConfig dstConfig.Config, levelConfigUtils *level...
constant ClearScreenCmd (line 250) | ClearScreenCmd = "screen -wipe "
function ClearScreen (line 253) | func ClearScreen() bool {
FILE: internal/service/game/process.go
type DstPsAux (line 3) | type DstPsAux struct
type Process (line 10) | type Process interface
FILE: internal/service/game/windowGameCli.go
type ClusterContainer (line 340) | type ClusterContainer struct
method StartLevel (line 352) | func (receiver *ClusterContainer) StartLevel(cluster, levelName string...
method StopLevel (line 372) | func (receiver *ClusterContainer) StopLevel(cluster, levelName string) {
method Send (line 381) | func (receiver *ClusterContainer) Send(cluster, levelName, message str...
method Status (line 389) | func (receiver *ClusterContainer) Status(cluster, levelName string) bo...
method MemUsage (line 398) | func (receiver *ClusterContainer) MemUsage(cluster, levelName string) ...
method CpuUsage (line 407) | func (receiver *ClusterContainer) CpuUsage(cluster, levelName string) ...
method Remove (line 416) | func (receiver *ClusterContainer) Remove(cluster, levelName string) {
function NewClusterContainer (line 345) | func NewClusterContainer() *ClusterContainer {
type LevelInstance (line 423) | type LevelInstance struct
method Status (line 458) | func (receiver *LevelInstance) Status() bool {
method Start (line 462) | func (receiver *LevelInstance) Start() {
method Stop (line 561) | func (receiver *LevelInstance) Stop() {
method Send (line 575) | func (receiver *LevelInstance) Send(cmd string) error {
method GetProcessMemInfo (line 593) | func (receiver *LevelInstance) GetProcessMemInfo() float64 {
method GetProcessCpuInfo (line 609) | func (receiver *LevelInstance) GetProcessCpuInfo() float64 {
function NewLevelInstance (line 440) | func NewLevelInstance(cluster, levelName string, bin int, steamcmd, dstS...
FILE: internal/service/game/window_process.go
type WindowProcess (line 11) | type WindowProcess struct
method SessionName (line 25) | func (p *WindowProcess) SessionName(clusterName, levelName string) str...
method Start (line 29) | func (p *WindowProcess) Start(clusterName, levelName string) error {
method Stop (line 41) | func (p *WindowProcess) Stop(clusterName, levelName string) error {
method StartAll (line 46) | func (p *WindowProcess) StartAll(clusterName string) error {
method StopAll (line 78) | func (p *WindowProcess) StopAll(clusterName string) error {
method Status (line 105) | func (p *WindowProcess) Status(clusterName, levelName string) (bool, e...
method Command (line 109) | func (p *WindowProcess) Command(clusterName, levelName, command string...
method PsAuxSpecified (line 114) | func (p *WindowProcess) PsAuxSpecified(clusterName, levelName string) ...
function NewWindowProcess (line 17) | func NewWindowProcess(dstConfig *dstConfig.Config, levelConfigUtils *lev...
FILE: internal/service/gameArchive/game_archive.go
type GameArchive (line 26) | type GameArchive struct
method GetGameArchive (line 101) | func (d *GameArchive) GetGameArchive(clusterName string) GameArchiveIn...
method GetPublicIP (line 202) | func (d *GameArchive) GetPublicIP() (string, error) {
method GetPrivateIP (line 241) | func (d *GameArchive) GetPrivateIP() (string, error) {
method getSubPathLevel (line 268) | func (d *GameArchive) getSubPathLevel(rootP, curPath string) int {
method FindLatestMetaFile (line 278) | func (d *GameArchive) FindLatestMetaFile(rootDir string) (string, erro...
method Snapshoot (line 350) | func (d *GameArchive) Snapshoot(clusterName string) Meta {
function NewGameArchive (line 32) | func NewGameArchive(gameConfig *gameConfig.GameConfig, level *level.Leve...
type GameArchiveInfo (line 40) | type GameArchiveInfo struct
type Clock (line 55) | type Clock struct
type Segs (line 64) | type Segs struct
type IsRandom (line 70) | type IsRandom struct
type Lengths (line 77) | type Lengths struct
type Seasons (line 84) | type Seasons struct
type Meta (line 96) | type Meta struct
function findLatestMetaFile (line 299) | func findLatestMetaFile(directory string) (string, error) {
FILE: internal/service/gameConfig/game_config.go
constant ClusterIniTemplate (line 17) | ClusterIniTemplate = "./static/template/cluster2.ini"
constant MasterServerIniTemplate (line 18) | MasterServerIniTemplate = "./static/template/master_server.ini"
constant CavesServerIniTemplate (line 19) | CavesServerIniTemplate = "./static/template/caves_server.ini"
constant ServerIniTemplate (line 20) | ServerIniTemplate = "./static/template/server.ini"
type ClusterIni (line 23) | type ClusterIni struct
type ServerIni (line 59) | type ServerIni struct
type ClusterIniConfig (line 76) | type ClusterIniConfig struct
type GameConfig (line 80) | type GameConfig struct
method GetClusterIniConfig (line 92) | func (p *GameConfig) GetClusterIniConfig(clusterName string) (ClusterI...
method SaveClusterIniConfig (line 107) | func (p *GameConfig) SaveClusterIniConfig(clusterName string, config *...
method GetClusterIni (line 119) | func (p *GameConfig) GetClusterIni(clusterName string) (ClusterIni, er...
method SaveClusterIni (line 214) | func (p *GameConfig) SaveClusterIni(clusterName string, clusterIni *Cl...
method GetClusterToken (line 220) | func (p *GameConfig) GetClusterToken(clusterName string) (string, erro...
method SaveClusterToken (line 224) | func (p *GameConfig) SaveClusterToken(clusterName string, token string...
method GetAdminList (line 228) | func (p *GameConfig) GetAdminList(clusterName string) ([]string, error) {
method GetBlackList (line 232) | func (p *GameConfig) GetBlackList(clusterName string) ([]string, error) {
method GetWhithList (line 236) | func (p *GameConfig) GetWhithList(clusterName string) ([]string, error) {
method SaveAdminList (line 240) | func (p *GameConfig) SaveAdminList(clusterName string, list []string) ...
method SaveBlackList (line 252) | func (p *GameConfig) SaveBlackList(clusterName string, list []string) ...
method SaveWhithList (line 264) | func (p *GameConfig) SaveWhithList(clusterName string, list []string) ...
method GetHomeConfig (line 294) | func (p *GameConfig) GetHomeConfig(clusterName string) (HomeConfigVO, ...
method SaveConfig (line 335) | func (p *GameConfig) SaveConfig(clusterName string, homeConfig HomeCon...
function NewGameConfig (line 85) | func NewGameConfig(archive *archive.PathResolver, levelConfigUtils *leve...
type HomeConfigVO (line 276) | type HomeConfigVO struct
FILE: internal/service/level/level.go
type LevelService (line 20) | type LevelService struct
method GetLevelList (line 38) | func (l *LevelService) GetLevelList(clusterName string) []levelConfig....
method GetLevel (line 105) | func (l *LevelService) GetLevel(clusterName string, levelName string) ...
method GetServerIni (line 143) | func (l *LevelService) GetServerIni(filepath string, isMaster bool) le...
method UpdateLevels (line 186) | func (l *LevelService) UpdateLevels(clusterName string, levels []level...
method UpdateLevel (line 200) | func (l *LevelService) UpdateLevel(clusterName string, level *levelCon...
method CreateLevel (line 223) | func (l *LevelService) CreateLevel(clusterName string, level *levelCon...
method DeleteLevel (line 250) | func (l *LevelService) DeleteLevel(clusterName string, levelName strin...
method initLevel (line 279) | func (l *LevelService) initLevel(levelFolderPath string, level *levelC...
method ParseTemplate (line 296) | func (l *LevelService) ParseTemplate(serverIni levelConfig.ServerIni) ...
method generateUUID (line 302) | func (l *LevelService) generateUUID() string {
function NewLevelService (line 28) | func NewLevelService(gameProcess game.Process, dstConfig dstConfig.Confi...
FILE: internal/service/levelConfig/level_config.go
type Item (line 13) | type Item struct
type LevelConfig (line 20) | type LevelConfig struct
type LevelInfo (line 25) | type LevelInfo struct
type ServerIni (line 35) | type ServerIni struct
function NewMasterServerIni (line 53) | func NewMasterServerIni() ServerIni {
function NewCavesServerIni (line 63) | func NewCavesServerIni() ServerIni {
type LevelConfigUtils (line 75) | type LevelConfigUtils struct
method initLevel (line 85) | func (p *LevelConfigUtils) initLevel(levelFolderPath string, level *Le...
method GetLevelConfig (line 101) | func (p *LevelConfigUtils) GetLevelConfig(clusterName string) (*LevelC...
method SaveLevelConfig (line 169) | func (p *LevelConfigUtils) SaveLevelConfig(clusterName string, levelCo...
function NewLevelConfigUtils (line 79) | func NewLevelConfigUtils(archive *archive.PathResolver) *LevelConfigUtils {
FILE: internal/service/login/login_service.go
constant PasswordPath (line 18) | PasswordPath = "./password.txt"
type LoginService (line 21) | type LoginService struct
method GetUserInfo (line 38) | func (l *LoginService) GetUserInfo() UserInfo {
method Login (line 57) | func (l *LoginService) Login(userInfo UserInfo, ctx *gin.Context) *res...
method Logout (line 97) | func (l *LoginService) Logout(ctx *gin.Context) {
method DirectLogin (line 106) | func (l *LoginService) DirectLogin(ctx *gin.Context) {
method ChangeUser (line 116) | func (l *LoginService) ChangeUser(username, password string) {
method ChangePassword (line 131) | func (l *LoginService) ChangePassword(newPassword string) *response.Re...
method InitUserInfo (line 155) | func (l *LoginService) InitUserInfo(userInfo UserInfo) {
method IsWhiteIP (line 163) | func (l *LoginService) IsWhiteIP(ctx *gin.Context) bool {
type UserInfo (line 25) | type UserInfo struct
function NewLoginService (line 32) | func NewLoginService(config *config.Config) *LoginService {
FILE: internal/service/mod/mod_service.go
constant steamAPIKey (line 37) | steamAPIKey = "73DF9F781D195DFD3D19DED1CB72EEE6"
constant appID (line 38) | appID = 322330
constant language (line 39) | language = 6
type ModService (line 42) | type ModService struct
method SearchModList (line 131) | func (s *ModService) SearchModList(text string, page, size int, lang s...
method SubscribeModByModId (line 242) | func (s *ModService) SubscribeModByModId(clusterName, modId, lang stri...
method GetMyModList (line 363) | func (s *ModService) GetMyModList() ([]model.ModInfo, error) {
method GetModByModId (line 370) | func (s *ModService) GetModByModId(modId string) (*model.ModInfo, erro...
method DeleteMod (line 377) | func (s *ModService) DeleteMod(clusterName, modId string) error {
method UpdateAllModInfos (line 392) | func (s *ModService) UpdateAllModInfos(clusterName, lang string) error {
method DeleteSetupWorkshop (line 443) | func (s *ModService) DeleteSetupWorkshop(clusterName string) error {
method SaveModInfo (line 475) | func (s *ModService) SaveModInfo(modInfo *model.ModInfo) error {
method AddModInfo (line 480) | func (s *ModService) AddModInfo(clusterName, lang, modid, modinfo, mod...
method GetUgcModInfo (line 501) | func (s *ModService) GetUgcModInfo(clusterName, levelName string) ([]W...
method DeleteUgcModFile (line 568) | func (s *ModService) DeleteUgcModFile(clusterName, levelName, workshop...
method parseACFFile (line 579) | func (s *ModService) parseACFFile(filePath string) map[string]Workshop...
method getModInfoConfig (line 643) | func (s *ModService) getModInfoConfig(clusterName, lang, modId string)...
method getV1ModInfoConfig (line 708) | func (s *ModService) getV1ModInfoConfig(clusterName, lang, modid, file...
method getDstUcgsModsInstalledPath (line 778) | func (s *ModService) getDstUcgsModsInstalledPath(clusterName, modid st...
method readModInfo (line 800) | func (s *ModService) readModInfo(lang, modId, modinfoPath string) map[...
method parseModInfoLua (line 810) | func (s *ModService) parseModInfoLua(lang, modId, script string) map[s...
method getVersion (line 841) | func (s *ModService) getVersion(tags interface{}) string {
method searchModInfoByWorkshopId (line 863) | func (s *ModService) searchModInfoByWorkshopId(modID int) ModInfo {
method getLocalModInfo (line 923) | func (s *ModService) getLocalModInfo(clusterName, lang, modId string) ...
method addModInfoToDb (line 945) | func (s *ModService) addModInfoToDb(clusterName, lang, modid string) e...
method getModInfo2 (line 984) | func (s *ModService) getModInfo2(modID string) (*model.ModInfo, error) {
method getPublishedFileDetailsBatched (line 1053) | func (s *ModService) getPublishedFileDetailsBatched(workshopIds []stri...
method getPublishedFileDetailsWithGet (line 1075) | func (s *ModService) getPublishedFileDetailsWithGet(workshopIds []stri...
method getPublishedFileDetails (line 1112) | func (s *ModService) getPublishedFileDetails(workshopIds []string) ([]...
method unzipToDir (line 1164) | func (s *ModService) unzipToDir(zipReader *zip.Reader, destDir string)...
function NewModService (line 48) | func NewModService(db *gorm.DB, config dstConfig.Config, pathResolver *a...
type SearchResult (line 57) | type SearchResult struct
type ModInfo (line 66) | type ModInfo struct
type Publishedfiledetail (line 87) | type Publishedfiledetail struct
type WorkshopItemDetail (line 115) | type WorkshopItemDetail struct
type WorkshopItem (line 124) | type WorkshopItem struct
function toInterface (line 1218) | func toInterface(lv lua.LValue) interface{} {
function toMap (line 1247) | func toMap(t *lua.LTable) map[string]interface{} {
function isTableArray (line 1265) | func isTableArray(t *lua.LTable) bool {
function isWorkshopId (line 1283) | func isWorkshopId(id string) bool {
function isModId (line 1289) | func isModId(str string) (int, bool) {
FILE: internal/service/player/player_service.go
type PlayerInfo (line 15) | type PlayerInfo struct
type PlayerService (line 23) | type PlayerService struct
method GetPlayerList (line 33) | func (p *PlayerService) GetPlayerList(clusterName string, levelName st...
method GetPlayerAllList (line 118) | func (p *PlayerService) GetPlayerAllList(clusterName string, gameProce...
function NewPlayerService (line 27) | func NewPlayerService(archive *archive.PathResolver) *PlayerService {
FILE: internal/service/update/factory.go
function NewUpdateService (line 8) | func NewUpdateService(dstConfig dstConfig.Config) Update {
FILE: internal/service/update/linux_update.go
type LinuxUpdate (line 9) | type LinuxUpdate struct
method Update (line 19) | func (u LinuxUpdate) Update(clusterName string) error {
function NewLinuxUpdate (line 13) | func NewLinuxUpdate(dstConfig dstConfig.Config) *LinuxUpdate {
FILE: internal/service/update/update.go
type Update (line 12) | type Update interface
function EscapePath (line 16) | func EscapePath(path string) string {
function GetBaseUpdateCmd (line 28) | func GetBaseUpdateCmd(cluster dstConfig.DstConfig) string {
function WindowUpdateCommand (line 48) | func WindowUpdateCommand(cluster dstConfig.DstConfig) (string, error) {
function LinuxUpdateCommand (line 57) | func LinuxUpdateCommand(cluster dstConfig.DstConfig) (string, error) {
FILE: internal/service/update/window_update.go
type WindowUpdate (line 9) | type WindowUpdate struct
method Update (line 19) | func (u WindowUpdate) Update(clusterName string) error {
function NewWindowUpdate (line 13) | func NewWindowUpdate(dstConfig dstConfig.Config) *WindowUpdate {
FILE: scripts/py-dst-cli/dst_version.py
function get_dst_version (line 8) | def get_dst_version(steamcdn=None):
FILE: scripts/py-dst-cli/main.py
function gen_world_iamge_job (line 18) | def gen_world_iamge_job(path):
function gen_world_setting_job (line 22) | def gen_world_setting_job(path):
function run (line 31) | def run():
FILE: scripts/py-dst-cli/parse_TooManyItemPlus_items.py
function parse_po_file (line 13) | def parse_po_file(po_file):
function apply_spice_rule (line 40) | def apply_spice_rule(item, base_translation):
function generate_translations (line 50) | def generate_translations(input_folder, po_file, output_file):
FILE: scripts/py-dst-cli/parse_mod.py
function search_mod_list (line 35) | def search_mod_list(text='', page=1, num=25):
function get_mod_base_info (line 92) | def get_mod_base_info(modId: int):
function check_is_dst_mod (line 125) | def check_is_dst_mod(mod_info):
function get_mod_info (line 132) | def get_mod_info(modId: int):
function get_mod_config_file_by_url (line 154) | def get_mod_config_file_by_url(file_url: str):
function get_mod_config_file_by_steamcmd (line 181) | def get_mod_config_file_by_steamcmd(modId: int):
function get_mod_info_dict (line 197) | def get_mod_info_dict(modId:int):
function lua_runtime (line 215) | def lua_runtime(data: bytes):
function table_dict (line 240) | def table_dict(lua_table):
function get_dst_version (line 262) | def get_dst_version():
class ModHandler (line 280) | class ModHandler(BaseHTTPRequestHandler):
method do_GET (line 282) | def do_GET(self):
method mod_api (line 292) | def mod_api(self):
method search_api (line 301) | def search_api(self):
method dst_version_api (line 317) | def dst_version_api(self):
FILE: scripts/py-dst-cli/parse_world_setting.py
function table_dict (line 12) | def table_dict(lua_table):
function dict_table (line 28) | def dict_table(py_dict, lua_temp): # converts a dict to a Lua table; list-like types end up with numeric indexes after conversion, table_...
function scan (line 37) | def scan(dict_scan, num, key_set): # returns the set of keys at the specified depth; key_set is initially passed in as an empty set
function parse_po (line 46) | def parse_po(path_po): # converts a .po file into a dict in msgctxt: msgstr format, then splits it by the depth of "." ...
function split_key (line 75) | def split_key(dict_split, list_split, value): # fills out the dict depth using the list values as keys; used to split ...
function creat_newdata (line 83) | def creat_newdata(path_cus, new_cus): # strips local declarations, unnecessary require statements, and unneeded content
function parse_cus (line 94) | def parse_cus(lua_cus, po):
function parse_option (line 136) | def parse_option(group_dict, path_base):
function parse_world_setting (line 222) | def parse_world_setting(path_base="data"):
FILE: scripts/py-dst-cli/parse_world_webp.py
function download_dst_scripts (line 13) | def download_dst_scripts(steamcdn=None):
FILE: static/template/test.go
function main (line 14) | func main() {
function CheckErr (line 38) | func CheckErr(err error) {
function GetHostInfo (line 44) | func GetHostInfo() {
function GetCpuInfo (line 49) | func GetCpuInfo() {
function GetMemInfo (line 53) | func GetMemInfo() {
function GetDiskInfo (line 63) | func GetDiskInfo() {
Condensed preview — 151 files, each showing path, character count, and a content snippet (full structured content: 1,185K chars).
[
{
"path": ".claude/skills/api-generator/SKILL.md",
"chars": 21829,
"preview": "---\nname: api-generator\ndescription: Specialized skill for developing and refactoring the DST (Don't Starve Together) Ad"
},
{
"path": ".claude/skills/find-skills/SKILL.md",
"chars": 4627,
"preview": "---\nname: find-skills\ndescription: Helps users discover and install agent skills when they ask questions like \"how do I "
},
{
"path": ".claude/skills/go-concurrency-patterns/SKILL.md",
"chars": 13685,
"preview": "---\nname: go-concurrency-patterns\ndescription: Master Go concurrency with goroutines, channels, sync primitives, and con"
},
{
"path": ".claude/skills/golang-patterns/SKILL.md",
"chars": 13917,
"preview": "---\nname: golang-patterns\ndescription: Idiomatic Go patterns, best practices, and conventions for building robust, effic"
},
{
"path": ".claude/skills/golang-pro/SKILL.md",
"chars": 3728,
"preview": "---\nname: golang-pro\ndescription: Use when building Go applications requiring concurrent programming, microservices arch"
},
{
"path": ".claude/skills/golang-pro/references/concurrency.md",
"chars": 6523,
"preview": "# Concurrency Patterns\n\n## Goroutine Lifecycle Management\n\n```go\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"syn"
},
{
"path": ".claude/skills/golang-pro/references/generics.md",
"chars": 8606,
"preview": "# Generics and Type Parameters\n\n## Basic Type Parameters\n\n```go\npackage main\n\n// Generic function with type parameter\nfu"
},
{
"path": ".claude/skills/golang-pro/references/interfaces.md",
"chars": 8361,
"preview": "# Interface Design and Composition\n\n## Small, Focused Interfaces\n\n```go\n// Single-method interfaces (idiomatic Go)\ntype "
},
{
"path": ".claude/skills/golang-pro/references/project-structure.md",
"chars": 10084,
"preview": "# Project Structure and Module Management\n\n## Standard Project Layout\n\n```\nmyproject/\n├── cmd/ # Main"
},
{
"path": ".claude/skills/golang-pro/references/testing.md",
"chars": 9461,
"preview": "# Testing and Benchmarking\n\n## Table-Driven Tests\n\n```go\npackage math\n\nimport \"testing\"\n\nfunc Add(a, b int) int {\n re"
},
{
"path": ".claude/skills/skill-creator/LICENSE.txt",
"chars": 11357,
"preview": "\n Apache License\n Version 2.0, January 2004\n "
},
{
"path": ".claude/skills/skill-creator/SKILL.md",
"chars": 32189,
"preview": "---\nname: skill-creator\ndescription: Create new skills, modify and improve existing skills, and measure skill performanc"
},
{
"path": ".claude/skills/skill-creator/agents/analyzer.md",
"chars": 10374,
"preview": "# Post-hoc Analyzer Agent\n\nAnalyze blind comparison results to understand WHY the winner won and generate improvement su"
},
{
"path": ".claude/skills/skill-creator/agents/comparator.md",
"chars": 7281,
"preview": "# Blind Comparator Agent\n\nCompare two outputs WITHOUT knowing which skill produced them.\n\n## Role\n\nThe Blind Comparator "
},
{
"path": ".claude/skills/skill-creator/agents/grader.md",
"chars": 9031,
"preview": "# Grader Agent\n\nEvaluate expectations against an execution transcript and outputs.\n\n## Role\n\nThe Grader reviews a transc"
},
{
"path": ".claude/skills/skill-creator/assets/eval_review.html",
"chars": 7058,
"preview": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <meta name=\"viewport\" content=\"width=device-width, in"
},
{
"path": ".claude/skills/skill-creator/eval-viewer/generate_review.py",
"chars": 16295,
"preview": "#!/usr/bin/env python3\n\"\"\"Generate and serve a review page for eval results.\n\nReads the workspace directory, discovers r"
},
{
"path": ".claude/skills/skill-creator/eval-viewer/viewer.html",
"chars": 44975,
"preview": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <meta name=\"viewport\" content=\"width=device-width, in"
},
{
"path": ".claude/skills/skill-creator/references/schemas.md",
"chars": 12058,
"preview": "# JSON Schemas\n\nThis document defines the JSON schemas used by skill-creator.\n\n---\n\n## evals.json\n\nDefines the evals for"
},
{
"path": ".claude/skills/skill-creator/scripts/aggregate_benchmark.py",
"chars": 14284,
"preview": "#!/usr/bin/env python3\n\"\"\"\nAggregate individual run results into benchmark summary statistics.\n\nReads grading.json files"
},
{
"path": ".claude/skills/skill-creator/scripts/generate_report.py",
"chars": 12837,
"preview": "#!/usr/bin/env python3\n\"\"\"Generate an HTML report from run_loop.py output.\n\nTakes the JSON output from run_loop.py and g"
},
{
"path": ".claude/skills/skill-creator/scripts/improve_description.py",
"chars": 10719,
"preview": "#!/usr/bin/env python3\n\"\"\"Improve a skill description based on eval results.\n\nTakes eval results (from run_eval.py) and "
},
{
"path": ".claude/skills/skill-creator/scripts/package_skill.py",
"chars": 4214,
"preview": "#!/usr/bin/env python3\n\"\"\"\nSkill Packager - Creates a distributable .skill file of a skill folder\n\nUsage:\n python uti"
},
{
"path": ".claude/skills/skill-creator/scripts/quick_validate.py",
"chars": 3972,
"preview": "#!/usr/bin/env python3\n\"\"\"\nQuick validation script for skills - minimal version\n\"\"\"\n\nimport sys\nimport os\nimport re\nimpo"
},
{
"path": ".claude/skills/skill-creator/scripts/run_eval.py",
"chars": 11464,
"preview": "#!/usr/bin/env python3\n\"\"\"Run trigger evaluation for a skill description.\n\nTests whether a skill's description causes Cl"
},
{
"path": ".claude/skills/skill-creator/scripts/run_loop.py",
"chars": 13685,
"preview": "#!/usr/bin/env python3\n\"\"\"Run the eval + improve loop until all pass or max iterations reached.\n\nCombines run_eval.py an"
},
{
"path": ".claude/skills/skill-creator/scripts/utils.py",
"chars": 1661,
"preview": "\"\"\"Shared utilities for skill-creator scripts.\"\"\"\n\nfrom pathlib import Path\n\n\n\ndef parse_skill_md(skill_path: Path) -> t"
},
{
"path": ".gitignore",
"chars": 468,
"preview": "# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.\n\n# dependencies\n/dist\n/dst\n.idea\n\n"
},
{
"path": "CLAUDE.md",
"chars": 7511,
"preview": "# CLAUDE.md\n\nThis file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.\n\n## "
},
{
"path": "LICENSE",
"chars": 35149,
"preview": " GNU GENERAL PUBLIC LICENSE\n Version 3, 29 June 2007\n\n Copyright (C) 2007 Free "
},
{
"path": "README-EN.md",
"chars": 1928,
"preview": "# dst-admin-go\n> dst-admin-go manage web\n>\n> preview https://carrot-hu23.github.io/dst-admin-go-preview/\n\n[English](READ"
},
{
"path": "README.md",
"chars": 1856,
"preview": "# dst-admin-go\n> 饥荒联机版管理后台\n> \n> 预览 https://carrot-hu23.github.io/dst-admin-go-preview/\n\n[English](README-EN.md)/[中文](REA"
},
{
"path": "cmd/server/main.go",
"chars": 859,
"preview": "// @title DST Admin Go API\n// @version 1.0\n// @description 饥荒联机版服务器管理后台 API 文档\n// @termsOfService "
},
{
"path": "config.yml",
"chars": 328,
"preview": "#绑定地址\nbindAddress: \"\"\n#启动端口\nport: 8082\n#数据库\ndatabase: dst-db\n#自动检测 单位都是 分钟\nautoCheck:\n # 森林状态检测间隔时间\n masterInterval: 5"
},
{
"path": "docs/docs.go",
"chars": 83408,
"preview": "// Package docs Code generated by swaggo/swag. DO NOT EDIT\npackage docs\n\nimport \"github.com/swaggo/swag\"\n\nconst docTempl"
},
{
"path": "docs/swagger.json",
"chars": 82741,
"preview": "{\n \"swagger\": \"2.0\",\n \"info\": {\n \"description\": \"饥荒联机版服务器管理后台 API 文档\",\n \"title\": \"DST Admin Go API\","
},
{
"path": "docs/swagger.yaml",
"chars": 36581,
"preview": "basePath: /\ndefinitions:\n dstConfig.DstConfig:\n properties:\n backup:\n type: string\n beta:\n t"
},
{
"path": "go.mod",
"chars": 3439,
"preview": "module dst-admin-go\n\ngo 1.23.0\n\ntoolchain go1.24.7\n\nrequire (\n\tgithub.com/gin-contrib/sessions v1.0.1\n\tgithub.com/gin-go"
},
{
"path": "go.sum",
"chars": 20964,
"preview": "github.com/KyleBanks/depth v1.2.1 h1:5h8fQADFrWtarTdtDudMmGsC7GPbOAu6RVB3ffsVFHc=\ngithub.com/KyleBanks/depth v1.2.1/go.m"
},
{
"path": "internal/api/handler/backup_handler.go",
"chars": 6952,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/database\"\n\t\"dst-admin-go/internal/model\"\n\t\"dst-admin-go/internal/pkg/c"
},
{
"path": "internal/api/handler/dst_api_handler.go",
"chars": 8130,
"preview": "package handler\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"log\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"time\"\n\n\t\"github.com/gin"
},
{
"path": "internal/api/handler/dst_config_handler.go",
"chars": 2332,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/collect\"\n\t\"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-go/internal/"
},
{
"path": "internal/api/handler/dst_map_handler.go",
"chars": 9045,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-go/internal/pkg/response\"\n\t\"dst-admin-go/inte"
},
{
"path": "internal/api/handler/game_config_handler.go",
"chars": 8746,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-go/internal/pkg/response\"\n\t\"dst-admin-go/inte"
},
{
"path": "internal/api/handler/game_handler.go",
"chars": 12112,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/middleware\"\n\t\"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-go/intern"
},
{
"path": "internal/api/handler/kv_handler.go",
"chars": 1494,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/model\"\n\t\"dst-admin-go/internal/pkg/response\"\n\t\"log\"\n\t\"net/http\"\n\n\t\"git"
},
{
"path": "internal/api/handler/level_handler.go",
"chars": 4477,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-go/internal/pkg/response\"\n\t\"dst-admin-go/inte"
},
{
"path": "internal/api/handler/level_log_handler.go",
"chars": 7030,
"preview": "package handler\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"context\"\n\tclusterContext \"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-g"
},
{
"path": "internal/api/handler/login_handler.go",
"chars": 5761,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/database\"\n\t\"dst-admin-go/internal/model\"\n\t\"dst-admin-go/internal/pkg/r"
},
{
"path": "internal/api/handler/mod_handler.go",
"chars": 10932,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/model\"\n\t\"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-go/internal/pk"
},
{
"path": "internal/api/handler/player_handler.go",
"chars": 1187,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-go/internal/service/game\"\n\t\"dst-admin-go/inte"
},
{
"path": "internal/api/handler/player_log_handler.go",
"chars": 4089,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/database\"\n\t\"dst-admin-go/internal/model\"\n\t\"dst-admin-go/internal/pkg/r"
},
{
"path": "internal/api/handler/statistics_handler.go",
"chars": 8294,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/database\"\n\t\"dst-admin-go/internal/model\"\n\t\"dst-admin-go/internal/pkg/r"
},
{
"path": "internal/api/handler/update.go",
"chars": 934,
"preview": "package handler\n\nimport (\n\t\"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-go/internal/pkg/response\"\n\t\"dst-admin-go/inte"
},
{
"path": "internal/api/router.go",
"chars": 5590,
"preview": "package api\n\nimport (\n\t\"dst-admin-go/internal/api/handler\"\n\t\"dst-admin-go/internal/collect\"\n\t\"dst-admin-go/internal/conf"
},
{
"path": "internal/collect/collect.go",
"chars": 12755,
"preview": "package collect\n\nimport (\n\t\"dst-admin-go/internal/database\"\n\t\"dst-admin-go/internal/model\"\n\t\"fmt\"\n\t\"log\"\n\t\"path/filepath"
},
{
"path": "internal/collect/collect_map.go",
"chars": 669,
"preview": "package collect\n\nimport (\n\t\"dst-admin-go/internal/service/archive\"\n\t\"sync\"\n)\n\nvar CollectorMap *CollectMap\n\ntype Collect"
},
{
"path": "internal/config/config.go",
"chars": 1236,
"preview": "package config\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\n\t\"gopkg.in/yaml.v3\"\n)\n\ntype Config struct {\n\tBindAddress string `yam"
},
{
"path": "internal/database/sqlite.go",
"chars": 782,
"preview": "package database\n\nimport (\n\t\"dst-admin-go/internal/config\"\n\t\"dst-admin-go/internal/model\"\n\t\"log\"\n\n\t\"github.com/glebarez/"
},
{
"path": "internal/middleware/auth.go",
"chars": 1344,
"preview": "package middleware\n\nimport (\n\t\"dst-admin-go/internal/service/login\"\n\t\"log\"\n\t\"net/http\"\n\t\"strings\"\n\n\t\"github.com/gin-cont"
},
{
"path": "internal/middleware/cluster.go",
"chars": 890,
"preview": "package middleware\n\nimport (\n\t\"dst-admin-go/internal/pkg/response\"\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"net/http"
},
{
"path": "internal/middleware/error.go",
"chars": 720,
"preview": "package middleware\n\nimport (\n\t\"log\"\n\t\"net/http\"\n\t\"runtime/debug\"\n\n\t\"github.com/gin-gonic/gin\"\n)\n\nfunc Recover(c *gin.Con"
},
{
"path": "internal/middleware/start_before.go",
"chars": 1987,
"preview": "package middleware\n\nimport (\n\t\"dst-admin-go/internal/pkg/context\"\n\t\"dst-admin-go/internal/pkg/utils/fileUtils\"\n\t\"dst-adm"
},
{
"path": "internal/model/LogRecord.go",
"chars": 253,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype Action int\n\nconst (\n\tRUN Action = iota\n\tSTOP\n\tNORMAL\n)\n\ntype LogRecord struct"
},
{
"path": "internal/model/announce.go",
"chars": 309,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype Announce struct {\n\tgorm.Model\n\tEnable bool `json:\"enable\"`\n\tFrequency"
},
{
"path": "internal/model/autoCheck.go",
"chars": 459,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype AutoCheck struct {\n\tgorm.Model\n\tName string `json:\"name\"`\n\tClusterNam"
},
{
"path": "internal/model/backup.go",
"chars": 286,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype Backup struct {\n\tgorm.Model\n\tName string `json:\"name\"`\n\tDescription st"
},
{
"path": "internal/model/backupSnapshot.go",
"chars": 273,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype BackupSnapshot struct {\n\tgorm.Model\n\tName string `json:\"name\"`\n\tInter"
},
{
"path": "internal/model/cluster.go",
"chars": 651,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype Cluster struct {\n\tgorm.Model\n\tClusterName string `gorm:\"uniqueIndex\" json"
},
{
"path": "internal/model/connect.go",
"chars": 214,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype Connect struct {\n\tgorm.Model\n\tIp string\n\tName string\n\tKuId "
},
{
"path": "internal/model/jobTask.go",
"chars": 455,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype JobTask struct {\n\tgorm.Model\n\tClusterName string `json:\"clusterName\"`\n\tLevel"
},
{
"path": "internal/model/kv.go",
"chars": 125,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype KV struct {\n\tgorm.Model\n\tKey string `json:\"key\"`\n\tValue string `json:\"value"
},
{
"path": "internal/model/modInfo.go",
"chars": 575,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype ModInfo struct {\n\tgorm.Model\n\tAuth string `json:\"auth\"`\n\tConsumerAp"
},
{
"path": "internal/model/modKv.go",
"chars": 200,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype ModKV struct {\n\tgorm.Model\n\tUserId string `json:\"userId\"`\n\tModId int `j"
},
{
"path": "internal/model/model.go",
"chars": 80,
"preview": "// Package model defines the data models used by the application.\npackage model\n"
},
{
"path": "internal/model/playerLog.go",
"chars": 398,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype PlayerLog struct {\n\tgorm.Model\n\tName string `json:\"name\"`\n\tRole "
},
{
"path": "internal/model/regenerate.go",
"chars": 118,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype Regenerate struct {\n\tgorm.Model\n\tClusterName string `json:\"clusterName\"`\n}\n"
},
{
"path": "internal/model/spawnRole.go",
"chars": 152,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype Spawn struct {\n\tgorm.Model\n\tName string\n\tRole string\n\tTime "
},
{
"path": "internal/model/webLink.go",
"chars": 193,
"preview": "package model\n\nimport \"gorm.io/gorm\"\n\ntype WebLink struct {\n\tgorm.Model\n\tTitle string `json:\"title\"`\n\tUrl string `js"
},
{
"path": "internal/pkg/context/cluster.go",
"chars": 705,
"preview": "package context\n\nimport (\n\t\"dst-admin-go/internal/service/dstConfig\"\n\n\t\"github.com/gin-gonic/gin\"\n)\n\nconst (\n\tclusterNam"
},
{
"path": "internal/pkg/response/response.go",
"chars": 1287,
"preview": "package response\n\nimport (\n\t\"net/http\"\n\n\t\"github.com/gin-gonic/gin\"\n)\n\ntype Response struct {\n\tCode int `json:\"c"
},
{
"path": "internal/pkg/utils/collectionUtils/collectionUtils.go",
"chars": 286,
"preview": "package collectionUtils\n\nimport \"log\"\n\nfunc ToSet(list []string) []string {\n\n\tvar m = map[string]string{}\n\tvar set []str"
},
{
"path": "internal/pkg/utils/dateUtils.go",
"chars": 833,
"preview": "package utils\n\nimport (\n\t\"time\"\n)\n\nfunc Bod(t time.Time) time.Time {\n\tyear, month, day := t.Date()\n\treturn time.Date(yea"
},
{
"path": "internal/pkg/utils/dstUtils/dstUtils.go",
"chars": 2356,
"preview": "package dstUtils\n\nimport (\n\t\"bytes\"\n\t\"dst-admin-go/internal/pkg/utils/fileUtils\"\n\t\"dst-admin-go/internal/service/dstConf"
},
{
"path": "internal/pkg/utils/envUtils.go",
"chars": 102,
"preview": "package utils\n\nimport \"runtime\"\n\nfunc IsWindow() bool {\n\tos := runtime.GOOS\n\treturn os == \"windows\"\n}\n"
},
{
"path": "internal/pkg/utils/fileUtils/fileUtls.go",
"chars": 6586,
"preview": "package fileUtils\n\nimport (\n\t\"bufio\"\n\t\"dst-admin-go/internal/pkg/utils/systemUtils\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"log\"\n\t\"os\"\n"
},
{
"path": "internal/pkg/utils/luaUtils/luaUtils.go",
"chars": 5544,
"preview": "package luaUtils\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"reflect\"\n\n\tlua \"github.com/yuin/gopher-lua\"\n)\n\ntype Clock struct {\n\tTotalTime"
},
{
"path": "internal/pkg/utils/shellUtils/shellUitls.go",
"chars": 2400,
"preview": "package shellUtils\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\n\t\"golang.org/x/text/encoding/simplifiedchinese\"\n)\n\nfunc E"
},
{
"path": "internal/pkg/utils/systemUtils/SystemUtils.go",
"chars": 4564,
"preview": "package systemUtils\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"net/http\"\n\t\"os\"\n\t\"os/exec\"\n\t\"os/user\"\n\t\"runtime\"\n"
},
{
"path": "internal/pkg/utils/zip/zip.go",
"chars": 5357,
"preview": "package zip\n\nimport (\n\t\"archive/zip\"\n\t\"errors\"\n\t\"io\"\n\t\"log\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n)\n\nfunc zipDir(dirPath str"
},
{
"path": "internal/service/archive/path_resolver.go",
"chars": 6282,
"preview": "package archive\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/fileUtils\"\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"io\"\n"
},
{
"path": "internal/service/backup/backup_service.go",
"chars": 9965,
"preview": "package backup\n\nimport (\n\t\"dst-admin-go/internal/database\"\n\t\"dst-admin-go/internal/model\"\n\t\"dst-admin-go/internal/pkg/co"
},
{
"path": "internal/service/dstConfig/dst_config.go",
"chars": 802,
"preview": "package dstConfig\n\ntype DstConfig struct {\n\tSteamcmd string `json:\"steamcmd\"`\n\tForce_install_dir "
},
{
"path": "internal/service/dstConfig/factory.go",
"chars": 142,
"preview": "package dstConfig\n\nimport (\n\t\"gorm.io/gorm\"\n)\n\nfunc NewDstConfig(db *gorm.DB) Config {\n\tdstConfig := NewOneDstConfig(db)"
},
{
"path": "internal/service/dstConfig/one_dst_config.go",
"chars": 5587,
"preview": "package dstConfig\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/fileUtils\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strconv\"\n\t\""
},
{
"path": "internal/service/dstMap/dst_map.go",
"chars": 8374,
"preview": "package dstMap\n\nimport (\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"image\"\n\t\"image/color\"\n\t\"image/png\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"regexp\"\n)\n\n"
},
{
"path": "internal/service/dstPath/dst_path.go",
"chars": 1022,
"preview": "package dstPath\n\nimport (\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"fmt\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n)\n\nty"
},
{
"path": "internal/service/dstPath/linux_dst.go",
"chars": 961,
"preview": "package dstPath\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/fileUtils\"\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"fmt\""
},
{
"path": "internal/service/dstPath/window_dst.go",
"chars": 603,
"preview": "package dstPath\n\nimport (\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"fmt\"\n)\n\ntype WindowDstPath struct {\n\tdstConfig ds"
},
{
"path": "internal/service/game/factory.go",
"chars": 370,
"preview": "package game\n\nimport (\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"dst-admin-go/internal/service/levelConfig\"\n\t\"runtime"
},
{
"path": "internal/service/game/linux_process.go",
"chars": 7605,
"preview": "package game\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/dstUtils\"\n\t\"dst-admin-go/internal/pkg/utils/shellUtils\"\n\t\"dst-a"
},
{
"path": "internal/service/game/process.go",
"chars": 568,
"preview": "package game\n\ntype DstPsAux struct {\n\tCpuUage string `json:\"cpuUage\"`\n\tMemUage string `json:\"memUage\"`\n\tVSZ string `"
},
{
"path": "internal/service/game/windowGameCli.go",
"chars": 15895,
"preview": "//package game\n//\n//import (\n//\t\"bufio\"\n//\t\"fmt\"\n//\t\"io\"\n//\t\"log\"\n//\t\"os\"\n//\t\"os/exec\"\n//\t\"strings\"\n//\t\"sync\"\n//\t\"sync/a"
},
{
"path": "internal/service/game/window_process.go",
"chars": 2874,
"preview": "package game\n\nimport (\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"dst-admin-go/internal/service/levelConfig\"\n\t\"fmt\"\n\t\""
},
{
"path": "internal/service/gameArchive/game_archive.go",
"chars": 8914,
"preview": "package gameArchive\n\nimport (\n\t\"dst-admin-go/internal/config\"\n\t\"dst-admin-go/internal/pkg/utils/dstUtils\"\n\t\"dst-admin-go"
},
{
"path": "internal/service/gameConfig/game_config.go",
"chars": 11620,
"preview": "package gameConfig\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/collectionUtils\"\n\t\"dst-admin-go/internal/pkg/utils/dstUti"
},
{
"path": "internal/service/level/level.go",
"chars": 9002,
"preview": "package level\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/dstUtils\"\n\t\"dst-admin-go/internal/pkg/utils/fileUtils\"\n\t\"dst-a"
},
{
"path": "internal/service/levelConfig/level_config.go",
"chars": 4797,
"preview": "package levelConfig\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/dstUtils\"\n\t\"dst-admin-go/internal/pkg/utils/fileUtils\"\n\t"
},
{
"path": "internal/service/login/login_service.go",
"chars": 5094,
"preview": "package login\n\nimport (\n\t\"dst-admin-go/internal/config\"\n\t\"dst-admin-go/internal/pkg/response\"\n\t\"dst-admin-go/internal/pk"
},
{
"path": "internal/service/mod/mod_service.go",
"chars": 35781,
"preview": "package mod\n\nimport (\n\t\"archive/zip\"\n\t\"bytes\"\n\t\"crypto/tls\"\n\t\"dst-admin-go/internal/model\"\n\t\"dst-admin-go/internal/pkg/u"
},
{
"path": "internal/service/player/player_service.go",
"chars": 3620,
"preview": "package player\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/fileUtils\"\n\t\"dst-admin-go/internal/service/archive\"\n\t\"dst-adm"
},
{
"path": "internal/service/update/factory.go",
"chars": 285,
"preview": "package update\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils\"\n\t\"dst-admin-go/internal/service/dstConfig\"\n)\n\nfunc NewUpdate"
},
{
"path": "internal/service/update/linux_update.go",
"chars": 655,
"preview": "package update\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/shellUtils\"\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"log\""
},
{
"path": "internal/service/update/update.go",
"chars": 1864,
"preview": "package update\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/fileUtils\"\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"fmt\"\n"
},
{
"path": "internal/service/update/window_update.go",
"chars": 702,
"preview": "package update\n\nimport (\n\t\"dst-admin-go/internal/pkg/utils/shellUtils\"\n\t\"dst-admin-go/internal/service/dstConfig\"\n\t\"log\""
},
{
"path": "scripts/build_linux.sh",
"chars": 87,
"preview": "rm -rf dst-admin-go\nGOOS=linux GOARCH=amd64 go build -o dst-admin-go cmd/server/main.go"
},
{
"path": "scripts/build_swagger.sh",
"chars": 39,
"preview": "swag init -g cmd/server/main.go -o docs"
},
{
"path": "scripts/build_window.sh",
"chars": 73,
"preview": "GOOS=windows GOARCH=amd64 go build -o dst-admin-go.exe cmd/server/main.go"
},
{
"path": "scripts/docker/Dockerfile",
"chars": 1048,
"preview": "# 使用官方的Ubuntu基础镜像\nFROM ubuntu:20.04\n\nLABEL maintainer=\"hujinbo23 jinbohu23@outlook.com\"\nLABEL description=\"DoNotStarveTo"
},
{
"path": "scripts/docker/README.md",
"chars": 5480,
"preview": "# Docker 部署脚本\n\n用于构建 DST Admin Go 的标准 Docker 镜像(Linux x86_64 架构)。\n\n## 目录内容\n\n- `Dockerfile` - Docker 镜像构建文件(基于 Ubuntu 20.0"
},
{
"path": "scripts/docker/docker-entrypoint.sh",
"chars": 1744,
"preview": "#!/bin/bash\n\n# 修正最大文件描述符数,部分docker版本给的默认值过高,会导致screen运行卡顿\nulimit -Sn 10000\n\n# 获取传入的参数\nsteam_cmd_path='/app/steamcmd'\nste"
},
{
"path": "scripts/docker/docker_build.sh",
"chars": 142,
"preview": "#!/bin/bash\n\n# 获取命令行参数\nTAG=$1\n\n# 构建镜像\ndocker build -t hujinbo23/dst-admin-go:$TAG .\n\n# 推送镜像到Docker Hub\ndocker push hujin"
},
{
"path": "scripts/docker/docker_dst_config",
"chars": 148,
"preview": "steamcmd=/app/steamcmd\nforce_install_dir=/app/dst-dedicated-server\ncluster=MyDediServer\nbackup=/app/backup\nmod_download_"
},
{
"path": "scripts/docker-build-mac/Dockerfile",
"chars": 1768,
"preview": "FROM ubuntu:22.04\n\nENV DEBIAN_FRONTEND=noninteractive\nENV DST_DIR=/dst-server\n\n\n# ===== 安装必要依赖 =====\nRUN apt update && a"
},
{
"path": "scripts/docker-build-mac/README.md",
"chars": 6581,
"preview": "# Docker 部署脚本(Mac ARM64)\n\n用于在 Mac ARM64(Apple Silicon: M1/M2/M3)平台上构建和运行 DST Admin Go 的特殊 Docker 镜像。\n\n## 背景说明\n\n饥荒联机版(Don"
},
{
"path": "scripts/docker-build-mac/docker-entrypoint.sh",
"chars": 783,
"preview": "#!/bin/bash\n\n# 修正最大文件描述符数,部分docker版本给的默认值过高,会导致screen运行卡顿\nulimit -Sn 10000\n\n# 启用 amd64 架构\ndpkg --add-architecture amd64\n"
},
{
"path": "scripts/docker-build-mac/docker_dst_config",
"chars": 150,
"preview": "steamcmd=/app/steamcmd\nforce_install_dir=/app/dst-dedicated-server\ncluster=MyDediServer\nbackup=/app/backup\nmod_download_"
},
{
"path": "scripts/docker-build-mac/dst-mac-arm64-env-install.md",
"chars": 1603,
"preview": "# dst-mac-arm64-env-install\n\n安装基础依赖\n\n```shell\napt update\napt install -y wget unzip tar\n```\n\n\n\n下载 DepotDownloader\n\n```she"
},
{
"path": "scripts/py-dst-cli/README.md",
"chars": 5409,
"preview": "# py-dst-cli\n\nPython 工具集,用于解析和处理饥荒联机版(Don't Starve Together)的各类配置文件、MOD 信息和游戏资源。\n\n## 功能概述\n\n本工具提供以下核心功能:\n\n1. **世界配置解析** -"
},
{
"path": "scripts/py-dst-cli/dst_version.py",
"chars": 1027,
"preview": "import steam.client\nimport steam.client.cdn\nimport steam.core.cm\nimport steam.webapi\nfrom steam.exceptions import SteamE"
},
{
"path": "scripts/py-dst-cli/dst_world_setting.json",
"chars": 76655,
"preview": "{\"zh\": {\"forest\": {\"WORLDGEN_GROUP\": {\"monsters\": {\"order\": 5, \"text\": \"敌对生物以及刷新点\", \"atlas\": {\"name\": \"worldgen_customiz"
},
{
"path": "scripts/py-dst-cli/main.py",
"chars": 2299,
"preview": "import json\nfrom http.server import BaseHTTPRequestHandler, HTTPServer\n\nimport dst_version\nimport parse_world_setting\nim"
},
{
"path": "scripts/py-dst-cli/parse_TooManyItemPlus_items.py",
"chars": 2784,
"preview": "import os\nimport re\nimport json\n\n# 定义修饰词规则\nSPICE_TRANSLATIONS = {\n \"_SPICE_GARLIC\": \"蒜\",\n \"_SPICE_CHILI\": \"辣\",\n "
},
{
"path": "scripts/py-dst-cli/parse_mod.py",
"chars": 11307,
"preview": "import lupa\nfrom functools import reduce\n\nimport steam.client\nimport steam.client.cdn\nimport steam.core.cm\nimport steam."
},
{
"path": "scripts/py-dst-cli/parse_world_setting.py",
"chars": 12215,
"preview": "##!/usr/bin/python3\n# -*- coding: utf-8 -*-\n\nimport os\nfrom functools import reduce\nfrom os.path import join as pjoin\nfr"
},
{
"path": "scripts/py-dst-cli/parse_world_webp.py",
"chars": 3664,
"preview": "# -*- coding: utf-8 -*-\nimport os\nimport zipfile\nfrom shutil import rmtree\n\nimport gevent\nimport steam.client\nimport ste"
},
{
"path": "scripts/py-dst-cli/steamapikey.txt",
"chars": 32,
"preview": "73DF9F781D195DFD3D19DED1CB72EEE6"
},
{
"path": "static/Caves/leveldataoverride.lua",
"chars": 3711,
"preview": "return {\n\tbackground_node_range = {\n\t\t0,\n\t\t1,\n\t},\n\tdesc = \"探查洞穴…… 一起!\",\n\thideminimap = false,\n\tid = \"DST_CAVE\",\n\tlocatio"
},
{
"path": "static/Caves/modoverrides.lua",
"chars": 11,
"preview": "return { }"
},
{
"path": "static/Caves/server.ini",
"chars": 179,
"preview": "[NETWORK]\nserver_port = 10998\n\n\n[SHARD]\nis_master = false\nname = Caves\nid = 2\n\n\n[ACCOUNT]\nencode_user_path = false\n\n\n[ST"
},
{
"path": "static/Master/leveldataoverride.lua",
"chars": 6137,
"preview": "return {\n\tdesc = \"永不结束的饥荒沙盒模式。永远可以在绚丽之门复活。\",\n\thideminimap = false,\n\tid = \"ENDLESS\",\n\tlocation = \"forest\",\n\tmax_playlist_"
},
{
"path": "static/Master/modoverrides.lua",
"chars": 11,
"preview": "return { }"
},
{
"path": "static/Master/server.ini",
"chars": 113,
"preview": "[NETWORK]\nserver_port = 10999\n\n\n[SHARD]\nis_master = true\nname = Master\nid = 1\n[ACCOUNT]\nencode_user_path = false\n"
},
{
"path": "static/customcommands.lua",
"chars": 8294,
"preview": "function list()\n for i, v in ipairs(AllPlayers) do\n print(string.format(\"[%d] (%s) %s <%s>\", i, v.userid, v.na"
},
{
"path": "static/template/caves_server.ini",
"chars": 158,
"preview": "[NETWORK]\nserver_port = {{.ServerPort}}\n\n\n[SHARD]\nis_master = {{.IsMaster}}\nname = {{.Name}}\nid = {{.Id}}\n\n\n[ACCOUNT]\nen"
},
{
"path": "static/template/cluster.ini",
"chars": 1331,
"preview": "[GAMEPLAY]\n#游戏模式,可选 survival,endless,wilderness\ngame_mode = {{.GameMode}}\n#最大玩家人数 (上限16)\nmax_players = {{.MaxPlayers}}\n#"
},
{
"path": "static/template/cluster2.ini",
"chars": 1506,
"preview": "[GAMEPLAY]\n#游戏模式,可选 survival,endless,wilderness\ngame_mode = {{.GameMode}}\n#最大玩家人数 (上限16)\nmax_players = {{.MaxPlayers}}\n#"
},
{
"path": "static/template/master_server.ini",
"chars": 156,
"preview": "[NETWORK]\nserver_port = {{.ServerPort}}\n\n\n[SHARD]\nis_master = {{.IsMaster}}\nname = {{.Name}}\nid = {{.Id}}\n\n\n[ACCOUNT]\nen"
},
{
"path": "static/template/server.ini",
"chars": 255,
"preview": "[NETWORK]\nserver_port = {{.ServerPort}}\n\n\n[SHARD]\nis_master = {{.IsMaster}}\nname = {{.Name}}\nid = {{.Id}}\n\n\n[ACCOUNT]\nen"
},
{
"path": "static/template/test.go",
"chars": 1582,
"preview": "package main\n\nimport (\n\t\"bytes\"\n\t\"dst-admin-go/vo\"\n\t\"fmt\"\n\t\"text/template\"\n\n\t\"github.com/shirou/gopsutil/disk\"\n\t\"github."
}
]
// ... and 1 more file (download for full content)
About this extraction
This page contains the full source code of the hujinbo23/dst-admin-go GitHub repository, extracted and formatted as plain text. The extraction includes 151 files (1.0 MB), approximately 283.4k tokens, and a symbol index with 620 extracted functions, classes, methods, constants, and types.