# AI Tools — Claude Skill & LLM Context
EasyOp ships with two sets of AI helpers: a Claude Code skill that teaches Claude how to write operations in your project, and LLM context files that any AI assistant can load to understand the gem's internals.
Copy the `claude-plugin/` folder into your project (or point your AI assistant at `llms/overview.md`) and Claude will write correct, idiomatic EasyOp code without needing the API re-explained every session.
## Claude Code Skill
The skill lives in `claude-plugin/skills/easyop/`. It is a structured prompt bundle that Claude Code auto-activates whenever you mention operations, flows, or the `easyop` gem.
### Skill structure
```
claude-plugin/
├── .claude-plugin/
│   └── plugin.json              # Plugin metadata & activation triggers
└── skills/
    └── easyop/
        ├── SKILL.md             # Core skill — loaded automatically when relevant
        ├── references/
        │   ├── ctx.md           # Ctx API reference
        │   ├── operations.md    # Operation DSL (hooks, rescue, schema)
        │   ├── flow.md          # Flow + FlowBuilder reference
        │   └── hooks-and-rescue.md
        └── examples/
            ├── basic_operation.rb
            ├── flow.rb
            ├── rails_controller.rb
            └── testing.rb
```
### Installing the skill
There are two ways to install the skill into a project.
**Option A — Copy to your project** (recommended; works with any Claude Code version):
```shell
# From your project root
cp -r path/to/easyop/claude-plugin/.claude-plugin .
cp -r path/to/easyop/claude-plugin/skills .
```
Then commit both `.claude-plugin/` and `skills/` to your repo. Every developer on the team gets the skill automatically.
**Option B — Reference from `CLAUDE.md`** (if your project already has a `CLAUDE.md`):
```markdown
## EasyOp
@path/to/easyop/claude-plugin/skills/easyop/SKILL.md
```
### What the skill does
Once installed, Claude will:
- Generate correct `include Easyop::Operation` boilerplate with `call`, typed `params`, and `rescue_from`
- Compose operations into flows with `include Easyop::Flow` and the `flow` DSL
- Add `rollback` methods to flow steps when you describe multi-step DB operations
- Use `skip_if` for optional steps and lambda guards for inline conditions
- Choose the right plugin (`Recording`, `Async`, `Transactional`) based on your intent
- Write RSpec specs that call operations directly without HTTP or controller overhead
### Activation triggers
The skill activates automatically when you say things like:
- "create an operation for …"
- "compose these into a flow"
- "add rollback to this flow step"
- "run this operation in the background"
- "wrap this in a transaction"
- "replace this service object with an operation"
- "how is easyop different from interactor"
## LLM Context Files
The `llms/` folder contains two Markdown files optimised for pasting into any AI chat (ChatGPT, Gemini, Cursor, etc.) to give the model a full picture of the gem before asking it to write or review code.
```
llms/
├── overview.md   # File map, module responsibilities, key design decisions
└── usage.md      # Common patterns and recipes
```
### overview.md
Load this before asking an AI to modify or extend the gem itself. It covers:
- The full file map with one-line descriptions of each module
- How `Ctx`, `Hooks`, `Rescuable`, `Schema`, `Flow`, and all four plugins relate to each other
- Plugin architecture: how `Plugins::Base` works, the `plugin` DSL, prepend-based wrapping
### usage.md
Load this before asking an AI to write application code using the gem. It covers:
- Basic operation, chainable callbacks, bang variant
- Hooks, `rescue_from`, typed schemas
- Flows: rollback, `skip_if`, lambda guards, nested flows
- All four plugins with configuration examples
- Rails controller integration patterns
- RSpec patterns for unit and integration testing
### How to use the LLM files
**In Claude.ai / ChatGPT / Gemini** — paste the file content as a system message or first user message before your question:

```text
<context>
[paste contents of llms/usage.md here]
</context>

Now write an operation that validates a payment and records it...
```
**In Cursor / Windsurf / Copilot Chat** — open `llms/usage.md` and use "Add to context" (or the equivalent @-mention), then ask your question in the chat panel.
**Programmatically** — when building an AI pipeline or agent, inject the file content as a system prompt:

```ruby
require "anthropic"

client = Anthropic::Client.new
llm_context = File.read("path/to/easyop/llms/usage.md")

response = client.messages(
  model: "claude-opus-4-6",
  max_tokens: 2048,
  system: llm_context,
  messages: [{ role: "user", content: "Write an operation that processes a refund..." }]
)
```
### Example prompts
Once the skill or LLM context is loaded, these prompts produce idiomatic EasyOp code:
| Prompt | What Claude generates |
|---|---|
| "Create an operation that registers a new user with email validation" | Operation with `params` schema, `rescue_from ActiveRecord::RecordInvalid`, and `before` hook for normalisation |
| "Turn this checkout logic into a flow with rollback" | `include Easyop::Flow` with inner-class steps, each with a `rollback` method |
| "Make this operation run in the background" | Adds `plugin Easyop::Plugins::Async` to the base class and shows `.call_async` usage |
| "I need an audit trail for all operations" | Adds `plugin Easyop::Plugins::Recording`, a migration for `operation_logs`, and `recording false` opt-out where needed |
| "Write an RSpec suite for this flow" | Unit specs for each step in isolation plus an integration spec that runs the full flow |
## Building a custom plugin with AI assistance
The plugin architecture is documented in Plugins → Building Your Own. When asking Claude to scaffold a new plugin, include this context snippet in your prompt:
```ruby
# Every plugin must:
#   1. Inherit from Easyop::Plugins::Base
#   2. Implement self.install(operation_class, **options)
#   3. Use prepend on a Module to wrap _easyop_run (or around hooks for lighter wrapping)
class Easyop::Plugins::MyPlugin < Easyop::Plugins::Base
  def self.install(op, **opts)
    op.prepend(Wrapper)
    op.extend(ClassMethods)
    op.instance_variable_set(:@my_option, opts[:my_option])
  end

  module Wrapper
    def _easyop_run
      # before-call work
      super
      # after-call work
    end
  end

  module ClassMethods
    def my_option
      @my_option
    end
  end
end
```
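The prepend-based wrapping in step 3 is plain Ruby, so it can be demonstrated without the gem. Here is a minimal, self-contained illustration; all class and module names below are invented for the demo.

```ruby
# Stand-in operation exposing the method a plugin wrapper would target.
class DemoOperation
  attr_reader :log

  def initialize
    @log = []
  end

  def _easyop_run
    @log << :run
  end
end

# Prepending inserts the module *before* the class in the ancestor chain,
# so the wrapper's _easyop_run runs first and `super` reaches the original.
module Wrapper
  def _easyop_run
    @log << :before # before-call work
    super           # the class's own _easyop_run
    @log << :after  # after-call work
  end
end

DemoOperation.prepend(Wrapper)

op = DemoOperation.new
op._easyop_run
p op.log # => [:before, :run, :after]
```

This is why plugins use `prepend` rather than `include`: an included module sits *after* the class in the lookup chain and could never intercept a method the class defines itself.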
## Keeping context fresh
The `llms/` and `claude-plugin/` files are maintained alongside the gem source. When you upgrade `easyop`:
- Re-copy `claude-plugin/` into your project to pick up new skill references
- Re-paste `llms/usage.md` if you use it in an AI pipeline
- Check the CHANGELOG for new DSL features that the AI should know about