Everything you need to start generating production-ready schemas from natural language.
Get up and running with SchemaGen in less than 5 minutes.
Create a free account at app.schemagen.tech/signup
Navigate to Settings → API Keys, generate your first API key, then export it as an environment variable:
export SCHEMAGEN_API_KEY=your_api_key_here
Use our CLI or API to generate a schema from a natural language description.
curl -X POST https://api.schemagen.tech/v1/generate \
  -H "Authorization: Bearer $SCHEMAGEN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "description": "Create a user table with email, password, and name",
    "format": "json-schema"
  }'
Install the SchemaGen CLI with your preferred package manager.
Using npm:
npm install -g @schemagen/cli
Using yarn:
yarn global add @schemagen/cli
Using pip (Python):
pip install schemagen
Verify the installation:
schemagen --version
Generate a JSON Schema from a description:
schemagen generate \
  --description "User model with email, password, and optional profile info" \
  --format json-schema \
  --output user.schema.json
Generate a database migration:
schemagen migrate \
  --description "User table with email, password, created_at" \
  --database postgresql \
  --output migrations/001_create_users.sql
Generate sample data:
schemagen data \
  --schema user.schema.json \
  --count 100 \
  --output sample-users.json
Generate a schema programmatically:
// JavaScript/Node.js
const SchemaGen = require('@schemagen/sdk');

const client = new SchemaGen({
  apiKey: process.env.SCHEMAGEN_API_KEY
});

// Top-level await is not available in CommonJS, so wrap the call in an async function.
async function main() {
  const result = await client.generate({
    description: 'Product catalog with SKU, name, price, and inventory',
    format: 'json-schema',
    options: {
      strict: true,
      includeExamples: true
    }
  });

  console.log(result.schema);
}

main().catch(console.error);
SchemaGen generates fully compliant JSON Schema (Draft 7, 2019-09, 2020-12) with rich validation rules.
Input:
"Create a user schema with required email (must be valid email format), optional age (18-120), and roles array that must contain at least one of: user, admin, or moderator"
Output:
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "title": "User",
  "description": "User account schema",
  "required": ["email"],
  "properties": {
    "email": {
      "type": "string",
      "format": "email",
      "description": "User email address",
      "examples": ["user@example.com"]
    },
    "age": {
      "type": "integer",
      "minimum": 18,
      "maximum": 120,
      "description": "User age"
    },
    "roles": {
      "type": "array",
      "items": {
        "type": "string",
        "enum": ["user", "admin", "moderator"]
      },
      "minItems": 1,
      "uniqueItems": true,
      "description": "User roles"
    }
  },
  "additionalProperties": false
}
The more specific your description, the better the output. Include validation rules, examples, and constraints in your natural language input for optimal results.
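For example, the two prompts below use the same flags shown earlier; the second states its constraints explicitly, so they are carried straight into the generated schema (the descriptions themselves are only illustrative):

# Vague: types and constraints are left for the generator to guess
schemagen generate \
  --description "An order model" \
  --format json-schema

# Specific: required fields, ranges, and allowed values are stated up front
schemagen generate \
  --description "Order with required UUID id, positive total price, status limited to pending, paid, or shipped, and an items array with at least one entry" \
  --format json-schema \
  --output order.schema.json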
Generate production-ready migration scripts for any supported database, complete with indexes, constraints, and a matching rollback (down) migration.
-- Migration: 20260118_create_users_table.up.sql
-- Generated by SchemaGen
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email VARCHAR(255) NOT NULL UNIQUE,
    password_hash VARCHAR(255) NOT NULL,
    name VARCHAR(100),
    age INTEGER CHECK (age >= 18 AND age <= 120),
    roles TEXT[] NOT NULL DEFAULT ARRAY['user']::TEXT[],
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
-- Indexes for performance
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_created_at ON users(created_at);
-- Trigger to automatically update updated_at
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = CURRENT_TIMESTAMP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER update_users_updated_at
    BEFORE UPDATE ON users
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_at_column();
-- Migration: 20260118_create_users_table.down.sql
-- Rollback generated by SchemaGen
DROP TRIGGER IF EXISTS update_users_updated_at ON users;
DROP FUNCTION IF EXISTS update_updated_at_column();
DROP INDEX IF EXISTS idx_users_created_at;
DROP INDEX IF EXISTS idx_users_email;
DROP TABLE IF EXISTS users;
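If you are not using a dedicated migration runner, the generated up and down files can be applied with plain psql. A minimal sketch, assuming a PostgreSQL connection string in DATABASE_URL (the variable is a placeholder, not something SchemaGen sets for you):

# Apply the migration
psql "$DATABASE_URL" -f migrations/20260118_create_users_table.up.sql

# Roll it back
psql "$DATABASE_URL" -f migrations/20260118_create_users_table.down.sql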
Generate realistic test data that respects all your schema constraints and validation rules.
schemagen data \
  --schema user.schema.json \
  --count 1000 \
  --format json \
  --output test-data.json
// Inside an async function, reusing the client from the SDK example above.
// userSchema is a schema object, e.g. parsed from user.schema.json.
const data = await client.generateData({
  schema: userSchema,
  count: 100,
  options: {
    realistic: true,
    locale: 'en_US'
  }
});
Integrate schema validation into your CI/CD pipeline to catch breaking changes before they reach production.
# .github/workflows/schema-validation.yml
name: Schema Validation

on:
  pull_request:
    paths:
      - 'schemas/**'

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install SchemaGen CLI
        run: npm install -g @schemagen/cli

      - name: Validate Schemas
        env:
          SCHEMAGEN_API_KEY: ${{ secrets.SCHEMAGEN_API_KEY }}
        run: |
          schemagen validate ./schemas/**/*.json

      - name: Check for Breaking Changes
        env:
          SCHEMAGEN_API_KEY: ${{ secrets.SCHEMAGEN_API_KEY }}
        run: |
          schemagen diff \
            --base main \
            --head HEAD \
            --fail-on-breaking
# .gitlab-ci.yml
schema_validation:
  stage: test
  image: node:18
  before_script:
    - npm install -g @schemagen/cli
  script:
    - schemagen validate ./schemas
    - schemagen diff --base main --head HEAD --fail-on-breaking
  only:
    - merge_requests
  variables:
    SCHEMAGEN_API_KEY: $API_KEY
Complete REST API for programmatic access to all SchemaGen features.
Base URL:
https://api.schemagen.tech/v1
All API requests require an API key passed in the Authorization header:
Authorization: Bearer YOUR_API_KEY
POST /generate    Generate a schema
POST /migrate     Generate migration scripts
POST /data        Generate sample data
POST /validate    Validate a schema
POST /diff        Compare schemas
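The remaining endpoints follow the same calling convention as /generate. As a sketch, validating an existing schema might look like the request below; the exact payload /validate expects is not documented here, so treat the assumption that it accepts the schema document itself as the request body as illustrative only:

curl -X POST https://api.schemagen.tech/v1/validate \
  -H "Authorization: Bearer $SCHEMAGEN_API_KEY" \
  -H "Content-Type: application/json" \
  --data-binary @user.schema.json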
We're here to help you get the most out of SchemaGen.