SQL to JSON Converter
Convert SQL CREATE TABLE statements to a JSON schema — column names, types, nullable, primary key, unique, and defaults. All in your browser.
| Column | Type | Constraints |
|---|---|---|
| id | INT | PK, NOT NULL |
| username | VARCHAR(50) | UNIQUE, NOT NULL |
| email | VARCHAR(255) | UNIQUE, NOT NULL |
| password_hash | CHAR(60) | NOT NULL |
| age | INT | nullable |
| status | VARCHAR(20) | DEFAULT 'active', nullable |
| created_at | TIMESTAMP | DEFAULT CURRENT_TIMESTAMP, nullable |

| Column | Type | Constraints |
|---|---|---|
| id | INT | PK, NOT NULL |
| user_id | INT | NOT NULL |
| title | VARCHAR(500) | UNIQUE, NOT NULL |
| content | TEXT | nullable |
| published | BOOLEAN | DEFAULT FALSE, nullable |
| view_count | INT | DEFAULT 0, nullable |
| created_at | TIMESTAMP | DEFAULT CURRENT_TIMESTAMP, nullable |

2 tables • 14 total columns
You have a PostgreSQL CREATE TABLE statement with 20 columns, constraints, and defaults, and you need a JSON schema representing the table structure for your API documentation or frontend validation. Manually mapping VARCHAR(255) → "type": "string", "maxLength": 255 and INTEGER NOT NULL → "type": "integer", "required": true for 20 columns is tedious and error-prone.
Why This Converter (Not the JSON to SQL Generator)
PureDevTools has a JSON to SQL Generator that goes the other direction — converting JSON data to INSERT statements. This tool converts SQL CREATE TABLE statements to JSON schema — with column names, types, constraints, and defaults extracted automatically. Use the JSON-to-SQL tool for data import; use this tool for schema extraction.
What Is a SQL to JSON Converter?
A SQL to JSON converter transforms SQL CREATE TABLE statements into structured JSON objects that represent the table schema. Each column’s name, data type, nullability, primary key flag, uniqueness constraint, and default value are extracted and emitted as a clean JSON array — making the schema machine-readable, portable, and easy to embed in documentation or code.
Typical use cases:
- API documentation — generate a JSON schema for REST or GraphQL API response shapes directly from your database model
- ORM bootstrapping — use the JSON output to scaffold TypeScript interfaces, Python dataclasses, or Go structs
- Schema migration tooling — diff two JSON schemas to detect breaking changes between database versions
- Database auditing — enumerate all columns and their constraints without connecting to a live database
- Code generation pipelines — feed the JSON schema into a template engine to produce model files automatically
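As a sketch of the code-generation use case, the converter's column array (the JSON format documented in the Output Format section below) can be fed into a small template. The `toInterface` and `tsType` helpers here are hypothetical, and the SQL-to-TypeScript type mapping is illustrative only:

```typescript
// Minimal sketch: turn one table object from the converter's JSON
// output into a TypeScript interface declaration.
type Column = {
  name: string;
  type: string;
  nullable: boolean;
  primaryKey: boolean;
  unique: boolean;
  default: string | null;
};

type Table = { tableName: string; columns: Column[] };

// Map a SQL type string to a rough TypeScript equivalent.
function tsType(sqlType: string): string {
  const base = sqlType.replace(/\(.*\)/, "").toUpperCase();
  if (["INT", "BIGINT", "SMALLINT", "DECIMAL", "FLOAT", "DOUBLE"].includes(base)) {
    return "number";
  }
  if (base === "BOOLEAN") return "boolean";
  return "string"; // VARCHAR, CHAR, TEXT, TIMESTAMP, JSON, ...
}

function toInterface(table: Table): string {
  const fields = table.columns
    .map((c) => `  ${c.name}${c.nullable ? "?" : ""}: ${tsType(c.type)};`)
    .join("\n");
  // Capitalize the table name for the interface identifier.
  const name = table.tableName[0].toUpperCase() + table.tableName.slice(1);
  return `interface ${name} {\n${fields}\n}`;
}
```

The same loop can just as easily emit Python dataclasses or Go structs; only the type map and the field template change.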
Supported SQL Dialects
The converter handles the most common SQL flavors used in production:
- MySQL / MariaDB — backtick-quoted identifiers, AUTO_INCREMENT, UNSIGNED, CHARSET
- PostgreSQL — double-quoted identifiers, SERIAL, GENERATED ALWAYS AS IDENTITY
- SQLite — AUTOINCREMENT, INTEGER PRIMARY KEY
- SQL Server (T-SQL) — bracket-quoted identifiers [name], IDENTITY(1,1)
- Standard SQL — unquoted identifiers, DEFAULT expressions
Parsed Constraints
For each column, the tool extracts:
| Constraint | What it captures |
|---|---|
| type | Data type including precision — VARCHAR(255), DECIMAL(10,2), BIGINT |
| nullable | false when NOT NULL or PRIMARY KEY is present; true otherwise |
| primaryKey | true for inline PRIMARY KEY or table-level PRIMARY KEY (col) |
| unique | true for inline UNIQUE or table-level UNIQUE KEY / UNIQUE INDEX |
| default | The literal default expression: 'active', 0, CURRENT_TIMESTAMP, NULL |
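To make the extraction concrete, here is an illustrative sketch (not this tool's actual parser) of pulling those five constraints out of a single column definition line with regular expressions; the `parseColumn` function name is hypothetical:

```typescript
// Illustrative sketch: extract name, type, and constraints from one
// column definition line such as `status` VARCHAR(20) DEFAULT 'active'.
function parseColumn(line: string) {
  // Optional backtick/double-quote/bracket around the name, then the
  // type with optional precision, e.g. VARCHAR(255) or DECIMAL(10,2).
  const m = line.trim().match(/^[`"\[]?(\w+)[`"\]]?\s+(\w+(?:\(\d+(?:,\d+)?\))?)/);
  if (!m) return null;
  const upper = line.toUpperCase();
  const primaryKey = /\bPRIMARY\s+KEY\b/.test(upper);
  // Keep the default expression verbatim, quotes included.
  const defMatch = line.match(/\bDEFAULT\s+('[^']*'|\S+)/i);
  return {
    name: m[1],
    type: m[2].toUpperCase(),
    // Columns are nullable unless NOT NULL or PRIMARY KEY says otherwise.
    nullable: !/\bNOT\s+NULL\b/.test(upper) && !primaryKey,
    primaryKey,
    unique: /\bUNIQUE\b/.test(upper),
    default: defMatch ? defMatch[1] : null,
  };
}
```

A production parser has more to handle (quoted defaults containing keywords, multi-word types like DOUBLE PRECISION), but the shape of the extracted object is exactly the one described above.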
Output Format
The JSON output is an array of table objects. Each table has a tableName and a columns array:
[
{
"tableName": "users",
"columns": [
{
"name": "id",
"type": "INT",
"nullable": false,
"primaryKey": true,
"unique": false,
"default": null
},
{
"name": "email",
"type": "VARCHAR(255)",
"nullable": false,
"primaryKey": false,
"unique": true,
"default": null
},
{
"name": "created_at",
"type": "TIMESTAMP",
"nullable": true,
"primaryKey": false,
"unique": false,
"default": "CURRENT_TIMESTAMP"
}
]
}
]
This format maps directly to how ORMs and schema validation libraries (Zod, Yup, JSON Schema) represent models, making it straightforward to write a thin adapter layer.
Multiple Tables
Paste an entire SQL schema file containing any number of CREATE TABLE statements. The tool processes each one in order and returns a top-level JSON array with one object per table.
CREATE TABLE categories (
id INT PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(100) NOT NULL UNIQUE
);
CREATE TABLE products (
id INT PRIMARY KEY AUTO_INCREMENT,
category_id INT NOT NULL,
name VARCHAR(255) NOT NULL,
price DECIMAL(10,2) NOT NULL DEFAULT 0.00,
in_stock BOOLEAN DEFAULT TRUE
);
Both categories and products appear in the JSON output as separate array entries.
Table-Level Constraints
Constraints defined at the table level are applied correctly to individual columns:
CREATE TABLE orders (
order_id INT,
user_id INT NOT NULL,
status VARCHAR(20) DEFAULT 'pending',
PRIMARY KEY (order_id),
UNIQUE KEY uk_order_user (user_id)
);
Here order_id gets primaryKey: true and nullable: false, and user_id gets unique: true — even though neither has inline constraints.
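A sketch of that second pass, assuming columns have already been parsed into the flat shape shown in the Output Format section (the `applyTableConstraints` helper is hypothetical, not this tool's implementation):

```typescript
// Sketch: apply table-level PRIMARY KEY / UNIQUE declarations from the
// CREATE TABLE body to already-parsed columns.
type Col = { name: string; nullable: boolean; primaryKey: boolean; unique: boolean };

function applyTableConstraints(cols: Col[], body: string): Col[] {
  // PRIMARY KEY (col1, col2) — strip quoting characters from each name.
  const pk = body.match(/PRIMARY\s+KEY\s*\(([^)]+)\)/i);
  const pkNames = pk
    ? pk[1].split(",").map((s) => s.trim().replace(/[`"\[\]]/g, ""))
    : [];
  // UNIQUE KEY name (col, ...) / UNIQUE INDEX name (col, ...).
  const uniqueNames: string[] = [];
  for (const u of body.matchAll(/UNIQUE\s+(?:KEY|INDEX)\s+\w+\s*\(([^)]+)\)/gi)) {
    for (const n of u[1].split(",")) {
      uniqueNames.push(n.trim().replace(/[`"\[\]]/g, ""));
    }
  }
  return cols.map((c) => ({
    ...c,
    primaryKey: c.primaryKey || pkNames.includes(c.name),
    nullable: c.nullable && !pkNames.includes(c.name), // PK implies NOT NULL
    unique: c.unique || uniqueNames.includes(c.name),
  }));
}
```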
Common Data Type Patterns
The converter preserves the full type string including precision:
| SQL Type | JSON type value |
|---|---|
| INT | "INT" |
| VARCHAR(255) | "VARCHAR(255)" |
| DECIMAL(10,2) | "DECIMAL(10,2)" |
| CHAR(36) | "CHAR(36)" |
| BIGINT UNSIGNED | "BIGINT" |
| TEXT | "TEXT" |
| BOOLEAN | "BOOLEAN" |
| TIMESTAMP | "TIMESTAMP" |
| JSON | "JSON" |
Frequently Asked Questions
What happens to table-level PRIMARY KEY declarations?
The tool parses PRIMARY KEY (col1, col2) at the end of a CREATE TABLE body and marks each listed column with "primaryKey": true and "nullable": false, exactly as if each column had an inline PRIMARY KEY keyword.
Are composite primary keys supported?
Yes. PRIMARY KEY (order_id, product_id) marks both order_id and product_id as "primaryKey": true. The JSON output reflects this on each individual column; there is no separate composite key array, keeping the schema flat and easy to iterate.
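If you do want the composite key as a list, it is a one-liner over the flat output (note the order follows column declaration order, which may differ from the order inside the PRIMARY KEY (...) clause):

```typescript
// Recover a composite key as an ordered list of column names from the
// flat per-column primaryKey flags.
const compositeKey = (cols: { name: string; primaryKey: boolean }[]): string[] =>
  cols.filter((c) => c.primaryKey).map((c) => c.name);
```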
Does the converter handle SQL comments?
Yes. Both single-line comments (-- comment) and block comments (/* comment */) are stripped before parsing. You can paste directly from a schema file that includes comments.
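The stripping step can be sketched in two regex passes (a simplification: a robust parser must also leave comment markers inside string literals alone, e.g. a DEFAULT value containing `--`):

```typescript
// Sketch: remove /* ... */ block comments and -- line comments from a
// SQL string before parsing.
function stripComments(sql: string): string {
  return sql
    .replace(/\/\*[\s\S]*?\*\//g, "") // block comments (non-greedy, spans lines)
    .replace(/--[^\n]*/g, "");        // line comments up to end of line
}
```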
What about AUTO_INCREMENT and SERIAL?
These are recognized and silently skipped — they describe generation behaviour, not a structural constraint that maps cleanly to JSON. The column will have "nullable": false if combined with NOT NULL or PRIMARY KEY, as expected.
What if my SQL uses backtick, bracket, or double-quote identifiers?
All three quoting styles are supported. `id`, "id", and [id] all parse to the unquoted name id in the JSON output.
Is my SQL sent to a server?
No. All parsing runs entirely in your browser using JavaScript. Your SQL is never transmitted anywhere, stored, or logged. The tool works offline once the page has loaded.
What does nullable: true mean when no constraint is specified?
In SQL, columns default to nullable unless NOT NULL or PRIMARY KEY is explicitly declared. The converter follows this rule: any column without either constraint gets "nullable": true.
Can I use the JSON output with JSON Schema validators?
The output is not JSON Schema (draft-07 / draft-2020-12) format — it is a simpler, flat representation. You can use it as source data to generate a proper JSON Schema by mapping each column’s type to a JSON Schema type and adding required based on nullable: false.
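That adapter is small. Here is a sketch of one possible mapping to a draft-07 document, using the flat column shape this tool emits; the `toJsonSchema` helper and its SQL-to-JSON-Schema type table are illustrative assumptions, not part of the tool:

```typescript
// Sketch: convert one table from this tool's flat output into a
// draft-07 JSON Schema object.
type Column = { name: string; type: string; nullable: boolean };

function toJsonSchema(tableName: string, columns: Column[]) {
  const jsonType = (sql: string): string => {
    const base = sql.replace(/\(.*\)/, "").toUpperCase();
    if (["INT", "BIGINT", "SMALLINT"].includes(base)) return "integer";
    if (["DECIMAL", "NUMERIC", "FLOAT", "DOUBLE"].includes(base)) return "number";
    if (base === "BOOLEAN") return "boolean";
    return "string"; // VARCHAR, CHAR, TEXT, TIMESTAMP, JSON, ...
  };
  const properties: Record<string, object> = {};
  for (const c of columns) {
    const prop: Record<string, unknown> = { type: jsonType(c.type) };
    // Carry VARCHAR(n) / CHAR(n) precision over as maxLength.
    const len = c.type.match(/^(?:VARCHAR|CHAR)\((\d+)\)$/i);
    if (len) prop.maxLength = Number(len[1]);
    properties[c.name] = prop;
  }
  return {
    $schema: "http://json-schema.org/draft-07/schema#",
    title: tableName,
    type: "object",
    properties,
    // required = every column that is NOT NULL.
    required: columns.filter((c) => !c.nullable).map((c) => c.name),
  };
}
```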