JSON to SQL INSERT Generator
Paste a JSON array, get ready-to-run SQL INSERT statements — batch mode, CREATE TABLE, NULL handling, MySQL/PostgreSQL/SQL Server quoting
You exported 500 records from an API as a JSON array and need to load them into a PostgreSQL table. Writing INSERT INTO users (id, name, email, created_at) VALUES (1, 'Alice', 'alice@example.com', '2024-03-15T10:30:00Z') for 500 rows — with proper quoting, NULL handling, and dialect-specific escaping — is not a task you should be doing by hand.
Why This Generator (Not a JSON Viewer)
PureDevTools has tools for viewing JSON, formatting JSON, and converting JSON to CSV, TypeScript, Go, Python, and Rust. This tool generates SQL INSERT statements — with MySQL, PostgreSQL, SQL Server, and SQLite quoting conventions, multi-row batch inserts, CREATE TABLE inference, NULL handling, and ISO date detection. Everything runs in your browser; no data is sent anywhere.
What Is a JSON to SQL INSERT Generator?
When migrating data, seeding a database, or converting API responses into persistent records, you need SQL INSERT statements that match your table schema. Writing them by hand is error-prone, especially for large JSON datasets with many columns or nested values.
A JSON to SQL INSERT generator reads your JSON array and automatically produces correct INSERT INTO statements, handling:
- All JSON primitives (string, number, boolean, null)
- Missing keys (treated as NULL or skipped, depending on your null-handling setting)
- Nested objects and arrays (serialized as JSON strings in TEXT columns)
- ISO 8601 date strings with proper SQL quoting
- Multi-row batch inserts for high-performance bulk loading
- CREATE TABLE with column types inferred from all values
This tool runs entirely in your browser — your JSON data never leaves your device.
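The core conversion can be sketched in a few lines of Python. This is a simplified illustration, not the tool's actual implementation: it assumes MySQL backtick quoting, takes the union of keys across all rows, and treats missing keys as NULL.

```python
import json

def json_to_insert(table, rows):
    """Build one multi-row INSERT from a list of JSON objects (sketch)."""
    # Union of keys across all rows, preserving first-seen order
    cols = []
    for row in rows:
        for k in row:
            if k not in cols:
                cols.append(k)

    def literal(v):
        if v is None:
            return "NULL"
        if isinstance(v, bool):          # check bool before int
            return "1" if v else "0"
        if isinstance(v, (int, float)):
            return str(v)
        if isinstance(v, (dict, list)):  # nested values become JSON strings
            v = json.dumps(v)
        return "'" + str(v).replace("'", "''") + "'"

    col_list = ", ".join(f"`{c}`" for c in cols)
    value_rows = ",\n".join(
        "(" + ", ".join(literal(row.get(c)) for c in cols) + ")"
        for row in rows
    )
    return f"INSERT INTO `{table}` ({col_list}) VALUES\n{value_rows};"

print(json_to_insert("users", [
    {"id": 1, "name": "Alice", "email": "alice@example.com"},
    {"id": 2, "name": "Bob", "email": None},
]))
```

The missing-key handling is what makes a generator like this safer than hand-written statements: every row is rendered against the same column list, so a sparse record cannot silently shift values into the wrong columns.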
How to Use This Tool
- Paste your JSON array in the JSON Input textarea (or click Load sample to try an example)
- Set the Table name to match your target table
- Choose the Quote style that matches your SQL dialect (backtick for MySQL, double-quote for PostgreSQL)
- Select INSERT mode: multi-row (batch) or single row per INSERT
- Configure NULL handling and optionally enable CREATE TABLE
- The SQL output updates instantly in the SQL Output panel
- Click Copy to copy to clipboard, or Download .sql to save as a file
Understanding the Generated SQL
Multi-Row INSERT (Batch Mode)
By default, the tool generates a single INSERT INTO statement with multiple VALUES rows:
INSERT INTO `users` (`id`, `name`, `email`) VALUES
(1, 'Alice', 'alice@example.com'),
(2, 'Bob', 'bob@example.com'),
(3, 'Carol', 'carol@example.com');
This is significantly faster than individual inserts because:
- One round-trip to the database instead of N
- Better transaction log efficiency
- Many databases optimize bulk inserts specially
Use the Rows per batch setting to split large datasets into chunks. Set to 0 to put all rows in a single statement. A value of 500–1000 rows per batch is generally safe for most MySQL and PostgreSQL configurations.
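The batching rule above can be sketched as a small chunking helper (`batch` is a hypothetical name; as in the tool, a size of 0 puts all rows into a single statement):

```python
def batch(rows, size):
    """Split rows into chunks of at most `size`; size 0 means one batch."""
    if size <= 0:
        return [rows]
    return [rows[i:i + size] for i in range(0, len(rows), size)]

# 2500 rows with a batch size of 1000 produce three INSERT statements
print([len(chunk) for chunk in batch(list(range(2500)), 1000)])
```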
Single-Row INSERT Mode
When single-row mode is selected, each row becomes its own statement:
INSERT INTO `users` (`id`, `name`, `email`) VALUES (1, 'Alice', 'alice@example.com');
INSERT INTO `users` (`id`, `name`, `email`) VALUES (2, 'Bob', 'bob@example.com');
INSERT INTO `users` (`id`, `name`, `email`) VALUES (3, 'Carol', 'carol@example.com');
Use single-row mode when you need:
- Row-by-row error handling in a script
- Compatibility with tools that process one statement at a time
- Easier diffing of generated SQL between runs
Identifier Quote Styles
Different SQL databases use different syntax to quote identifiers (table names, column names):
| Style | Example | Use with |
|---|---|---|
| Backtick ` | `column_name` | MySQL, MariaDB |
| Double quote " | "column_name" | PostgreSQL, SQLite, standard SQL |
| Square bracket [ ] | [column_name] | SQL Server (T-SQL), MS Access |
| None (bare) | column_name | Simple names with no reserved words |
Quoting identifiers is important when column names conflict with SQL reserved words (e.g., order, select, group, name, key) or contain special characters.
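A minimal sketch of dialect-aware identifier quoting (the style names here are illustrative, not the tool's option values). Each dialect escapes an embedded quote character by doubling it:

```python
def quote_ident(name, style):
    """Quote a table or column name for a given SQL dialect style."""
    if style == "mysql":        # backticks: MySQL, MariaDB
        return "`" + name.replace("`", "``") + "`"
    if style == "standard":     # double quotes: PostgreSQL, SQLite, ANSI SQL
        return '"' + name.replace('"', '""') + '"'
    if style == "tsql":         # square brackets: SQL Server, MS Access
        return "[" + name.replace("]", "]]") + "]"
    return name                  # bare: no quoting

print(quote_ident("order", "standard"))  # reserved word, safely quoted
```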
NULL Handling Options
Write NULL (default)
Missing or null values become the SQL NULL keyword:
INSERT INTO `users` (`name`, `bio`) VALUES ('Alice', NULL);
Use this when your column allows NULL and you want to explicitly represent missing data.
Write Empty String
Null values become '' (empty string):
INSERT INTO `users` (`name`, `bio`) VALUES ('Alice', '');
Use this for NOT NULL TEXT columns where you want a placeholder rather than a constraint violation.
Skip Column if Null
Columns with null values are omitted entirely from the INSERT:
INSERT INTO `users` (`name`) VALUES ('Alice');
-- bio column omitted, database default applies
Use this when your table has DEFAULT values you want to apply for missing fields. Note: in this mode each row may have a different column list, so multi-row INSERT falls back to single-row statements.
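The three modes can be compared side by side with a small sketch (`render_row` is a hypothetical helper; it assumes string and integer values only):

```python
def render_row(row, cols, null_mode):
    """Render one row's column list and values under a null-handling mode.

    Modes: 'null' writes NULL, 'empty' writes '', 'skip' drops the column.
    """
    out_cols, out_vals = [], []
    for c in cols:
        v = row.get(c)
        if v is None:
            if null_mode == "skip":
                continue            # column omitted; database default applies
            out_vals.append("NULL" if null_mode == "null" else "''")
        else:
            out_vals.append(str(v) if isinstance(v, int) else f"'{v}'")
        out_cols.append(c)
    return f"({', '.join(out_cols)}) VALUES ({', '.join(out_vals)})"

row = {"name": "Alice", "bio": None}
for mode in ("null", "empty", "skip"):
    print(mode, "->", render_row(row, ["name", "bio"], mode))
```

Note how in skip mode the column list itself changes per row, which is why the tool falls back to single-row statements there.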
CREATE TABLE Type Inference
When Include CREATE TABLE is checked, the tool prepends a CREATE TABLE IF NOT EXISTS statement with inferred column types:
CREATE TABLE IF NOT EXISTS `users` (
`id` INTEGER,
`name` TEXT,
`email` TEXT,
`age` INTEGER,
`is_active` INTEGER,
`registered_at` DATETIME,
`score` REAL,
`bio` TEXT
);
Type inference rules:
| JSON value types in column | SQL type |
|---|---|
| All integers | INTEGER |
| All booleans | INTEGER (0/1) |
| Any floating-point number | REAL |
| ISO 8601 date strings only | DATETIME |
| Any strings (non-date) | TEXT |
| Objects or arrays | TEXT (JSON string) |
| All null | NULL |
These types follow the SQLite type affinity model and are broadly compatible with MySQL and PostgreSQL (which will implicitly cast them to their native types).
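The inference rules in the table can be sketched as a single function that inspects all of a column's values (the ISO 8601 regex here is simplified; the tool's own date detection may differ):

```python
import re

# Simplified ISO 8601 check: date, optionally followed by a time part
ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}([T ]\d{2}:\d{2}:\d{2}.*)?$")

def infer_type(values):
    """Infer a SQLite-affinity column type from all of a column's JSON values."""
    non_null = [v for v in values if v is not None]
    if not non_null:
        return "NULL"                       # column was null in every row
    if any(isinstance(v, (dict, list)) for v in non_null):
        return "TEXT"                       # nested values serialize to JSON text
    if any(isinstance(v, str) for v in non_null):
        if all(isinstance(v, str) and ISO_DATE.match(v) for v in non_null):
            return "DATETIME"
        return "TEXT"
    if any(isinstance(v, float) for v in non_null):
        return "REAL"
    return "INTEGER"                        # only ints and booleans remain

print(infer_type(["2024-03-15T10:30:00Z", "2024-01-15"]))  # DATETIME
```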
Value Encoding Rules
| JSON value | SQL literal |
|---|---|
| null | NULL (or '' or omitted; see NULL handling) |
| true | 1 |
| false | 0 |
| Integer number | Literal number, e.g., 42 |
| Float number | Literal number, e.g., 3.14 |
| Infinity, NaN | NULL (not representable in SQL) |
| String | Single-quoted, e.g., 'hello' |
| String with single quote | Escaped by doubling: 'it''s fine' |
| String with backslash | Escaped: '\\' |
| ISO 8601 date string | Single-quoted: '2024-01-15' |
| Object or array | JSON-serialized, single-quoted: '{"key":"val"}' |
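The encoding table maps naturally onto one dispatch function. This sketch covers every row except backslash doubling, which is a MySQL-specific escaping rule omitted here for brevity:

```python
import json
import math

def sql_literal(value):
    """Encode one JSON value as a SQL literal (sketch of the rules above)."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):              # must precede the int check
        return "1" if value else "0"
    if isinstance(value, float) and not math.isfinite(value):
        return "NULL"                        # SQL has no Infinity/NaN literals
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, (dict, list)):
        value = json.dumps(value, separators=(",", ":"))
    return "'" + str(value).replace("'", "''") + "'"

print(sql_literal("it's fine"))      # single quote doubled
print(sql_literal({"key": "val"}))   # JSON-serialized, then quoted
```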
Performance Tips for Large Datasets
- Use multi-row INSERT with a batch size of 500–1000 rows for the best throughput
- In MySQL, wrap your INSERT block with SET autocommit = 0; ... COMMIT; to batch the transaction
- In PostgreSQL, use COPY for very large datasets (> 100,000 rows); the SQL output from this tool works well for seeding smaller tables
- Some MySQL configurations have a max_allowed_packet limit (default 64MB); adjust batch size to stay under it
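The transaction-wrapping advice can be tried end to end with an in-memory SQLite database (used purely for illustration; the table and script here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

# A generated multi-row INSERT, run inside one explicit transaction
script = """
BEGIN;
INSERT INTO users (id, name) VALUES (1, 'Alice'), (2, 'Bob');
COMMIT;
"""
conn.executescript(script)
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])
```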
Common Use Cases
Database seeding: Convert fixture JSON files from a test framework into SQL seed scripts for a staging environment.
Data migration: Export data from a REST API (JSON) and import it into a relational database without writing custom ETL code.
Schema prototyping: Use the CREATE TABLE option to quickly scaffold a table structure from a sample API response.
Cross-database replication: Export records from one database as JSON (via SELECT ... FOR JSON in SQL Server or json_agg in PostgreSQL) and re-import them into another.
CSV → JSON → SQL pipeline: Combine the CSV to JSON converter tool to first parse your CSV file, then use this tool to generate INSERT statements.
Security Note
All string values are sanitized by doubling single quotes before insertion. However, you should always review generated SQL before running it against a production database:
- Verify column names match your actual table schema
- Check that data types are correct for your target database
- Use parameterized queries in application code rather than raw string interpolation
This tool is intended for development, migration, and seeding workflows — not for generating SQL to be executed from untrusted user input.