As JSON grew in popularity, developers started to face a problem: how to make sure that objects serialised in this text format by one application can be properly de-serialised by any other application. Different software systems are built with very different tools and programming languages, so parser implementations and data transformations can vary significantly between them. The problem is not new; it has been apparent with all text-based data formats, XML for example. The solution for XML was to introduce a special vocabulary of tags with standardised meaning that describes what a valid XML document should look like. It is called the XML Schema (or XSD).
The same approach was introduced for JSON with the JSON Schema specification (json-schema.org). A JSON schema is a meta-model of the text format: using the same notation, it describes the structure of the exchanged object's data and the restrictions imposed on the attributes' values (strings, date-time formats, numeric formats and ranges). A JSON document is considered "valid" only when it fully complies with the rules defined in the schema.
Meta-model and data
The convenience of using the same text markup notation for the schema as for the JSON data makes it possible to handle both with the same software parsing tools and libraries.
Here is an example of what a JSON schema looks like (based on the canonical "Person" example from the json-schema.org documentation, from which the `description` fields below are taken):

```json
{
  "$id": "https://example.com/person.schema.json",
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Person",
  "type": "object",
  "properties": {
    "firstName": {
      "type": "string",
      "description": "The person's first name."
    },
    "lastName": {
      "type": "string",
      "description": "The person's last name."
    },
    "age": {
      "description": "Age in years which must be equal to or greater than zero.",
      "type": "integer",
      "minimum": 0
    }
  }
}
```
And here is the actual exchanged data:
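An instance that satisfies such a schema is far shorter than the schema itself; the concrete values below are illustrative:

```json
{
  "firstName": "John",
  "lastName": "Doe",
  "age": 21
}
```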
It is evident that the schema is more verbose than the actual JSON data, but the purpose of the schema is to give the data its proper structure and semantic meaning…
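To make the validation idea concrete, here is a deliberately tiny sketch of a validator in Python. It handles only the keywords used in the example above (`type`, `properties`, `minimum`) and is not part of any specification; the function name `validate` and its error-list return value are our own choices for illustration. Real applications should use a complete implementation, such as the Python `jsonschema` package.

```python
import json

def validate(instance, schema):
    """Return a list of violation messages (an empty list means valid).

    Illustrative sketch: supports only the "type", "properties"
    and "minimum" keywords from JSON Schema.
    """
    errors = []
    type_map = {"object": dict, "string": str, "integer": int}
    expected = schema.get("type")
    if expected in type_map and not isinstance(instance, type_map[expected]):
        # Wrong type: report and stop, deeper checks make no sense.
        errors.append(f"expected {expected}, got {type(instance).__name__}")
        return errors
    if "minimum" in schema and isinstance(instance, int) and instance < schema["minimum"]:
        errors.append(f"{instance} is below the minimum {schema['minimum']}")
    # Recurse into declared properties that are present in the instance.
    for name, subschema in schema.get("properties", {}).items():
        if isinstance(instance, dict) and name in instance:
            errors.extend(validate(instance[name], subschema))
    return errors

# The schema is itself parsed with the ordinary JSON parser,
# which is exactly the convenience the text describes.
schema = json.loads("""
{
  "type": "object",
  "properties": {
    "firstName": {"type": "string"},
    "lastName":  {"type": "string"},
    "age":       {"type": "integer", "minimum": 0}
  }
}
""")

print(validate({"firstName": "John", "lastName": "Doe", "age": 21}, schema))
print(validate({"firstName": "John", "age": -3}, schema))
```

The first call returns an empty list (the instance is valid); the second reports that `-3` violates the `minimum` constraint on `age`.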