This specification defines a set of algorithms for programmatic transformations of JSON-LD documents. Restructuring data according to the defined transformations often dramatically simplifies its usage. Furthermore, this document proposes an Application Programming Interface (API) for developers implementing the specified algorithms.
This document has been developed by the JSON for Linking Data W3C Community Group as an update to the 1.0 recommendation [[JSON-LD-API]] developed by the RDF Working Group. The specification has undergone significant development, review, and changes during the course of several years.
There are several independent interoperable implementations of this specification, a test suite [[JSON-LD-TESTS]] and a live JSON-LD playground that is capable of demonstrating the features described in this document.
This document is one of three JSON-LD 1.1 Recommendations produced by the JSON for Linking Data W3C Community Group:
This document is a detailed specification of the JSON-LD processing algorithms. The document is primarily intended for the following audiences:
To understand the basics in this specification you must first be familiar with JSON, which is detailed in [[!RFC7159]]. You must also understand the JSON-LD syntax defined in the JSON-LD 1.1 Syntax specification [[!JSON-LD11CG]], which is the base syntax used by all of the algorithms in this document. To understand the API and how it is intended to operate in a programming environment, it is useful to have working knowledge of the JavaScript programming language [[ECMASCRIPT-6.0]] and WebIDL [[WEBIDL]]. To understand how JSON-LD maps to RDF, it is helpful to be familiar with the basic RDF concepts [[RDF11-CONCEPTS]].
There are a number of ways that one may participate in the development of this specification:
This document uses the following terms as defined in JSON [[!RFC7159]]. Refer to the JSON Grammar section in [[!RFC7159]] for formal definitions.
The following terms are used within specific algorithms.
Note that in the examples used in this document, output is of necessity shown in serialized form as JSON. While the algorithms describe operations on the JSON-LD internal representation, when they are displayed as examples, the JSON serialization is used. In particular, dictionaries in the internal representation are represented using JSON objects.
In the internal representation, the example above would be a dictionary containing @context, @id, name, and knows keys, with values that are dictionaries, strings, or arrays of dictionaries or strings. In the JSON serialization, JSON objects are used for dictionaries, while arrays and strings are serialized using a convention common to many programming languages.
The JSON-LD 1.1 Syntax specification [[!JSON-LD11CG]] defines a syntax to express Linked Data in JSON. Because there is more than one way to express Linked Data using this syntax, it is often useful to be able to transform JSON-LD documents so that they may be more easily consumed by specific applications.
To allow these algorithms to be adapted for syntaxes other than JSON, the algorithms operate on the JSON-LD internal representation, which uses the generic concepts of arrays, dictionaries, strings, numbers, booleans, and null to describe the data represented by a JSON document. Algorithms act on this internal representation with API entry points responsible for transforming between the concrete and internal representations.
JSON-LD uses contexts to allow Linked Data to be expressed in a way that is specifically tailored to a particular person or application. By providing a context, JSON data can be expressed in a way that is a natural fit for a particular person or application whilst also indicating how the data should be understood at a global scale. In order for people or applications to share data that was created using a context that is different from their own, a JSON-LD processor must be able to transform a document from one context to another. Instead of requiring JSON-LD processors to write specific code for every imaginable context switching scenario, it is much easier to specify a single algorithm that can remove any context. Similarly, another algorithm can be specified to subsequently apply any context. These two algorithms represent the most basic transformations of JSON-LD documents. They are referred to as expansion and compaction, respectively.
JSON-LD 1.1 introduces new features that are compatible with JSON-LD 1.0 [[!JSON-LD]], but which may produce different results if processed by a JSON-LD 1.0 processor. In order to detect this, JSON-LD 1.1 requires that the processing mode be explicitly set to json-ld-1.1, either through the processingMode API option or by using the @version member within a context.
There are four major types of transformation that are discussed in this document: expansion, compaction, flattening, and RDF serialization/deserialization.
The algorithm that removes context is called expansion. Before performing any other transformations on a JSON-LD document, it is easiest to remove any context from it and to make data structures more regular.
To get an idea of how context and data structuring affects the same data, here is an example of JSON-LD that uses only terms and is fairly compact:
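The specification's original example is not reproduced here; the following is an illustrative sketch with invented names and IRIs:

{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"}
  },
  "@id": "http://me.example.org/",
  "name": "Jane Doe",
  "homepage": "http://jane.example.org/"
}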
The next input example uses one IRI to express a property and an array to encapsulate another, but leaves the rest of the information untouched.
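Continuing the illustrative sketch, the same information can be written as:

{
  "@context": {
    "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"}
  },
  "@id": "http://me.example.org/",
  "http://xmlns.com/foaf/0.1/name": "Jane Doe",
  "homepage": ["http://jane.example.org/"]
}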
Note that both inputs are valid JSON-LD and both represent the same information. The difference is in their context information and in the data structures used. A JSON-LD processor can remove context and ensure that the data is more regular by employing expansion.
Expansion has two important goals: removing any contextual information from the document, and ensuring all values are represented in a regular form. These goals are accomplished by expanding all properties to absolute IRIs and by expressing all values in arrays in expanded form. Expanded form is the most verbose and regular way of expressing values in JSON-LD; all contextual information from the document is instead stored locally with each value. Running the Expansion algorithm (expand operation) against the above examples results in the following output:
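For the two illustrative documents above, the expanded output would look roughly as follows (a sketch; member ordering is not significant):

[
  {
    "@id": "http://me.example.org/",
    "http://xmlns.com/foaf/0.1/name": [
      {"@value": "Jane Doe"}
    ],
    "http://xmlns.com/foaf/0.1/homepage": [
      {"@id": "http://jane.example.org/"}
    ]
  }
]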
The example above is the JSON-LD serialization of the output of the expansion algorithm, where dictionaries used by the algorithm are represented as JSON objects.
Note that in the output above all context definitions have been removed, all terms and compact IRIs have been expanded to absolute IRIs, and all JSON-LD values are expressed in arrays in expanded form. While the output is more verbose and difficult for a human to read, it establishes a baseline that makes JSON-LD processing easier because of its very regular structure.
While expansion removes context from a given input, compaction's primary function is to perform the opposite operation: to express a given input according to a particular context. Compaction applies a context that specifically tailors the way information is expressed for a particular person or application. This simplifies applications that consume JSON or JSON-LD by expressing the data in application-specific terms, and it makes the data easier to read by humans.
Compaction uses a developer-supplied context to shorten IRIs to terms or compact IRIs and JSON-LD values expressed in expanded form to simple values such as strings or numbers.
For example, assume the following expanded JSON-LD input document:
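As an illustrative sketch (reusing the invented data from the expansion example):

[
  {
    "@id": "http://me.example.org/",
    "http://xmlns.com/foaf/0.1/name": [
      {"@value": "Jane Doe"}
    ],
    "http://xmlns.com/foaf/0.1/homepage": [
      {"@id": "http://jane.example.org/"}
    ]
  }
]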
Additionally, assume the following developer-supplied JSON-LD context:
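For instance, a context such as the following (again, an illustrative sketch):

{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"}
  }
}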
Running the Compaction algorithm (compact operation) given the context supplied above against the JSON-LD input document provided above would result in the following output:
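Under the assumptions above, the compacted result would be roughly:

{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"}
  },
  "@id": "http://me.example.org/",
  "name": "Jane Doe",
  "homepage": "http://jane.example.org/"
}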
The example above is the JSON-LD serialization of the output of the compaction algorithm, where dictionaries used by the algorithm are represented as JSON objects.
Note that all IRIs have been compacted to terms as specified in the context, which has been injected into the output. While compacted output is useful to humans, it is also used to generate structures that are easy to program against. Compaction enables developers to map any expanded document into an application-specific compacted document. While the context provided above mapped http://xmlns.com/foaf/0.1/name to name, it could also have been mapped to any other term provided by the developer.
While expansion ensures that a document is in a uniform structure, flattening goes a step further to ensure that the shape of the data is deterministic. In expanded documents, the properties of a single node may be spread across a number of different dictionaries. By flattening a document, all properties of a node are collected in a single dictionary and all blank nodes are labeled with a blank node identifier. This may drastically simplify the code required to process JSON-LD data in certain applications.
For example, assume the following JSON-LD input document:
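As an illustrative sketch (invented data; the embedded person is a blank node because it has no @id):

{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "knows": "http://xmlns.com/foaf/0.1/knows"
  },
  "@id": "http://me.example.org/",
  "name": "Jane Doe",
  "knows": {"name": "Dave Longley"}
}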
Running the Flattening algorithm (flatten operation) with a context set to null to prevent compaction returns the following document:
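For the sketch above, the flattened document would look roughly as follows; the blank node label _:t0 matches the prose below, although an implementation using the Generate Blank Node Identifier algorithm may emit a different label such as _:b0:

[
  {
    "@id": "_:t0",
    "http://xmlns.com/foaf/0.1/name": [
      {"@value": "Dave Longley"}
    ]
  },
  {
    "@id": "http://me.example.org/",
    "http://xmlns.com/foaf/0.1/knows": [
      {"@id": "_:t0"}
    ],
    "http://xmlns.com/foaf/0.1/name": [
      {"@value": "Jane Doe"}
    ]
  }
]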
The example above is the JSON-LD serialization of the output of the flattening algorithm, where dictionaries used by the algorithm are represented as JSON objects.
Note how in the output above all properties of a node are collected in a single dictionary and how the blank node representing "Dave Longley" has been assigned the blank node identifier _:t0.
To make it easier for humans to read or for certain applications to process it, a flattened document can be compacted by passing a context. Using the same context as the input document, the flattened and compacted document looks as follows:
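Continuing the sketch, the flattened and compacted document would be roughly:

{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "knows": "http://xmlns.com/foaf/0.1/knows"
  },
  "@graph": [
    {
      "@id": "_:t0",
      "name": "Dave Longley"
    },
    {
      "@id": "http://me.example.org/",
      "name": "Jane Doe",
      "knows": {"@id": "_:t0"}
    }
  ]
}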
Please note that the result of flattening and compacting a document is always a dictionary (represented as a JSON object when serialized), which contains an @graph member that represents the default graph.
JSON-LD can be used to serialize RDF data as described in [[RDF11-CONCEPTS]]. This ensures that data can be round-tripped to and from any RDF syntax without any loss in fidelity.
For example, assume the following RDF input serialized in Turtle [[TURTLE]]:
Using the Serialize RDF as JSON-LD algorithm a developer could transform this document into expanded JSON-LD:
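Assuming, purely for illustration, Turtle input consisting of the single triple <http://me.example.org/> <http://xmlns.com/foaf/0.1/name> "Jane Doe" ., the expanded JSON-LD produced would be roughly:

[
  {
    "@id": "http://me.example.org/",
    "http://xmlns.com/foaf/0.1/name": [
      {"@value": "Jane Doe"}
    ]
  }
]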
The example above is the JSON-LD serialization of the output of the Serialize RDF as JSON-LD algorithm, where dictionaries used by the algorithm are represented as JSON objects.
Note that the output above could easily be compacted using the technique outlined in the previous section. It is also possible to deserialize the JSON-LD document back to RDF using the Deserialize JSON-LD to RDF algorithm.
There are two classes of products that can claim conformance to this specification: JSON-LD Processors, and RDF Serializers/Deserializers.
A conforming JSON-LD Processor is a system which can perform the Expansion, Compaction, and Flattening operations in a manner consistent with the algorithms defined in this specification.
JSON-LD Processors MUST NOT attempt to correct malformed IRIs or language tags; however, they MAY issue validation warnings. IRIs are not modified other than conversion between relative and absolute IRIs.
A conforming RDF Serializer/Deserializer is a system that can deserialize JSON-LD to RDF and serialize RDF as JSON-LD as defined in this specification.
The algorithms in this specification are generally written with more concern for clarity than efficiency. Thus, JSON-LD Processors may implement the algorithms given in this specification in any way desired, so long as the end result is indistinguishable from the result that would be obtained by the specification's algorithms.
In algorithm steps that describe operations on keywords, those steps also apply to keyword aliases.
Implementers can partially check their level of conformance to this specification by successfully passing the test cases of the JSON-LD test suite [[JSON-LD-TESTS]]. Note, however, that passing all the tests in the test suite does not imply complete conformance to this specification. It only implies that the implementation conforms to aspects tested by the test suite.
When processing a JSON-LD data structure, each processing rule is applied using information provided by the active context. This section describes how to produce an active context.
The active context contains the active term definitions which specify how properties and values have to be interpreted as well as the current base IRI, the vocabulary mapping and the default language. Each term definition consists of an IRI mapping, a boolean flag reverse property, an optional type mapping or language mapping, an optional context, an optional nest value, an optional prefix flag, and an optional container mapping. A term definition can not only be used to map a term to an IRI, but also to map a term to a keyword, in which case it is referred to as a keyword alias.
When processing, active context is initialized without any term definitions, vocabulary mapping, or default language. If a local context is encountered during processing, a new active context is created by cloning the existing active context. Then the information from the local context is merged into the new active context. Given that local contexts may contain references to remote contexts, this includes their retrieval.
First we prepare a new active context result by cloning the current active context. Then we normalize the form of the original local context to an array. Local contexts may be in the form of a dictionary, a string, or an array containing a combination of the two. Finally we process each context contained in the local context array as follows.
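For instance, a local context may combine a remote context reference with an embedded dictionary, as in the following sketch (the URL is hypothetical):

{
  "@context": [
    "https://example.org/contexts/person.jsonld",
    {"name": "http://xmlns.com/foaf/0.1/name"}
  ],
  "name": "Jane Doe"
}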
Unless specified using the processingMode API option, the processing mode is set using the @version member in a local context and affects the behavior of algorithms including expansion and compaction.
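For example, a context can opt into JSON-LD 1.1 processing as follows (an illustrative sketch):

{
  "@context": {
    "@version": 1.1,
    "name": "http://xmlns.com/foaf/0.1/name"
  },
  "name": "Jane Doe"
}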
If context is a string, it represents a reference to a remote context. We dereference the remote context and replace context with the value of the @context key of the top-level object in the retrieved JSON-LD document. If there is no such key, an invalid remote context has been detected. Otherwise, we process context by recursively using this algorithm, ensuring that there is no cyclical reference.
If context is a dictionary, we first update the base IRI, the vocabulary mapping, the processing mode, and the default language by processing four specific keywords: @base, @vocab, @version, and @language. These are handled before any other keys in the local context because they affect how the other keys are processed. Please note that @base is ignored when processing remote contexts.
Then, for every other key in local context, we update the term definition in result. Since term definitions in a local context may themselves contain terms or compact IRIs, we may need to recurse. When doing so, we must ensure that there is no cyclical dependency, which is an error. After we have processed any term definition dependencies, we update the current term definition, which may be a keyword alias.
Finally, we return result as the new active context.
This algorithm specifies how a new active context is updated with a local context. The algorithm takes three input variables: an active context, a local context, and an array remote contexts which is used to detect cyclical context inclusions. If remote contexts is not passed, it is initialized to an empty array.
null
, set result to a
newly-initialized active context and continue with the
next context.
In JSON-LD 1.0, the base IRI was given
a default value here; this is now described conditionally
in .@context
member, an
invalid remote context
has been detected and processing is aborted; otherwise,
set context to the value of that member.@base
key and remote contexts is empty, i.e., the currently
being processed context is not a remote context:
@base
key.null
, remove the
base IRI of result.null
,
set the base IRI of result to the result of
resolving value against the current base IRI
of result.@version
key:
1.1
,
an invalid @version value
has been detected, and processing is aborted.json-ld-1.0
,
a processing mode conflict
error has been detected and processing is aborted.json-ld-1.1
, if not already set.@vocab
key:
@vocab
key.""
),
the effective value is the current base IRI.@language
key:
@language
key.null
, remove
any default language from result.@base
, @vocab
, or
@language
, invoke the
Create Term Definition algorithm,
passing result for active context,
context for local context, key,
and defined.
This algorithm is called from the Context Processing algorithm to create a term definition in the active context for a term being processed in a local context.
Term definitions are created by parsing the information in the given local context for the given term. If the given term is a compact IRI, it may omit an IRI mapping by depending on its prefix having its own term definition. If the prefix is a key in the local context, then its term definition must first be created, through recursion, before continuing. Because a term definition can depend on other term definitions, a mechanism must be used to detect cyclical dependencies. The solution employed here uses a map, defined, that keeps track of whether or not a term has been defined or is currently in the process of being defined. This map is checked before any recursion is attempted.
After all dependencies for a term have been defined, the rest of the information in the local context for the given term is taken into account, creating the appropriate IRI mapping, container mapping, and type mapping or language mapping for the term.
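For example, in the following context (an illustrative sketch), creating the term definition for foaf:homepage requires that the prefix foaf be defined first, so that the compact IRI can be resolved to its IRI mapping:

{
  "@context": {
    "foaf": "http://xmlns.com/foaf/0.1/",
    "foaf:homepage": {"@type": "@id"}
  }
}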
The algorithm has four required inputs which are: an active context, a local context, a term, and a map defined.
true
(indicating that the
term definition has already been created), return. Otherwise,
if the value is false
, a
cyclic IRI mapping
error has been detected and processing is aborted.false
. This indicates that the term definition
is now being created but is not yet complete.null
or value
is a dictionary containing the key-value pair
@id
-null
, set the
term definition in active context to
null
, set the value associated with defined's
key term to true
, and return.@id
and whose value is value.
Set simple term to true
.false
.@type
:
@type
key, which must be a string. Otherwise, an
invalid type mapping
error has been detected and processing is aborted.true
for vocab,
local context, and defined. If the expanded type is
neither @id
, nor @vocab
, nor an absolute IRI, an
invalid type mapping
error has been detected and processing is aborted.@reverse
:
@id
or @nest
, members, an
invalid reverse property
error has been detected and processing is aborted.@reverse
key
is not a string, an
invalid IRI mapping
error has been detected and processing is aborted.@reverse
key for value, true
for vocab,
local context, and defined. If the result
is neither an absolute IRI nor a blank node identifier,
i.e., it contains no colon (:
), an
invalid IRI mapping
error has been detected and processing is aborted.@container
member,
set the container mapping of definition
to its value; if its value is neither @set
, nor
@index
, nor null
, an
invalid reverse property
error has been detected (reverse properties only support set- and
index-containers) and processing is aborted.true
.true
and return.false
.@id
and its value
does not equal term:
@id
key is not a string, an
invalid IRI mapping
error has been detected and processing is aborted.@id
key for
value, true
for vocab,
local context, and defined. If the resulting
IRI mapping is neither a keyword, nor an
absolute IRI, nor a blank node identifier, an
invalid IRI mapping
error has been detected and processing is aborted; if it equals @context
, an
invalid keyword alias
error has been detected and processing is aborted.:
),
simple term is true
, and the,
IRI mapping of definition ends with a URI
gen-delim character,
set the prefix flag in definition to true
.:
):
@container
:
@container
key, which must be either
@graph
,
@id
,
@index
,
@language
,
@list
,
@set
, or
@type
.
or an array containing exactly any one of those
keywords, an array containing @graph
and
either @id
or @index
optionally
including @set
, or an array containing a
combination of @set
and any of
@index
, @id
, @type
,
@language
in any order
.
Otherwise, an
invalid container mapping
has been detected and processing is aborted.json-ld-1.0
and the container value
is @graph
, @id
, or @type
, or is otherwise not a string, an
invalid container mapping
has been detected and processing is aborted.@context
:
json-ld-1.0
, an
invalid term definition
has been detected and processing is aborted.@context
key, which is treated as a local context.@language
and
does not contain the key @type
:
@language
key, which must be either null
or a string. Otherwise, an
invalid language mapping
error has been detected and processing is aborted.@nest
:
json-ld-1.0
, an
invalid term definition
has been detected and processing is aborted.@nest
key, which must be a string and
must not be a keyword other than @nest
. Otherwise, an
invalid @nest value
error has been detected and processing is aborted.@prefix
:
json-ld-1.0
, or if
term contains a colon (:
), an
invalid term definition
has been detected and processing is aborted.@prefix
key, which must be a boolean. Otherwise, an
invalid @prefix value
error has been detected and processing is aborted.@id
,
@reverse
, @container
,
@context
, @nest
,
@prefix
, or @type
, an
invalid term definition error has
been detected and processing is aborted.true
.In JSON-LD documents, some keys and values may represent IRIs. This section defines an algorithm for transforming a string that represents an IRI into an absolute IRI or blank node identifier. It also covers transforming keyword aliases into keywords.
IRI expansion may occur during context processing or during any of the other JSON-LD algorithms. If IRI expansion occurs during context processing, then the local context and its related defined map from the Context Processing algorithm are passed to this algorithm. This allows for term definition dependencies to be processed via the Create Term Definition algorithm.
In order to expand value to an absolute IRI, we must first determine if it is null, a term, a keyword alias, or some form of IRI. Based on what we find, we handle the specific kind of expansion; for example, we expand a keyword alias to a keyword and a term to an absolute IRI according to its IRI mapping in the active context. While inspecting value we may also find that we need to create term definition dependencies because we're running this algorithm during context processing. We can tell whether or not we're running during context processing by checking local context against null.
We know we need to create a term definition in the active context when value is a key in the local context and the defined map does not have a key for value with an associated value of true. The defined map is used during Context Processing to keep track of which terms have already been defined or are in the process of being defined. We create a term definition by using the Create Term Definition algorithm.
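As an illustrative sketch, given the following context:

{
  "@context": {
    "foaf": "http://xmlns.com/foaf/0.1/",
    "name": "foaf:name",
    "id": "@id"
  }
}

the term name expands to http://xmlns.com/foaf/0.1/name, the compact IRI foaf:knows expands to http://xmlns.com/foaf/0.1/knows, the keyword alias id expands to @id, and a value that is already an absolute IRI, such as http://example.org/x, is returned unchanged.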
The algorithm takes two required and four optional input variables. The required inputs are an active context and a value to be expanded. The optional inputs are two flags, document relative and vocab, that specify whether value can be interpreted as a relative IRI against the document's base IRI or the active context's vocabulary mapping, respectively, and a local context and a map defined to be used when this algorithm is invoked during Context Processing. If not passed, the two flags are set to false and local context and defined are initialized to null.
null
,
return value as is.null
, it contains
a key that equals value, and the value associated with the key
that equals value in defined is not true
,
invoke the Create Term Definition algorithm,
passing active context, local context,
value as term, and defined. This will ensure that
a term definition is created for value in
active context during Context Processing.
true
and the
active context has a term definition for
value, return the associated IRI mapping.:
), it is either
an absolute IRI, a compact IRI, or a
blank node identifier:
:
)._
)
or suffix begins with double-forward-slash
(//
), return value as it is already an
absolute IRI or a blank node identifier.null
, it
contains a key that equals prefix, and the value
associated with the key that equals prefix in defined
is not true
, invoke the
Create Term Definition algorithm,
passing active context,
local context, prefix as term,
and defined. This will ensure that a
term definition is created for prefix
in active context during
Context Processing.true
, and
active context has a vocabulary mapping,
return the result of concatenating the vocabulary mapping
with value.true
set value to the result of resolving value against
the base IRI. Only the basic algorithm in
section 5.2
of [[!RFC3986]] is used; neither
Syntax-Based Normalization nor
Scheme-Based Normalization
are performed. Characters additionally allowed in IRI references are treated
in the same way that unreserved characters are treated in URI references, per
section 6.5
of [[!RFC3987]].This algorithm expands a JSON-LD document, such that all context definitions are removed, all terms and compact IRIs are expanded to absolute IRIs, blank node identifiers, or keywords and all JSON-LD values are expressed in arrays in expanded form.
Starting with its root element, we can process the JSON-LD document recursively, until we have a fully expanded result. When expanding an element, we can treat each one differently according to its type, in order to break down the problem:
If element is null, there is nothing to expand.
Finally, after ensuring result is in an array, we return result.
The algorithm takes three required and one optional input variables. The required inputs are an active context, an active property, and an element to be expanded. The optional input is the frame expansion flag, which allows special forms of input used for frame expansion. To begin, the active property is set to null, and element is set to the JSON-LD input. If not passed, the frame expansion flag is set to false.
The algorithm also performs processing steps specific to expanding a JSON-LD Frame. For a frame, the @id and @type properties can accept an array of IRIs or an empty dictionary. The properties of a value object can also accept an array of strings, or an empty dictionary. Framing also uses additional keyword properties (@explicit, @default, @embed, @omitDefault, and @requireAll) which are preserved through expansion. Special processing for a JSON-LD Frame is invoked when the frame expansion flag is set to true.
null
, return null
.@default
,
set the frame expansion flag to false
.null
or @graph
,
drop the free-floating scalar by returning null
.@list
or its
container mapping includes @list
, the
expanded item must not be an array or a
list object, otherwise a
list of lists
error has been detected and processing is aborted.@context
, set
active context to the result of the
Context Processing algorithm,
passing active context and the value of the
@context
key as local context.@type
using the
IRI Expansion algorithm,
passing active context, key for
value, and true
for vocab:
@context
, continue to
the next key.true
for vocab.null
or it neither
contains a colon (:
) nor it is a keyword,
drop key by continuing to the next key.@reverse
, an
invalid reverse property map
error has been detected and processing is aborted.@id
and
value is not a string, an
invalid @id value
error has been detected and processing is aborted. Otherwise,
set expanded value to the result of using the
IRI Expansion algorithm,
passing active context, value, and true
for document relative.
When the frame expansion flag is set, value
may be an empty dictionary, or an array of one
or more strings. expanded value will be
an array of one or more of these, with string
values expanded using the IRI Expansion Algorithm.@type
and value
is neither a string nor an array of
strings, an
invalid type value
error has been detected and processing is aborted. Otherwise,
set expanded value to the result of using the
IRI Expansion algorithm, passing
active context, true
for vocab,
and true
for document relative to expand the value
or each of its items.
When the frame expansion flag is set, value
may also be an empty dictionary.@graph
, set
expanded value to the result of using this algorithm
recursively passing active context, @graph
for active property, value for element,
and the frame expansion flag,
ensuring that expanded value is an array of one or more dictionaries.@value
and
value is not a scalar or null
, an
invalid value object value
error has been detected and processing is aborted. Otherwise,
set expanded value to value. If expanded value
is null
, set the @value
member of result to null
and continue with the
next key from element. Null values need to be preserved
in this case as the meaning of an @type
member depends
on the existence of an @value
member.
When the frame expansion flag is set, value
may also be an empty dictionary or an array of
scalar values. expanded value will be null, or an
array of one or more scalar values.@language
and
value is not a string, an
invalid language-tagged string
error has been detected and processing is aborted.
Otherwise, set expanded value to lowercased value.
When the frame expansion flag is set, value
may also be an empty dictionary or an array of zero or
strings. expanded value will be an
array of one or more string values converted to lower case.@index
and
value is not a string, an
invalid @index value
error has been detected and processing is aborted. Otherwise,
set expanded value to value.@list
:
null
or
@graph
, continue with the next key
from element to remove the free-floating list.@set
, set
expanded value to the result of using this algorithm
recursively, passing active context,
active property, value for element,
and the frame expansion flag.@reverse
and
value is not a dictionary, an
invalid @reverse value
error has been detected and processing is aborted. Otherwise
@reverse
as active property,
value as element,
and the frame expansion flag.@reverse
member,
i.e., properties that are reversed twice, execute for each of its
property and item the following steps:
@reverse
:
@reverse
member, create
one and set its value to an empty dictionary.@reverse
member in result
using the variable reverse map.@reverse
:
@nest
,
add key to nests, initializing it to an empty array,
if necessary.
Continue with the next key from element.@explicit
, @default
,
@embed
, @explicit
, @omitDefault
, or
@requireAll
),
set expanded value to the result of performing the
Expansion Algorithm
recursively, passing active context,
active property, value for element,
and the frame expansion flag.null
, set
the expanded property member of result to
expanded value.@language
and
value is a dictionary then value
is expanded from a language map
as follows:
null
,
otherwise an
invalid language map value
error has been detected and processing is aborted.@value
-item)
and (@language
-lowercased
language),
unless item is null
.
If language is @none
,
or expands to @none
, do not set the @language
member.
@index
,
@type
, or @id
and
value is a dictionary then value
is expanded from an map as follows:
@type
,
and index's term definition in
term context has a local context, set
map context to the result of the Context Processing
algorithm, passing term context as active context and the
value of the index's local context as
local context. Otherwise, set map context
to term context.true
for vocab.@graph
and if item is not a
graph object, set item to a new
dictionary containing the key-value pair
@graph
-item, ensuring that the
value is represented using an array.@index
and item does not have the key
@index
and expanded index is not @none
,
add the key-value pair
(@index
-index) to item.@id
and item does not have the key
@id
, add the key-value pair
(@id
-expanded index) to
item, where expanded index is set to the result of
using the
IRI Expansion algorithm,
passing active context, index, and true
for document relative, unless expanded index
is already set to @none
.@type
set types to the concatenation of
expanded index with any existing values of
@type
in item.
If expanded index is @none
,
do not concatenate expanded index to types.
Add the key-value pair
(@type
-types) to
item.null
, ignore key
by continuing to the next key from element.@list
and
expanded value is not already a list object,
convert expanded value to a list object
by first setting it to an array containing only
expanded value if it is not already an array,
and then by setting it to a dictionary containing
the key-value pair @list
-expanded value.@graph
, convert expanded value into an array, if necessary,
then convert each value ev in expanded value into a
graph object:
@graph
-ev
where ev is represented as an array.@reverse
member, create
one and initialize its value to an empty dictionary.@reverse
member in result
using the variable reverse map.@value
, an
invalid @nest value error
has been detected and processing is aborted.@value
:
@value
, @language
, @type
,
and @index
. It must not contain both the
@language
key and the @type
key.
Otherwise, an
invalid value object
error has been detected and processing is aborted.@value
key is
null
, then set result to null
.@value
member
is not a string and result contains the key
@language
, an
invalid language-tagged value
error has been detected (only strings
can be language-tagged) and processing is aborted.@type
member
and its value is not an IRI, an
invalid typed value
error has been detected and processing is aborted.@type
and its associated value is not an array, set it to
an array containing only the associated value.@set
or @list
:
@index
. Otherwise, an
invalid set or list object
error has been detected and processing is aborted.@set
, then
set result to the key's associated value.@language
, set result to null
.null
or @graph
,
drop free-floating values as follows:
@value
or @list
, set result to
null
.@id
, set result to null
.
When the frame expansion flag is set, a dictionary
containing only the @id
key is retained.If, after the above algorithm is run, the result is a
dictionary that contains only an @graph
key, set the
result to the value of @graph
's value. Otherwise, if the result
is null
, set it to an empty array. Finally, if
the result is not an array, then set the result to an
array containing only the result.
Some values in JSON-LD can be expressed in a compact form. These values are required to be expanded at times when processing JSON-LD documents. A value is said to be in expanded form after the application of this algorithm.
If the active property has a type mapping in the active context set to @id or @vocab, and the value is a string, a dictionary with a single member @id, whose value is the result of using the IRI Expansion algorithm on value, is returned. Otherwise, the result will be a dictionary containing an @value member whose value is the passed value. Additionally, an @type member will be included if there is a type mapping associated with the active property, or an @language member if value is a string and there is a language mapping associated with the active property.
Note that values interpreted as IRIs fall into two categories: those that are document relative and those that are vocabulary relative. Properties and values of @type, along with terms marked as "@type": "@vocab", are vocabulary relative, meaning that they need to be either a defined term, a compact IRI where the prefix is a term, or a string which is turned into an absolute IRI using the vocabulary mapping.
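For example (an illustrative sketch; the age IRI is invented), the document

{
  "@context": {
    "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"},
    "age": {
      "@id": "http://example.org/vocab#age",
      "@type": "http://www.w3.org/2001/XMLSchema#integer"
    }
  },
  "homepage": "http://jane.example.org/",
  "age": "42"
}

expands so that the homepage value becomes a node reference and the age value becomes a typed value object:

[
  {
    "http://xmlns.com/foaf/0.1/homepage": [
      {"@id": "http://jane.example.org/"}
    ],
    "http://example.org/vocab#age": [
      {"@value": "42", "@type": "http://www.w3.org/2001/XMLSchema#integer"}
    ]
  }
]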
The algorithm takes three required inputs: an active context, an active property, and a value to expand.
@id
,
and the value is a string,
return a new
dictionary containing a single key-value pair where the
key is @id
and the value is the result of using the
IRI Expansion algorithm, passing
active context, value, and true
for
document relative.@vocab
,
and the value is a string,
return a new
dictionary containing a single key-value pair where the
key is @id
and the value is the result of using the
IRI Expansion algorithm, passing
active context, value, true
for
vocab, and true
for
document relative.@value
member whose value is set to
value.@id
or @vocab
,
add an @type
member to
result and set its value to the value associated with the
type mapping.@language
to result and set its
value to the language code associated with the
language mapping; unless the
language mapping is set to null
in
which case no member is added.@language
to result and set its value to the
default language.
This algorithm compacts a JSON-LD document, such that the given context is applied. This must result in shortening any applicable IRIs to terms or compact IRIs, any applicable keywords to keyword aliases, and any applicable JSON-LD values expressed in expanded form to simple values such as strings or numbers.
Starting with its root element, we can process the JSON-LD document recursively, until we have a fully compacted result. When compacting an element, we can treat each one differently according to its type, in order to break down the problem:
@index or @language maps.
The final output is a dictionary with an @context key, if a non-empty context was given, where the dictionary is either result or a wrapper for it in which result appears as the value of an (aliased) @graph key because result contained two or more items in an array.
The algorithm takes five required input variables: an active context, an inverse context, an active property, an element to be compacted, and a flag compactArrays. To begin, the active context is set to the result of performing Context Processing on the passed context, the inverse context is set to the result of performing the Inverse Context Creation algorithm on active context, the active property is set to null, element is set to the result of performing the Expansion algorithm on the JSON-LD input, and, if not passed, compactArrays is set to true.
null
, then append
it to result.1
),
active property is not @graph
or @set
,
or the container mapping for active property in
active context does not include @list
or @set
,
and compactArrays
is true
, set result to its only item.@value
or @id
member and the result of using the
Value Compaction algorithm,
passing active context, inverse context,
active property,and element as value is
a scalar, return that result.true
if
active property equals @reverse
,
otherwise to false
.@type
member,
create a new array compacted types initialized
by transforming each expanded type of that member
into it's compacted form using the IRI Compaction algorithm,
passing active context, inverse context,
expanded type for var, and
true
for vocab. Then, for each term
in compacted types ordered lexicographically:
@id
or
@type
:
true
for vocab if
expanded property is @type
,
false
otherwise.@type
array:
true
for vocab.1
), then
set compacted value to its only item.true
for vocab.@reverse
:
@reverse
for
active property, and expanded value
for element.@set
or
compactArrays
is false
, and value is not an
array, set value to a new
array containing only value.@reverse
for var,
and true
for vocab.@preserve
then:
@preserve
in result unless expanded value is an empty array.@index
and
active property has a container mapping
in active context that includes @index
,
then the compacted result will be inside of an @index
container, drop the @index
property by continuing
to the next expanded property.@index
,
@value
, or @language
:
true
for vocab.true
for vocab, and
inside reverse.@nest
, or a term in the
active context that expands to @nest
,
otherwise an invalid @nest
value error has been detected, and processing is aborted.
If result does not have the key that equals nest
term, initialize it to an empty JSON object (nest
object). If nest object does not have the key
that equals item active property, set this key's
value in nest object to an empty
array.Otherwise, if the key's value is not an
array, then set it to one containing only the
value.true
for vocab, and
inside reverse.@nest
, or a term in the
active context that expands to @nest
,
otherwise an invalid @nest
value error has been detected, and processing is aborted.
Set nest result to the value of nest term in result,
initializing it to a new dictionary, if necessary; otherwise
set nest result to result.null
. If there
is a container mapping for
item active property in active context,
set container to the first
such value other than @set
.true
or false
depending on if the container mapping for
item active property in active context
includes @set
or if item active property
is @graph
or @list
.@list
and is not a graph object containing @list
,
otherwise pass the key's associated value for element.@list
:
@list
for var, and compacted item
for value and the value is the original compacted item.@index
, then add a key-value pair
to compacted item where the key is the
result of the IRI Compaction algorithm,
passing active context, inverse context,
@index
as var, and the value associated with the
@index
key in expanded item as value.@graph
and @id
:
@id
in expanded item
or @none
if no such value exists as var, with vocab set to true
if there is no @id
member in expanded item.true
,
set compacted item to an array containing that value.@graph
and @index
and expanded item is a simple graph object:
@index
in
expanded item or @none
, if no such
value exists.true
,
set compacted item to an array containing that value.@graph
and expanded item is a simple graph
object the value cannot be represented as a map
object. If compacted item is not an array
and as array is true
, set
compacted item to an array containing
that value. If the value associated with the key that
equals item active property in
nest result is not an array,
set it to a new array containing only the value.
Then append compacted item to the value if
compacted item is not an array,
otherwise, concatenate it.
@graph
or otherwise does not match one of the previous cases, redo compacted item.
@graph
as
var, and true
for
vocab using the original
compacted item as a value.@id
,
add the key resulting from calling the IRI Compaction algorithm
passing active context, @id
as
var, and true
for
vocab using the value resulting from calling the IRI Compaction algorithm
passing active context, the value of @id
in expanded item as
var.@index
,
add the key resulting from calling the IRI Compaction algorithm
passing active context, @index
as
var, and true
for
vocab using the value of @index
in expanded item.true
,
set compacted item to an array
containing that value.@language
,
@index
, @id
,
or @type
and container does not include @graph
:
@language
, @index
, @id
, or @type
based on the contents of container, as var, and true
for vocab.@language
and
expanded item contains the key
@value
, then set compacted item
to the value associated with its @value
key.
Set map key to the value of @language
in expanded item, if any.@index
set map key to the value of @index
in expanded item, if any,
and remove container key from compacted item.@id
, set
map key to the value of container key in
compacted item and remove container key from compacted item.@type
,
set map key to the first value of container key in compacted item, if any.
If there are remaining values in compacted item
for compacted container, set the value of
compacted container in compacted value
to those remaining values. Otherwise, remove that
key-value pair from compacted item.true
,
set compacted item to an array containing that value.null
, set it to the result of calling the
IRI Compaction algorithm
passing active context, @none
as
var, and true
for
vocab.false
, as array is true
and
compacted item is not an array,
set it to a new array
containing only compacted item.
If, after the algorithm outlined above is run, result is an empty array, replace it with a new dictionary. Otherwise, if result is an array, replace it with a new dictionary with a single member whose key is the result of using the IRI Compaction algorithm, passing active context, inverse context, and @graph as var, and whose value is the array result. Finally, if a non-empty context has been passed, add an @context member to result and set its value to the passed context.
When there is more than one term that could be chosen to compact an IRI, it has to be ensured that the term selection is both deterministic and represents the most context-appropriate choice whilst taking into consideration algorithmic complexity.
In order to make term selections, the concept of an inverse context is introduced. An inverse context is essentially a reverse lookup table that maps container mappings, type mappings, and language mappings to a simple term for a given active context. An inverse context only needs to be generated for an active context if it is being used for compaction.
To make use of an inverse context, a list of preferred container mappings and the type mapping or language mapping are gathered for a particular value associated with an IRI. These parameters are then fed to the Term Selection algorithm, which will find the term that most appropriately matches the value's mappings.
To create an inverse context for a given active context, each term in the active context is visited, ordered by length, shortest first (ties are broken by choosing the lexicographically least term). For each term, an entry is added to the inverse context for each possible combination of container mapping and type mapping or language mapping that would legally match the term. Illegal matches include differences between a value's type mapping or language mapping and that of the term. If a term has no container mapping, type mapping, or language mapping (or some combination of these), then it will have an entry in the inverse context using the special key @none. This allows the Term Selection algorithm to fall back to choosing more generic terms when a more specifically-matching term is not available for a particular IRI and value combination.
The algorithm takes one required input: the active context that the inverse context is being created for.
@none
. If the
active context has a default language,
set default language to it.null
,
term cannot be selected during compaction,
so continue to the next term.@none
.
If the container mapping is not empty, set container
to the concatenation of all values of the container mapping
in lexicographically order
.@language
and its value is a new empty
dictionary, the second member is @type
and its value is a new empty dictionary,
and the third member is @any
and its value is a new dictionary with the member
@none
set to the term being processed.@type
member in type/language map using the variable
type map.@reverse
member, create one and set its value to the term
being processed.@type
member in type/language map using the variable
type map.null
):
@language
member in type/language map using the variable
language map.null
,
set language to @null
; otherwise set it
to the language code in language mapping.@language
member in type/language map using the variable
language map.@none
member, create one and set its value to the term
being processed.@type
member in type/language map using the variable
type map.@none
member, create one and set its value to the term
being processed.
This algorithm compacts an IRI to a term or compact IRI, or a keyword to a keyword alias. A value that is associated with the IRI may be passed in order to assist in selecting the most context-appropriate term.
If the passed IRI is null, we simply return null. Otherwise, we first try to find a term that the IRI or keyword can be compacted to, if it is relative to the active context's vocabulary mapping. In order to select the most appropriate term, we may have to collect information about the passed value. This information includes which container mappings would be preferred for expressing the value, and what its type mapping or language mapping is. For JSON-LD lists, the type mapping or language mapping will be chosen based on the most specific values that work for all items in the list. Once this information is gathered, it is passed to the Term Selection algorithm, which will return the most appropriate term to use.
If no term was found that could be used to compact the IRI, an attempt is made to compact the IRI using the active context's vocabulary mapping, if there is one. If the IRI could not be compacted, an attempt is made to find a compact IRI. A term will be used to create a compact IRI only if the term definition contains the prefix flag with the value true. If there is no appropriate compact IRI, and the compactToRelative option is true, the IRI is transformed to a relative IRI using the document's base IRI. Finally, if the IRI or keyword still could not be compacted, it is returned as is.
This algorithm takes three required inputs and three optional inputs. The required inputs are an active context, an inverse context, and the var to be compacted. The optional inputs are a value associated with the var, a vocab flag which specifies whether the passed var should be compacted using the active context's vocabulary mapping, and a reverse flag which specifies whether a reverse property is being compacted. If not passed, value is set to null and vocab and reverse are both set to false.
null
, return null
.true
and var is a
key in inverse context:
@none
.@preserve
, use the first
element from the value of @preserve
as value.@language
,
and type/language value to @null
. These two
variables will keep track of the preferred
type mapping or language mapping for
a term, based on what is compatible with value.@index
,
and value is not a graph object
then append the values @index
and @index@set
to containers.true
, set type/language
to @type
, type/language value to
@reverse
, and append @set
to containers.@index
is a not key in value, then
append @list
to containers.@list
in value.null
. If
list is empty, set common language to
default language.@none
and
item type to @none
.@value
:
@language
,
then set item language to its associated
value.@type
, set item type to its
associated value.@null
.@id
.null
, set it
to item language.@value
, then set common language
to @none
because list items have conflicting
languages.null
, set it
to item type.@none
because list items have conflicting
types.@none
and
common type is @none
, then
stop processing items in the list because it has been
detected that there is no common language or type amongst
the items.null
, set it to
@none
.null
, set it to
@none
.@none
then set
type/language to @type
and
type/language value to common type.@index
,
append the values @graph@index
and @graph@index@set
to containers.@id
,
append the values @graph@id
and @graph@id@set
to containers.@graph
@graph@set
,
and @set
to containers.@index
,
append the values @graph@index
and @graph@index@set
to containers.@id
,
append the values @graph@id
and @graph@id@set
to containers.@index
and @index@set
to containers.@language
and does not contain the key @index
,
then set type/language value to its associated
value and, append @language
and @language@set
to
containers.@type
, then set type/language value to
its associated value and set type/language to
@type
.@type
and set type/language value to @id
,
and append @id
, @id@set
,
@type
, and @set@type
,
to containers.@set
to containers.@none
to containers. This represents
the non-existence of a container mapping, and it will
be the last container mapping value to be checked as it
is the most generic.json-ld-1.1
and value does not contain the key @index
, append
@index
and @index@set
to containers.
json-ld-1.1
and value contains only the key @value
, append
@language
and @language@set
to containers.
null
, set it to
@null
. This is the key under which null
values
are stored in the inverse context entry.@reverse
, append
@reverse
to preferred values.@id
or @reverse
and value has an @id
member:
@id
key in value for
var, and true
for vocab has a
term definition in the active context
with an IRI mapping that equals the value associated
with the @id
key in value,
then append @vocab
, @id
, and
@none
, in that order, to preferred values.@id
, @vocab
, and
@none
, in that order, to preferred values.@none
, in
that order, to preferred values.
If value is an empty list object,
set type/language to @any
.null
, return term.true
and
active context has a vocabulary mapping:
null
. This variable will be used to
tore the created compact IRI, if any.null
,
its IRI mapping equals var, its
IRI mapping is not a substring at the beginning of
var,
or the term definition does not contain
the prefix flag having a value of true
,
the term cannot be used as a prefix.
Continue with the next term.:
), and the substring of var
that follows after the value of the
term definition's
IRI mapping.null
, candidate is
shorter or the same length but lexicographically less than
compact IRI and candidate does not have a
term definition in active context, or if the
term definition has an IRI mapping
that equals var and value is null
,
set compact IRI to candidate.null
, return compact IRI.false
,
transform var to a relative IRI using
the base IRI from active context, if it exists.
This algorithm, invoked via the IRI Compaction algorithm, makes use of an active context's inverse context to find the term that is best used to compact an IRI. Other information about a value associated with the IRI is given, including which container mapping and which type mapping or language mapping would be best used to express the value.
The inverse context's entry for the IRI will be first searched according to the preferred container mapping, in the order that they are given. Amongst terms with a matching container mapping, preference will be given to those with a matching type mapping or language mapping, over those without a type mapping or language mapping. If there is no term with a matching container mapping then the term without a container mapping that matches the given type mapping or language mapping is selected. If there is still no selected term, then a term with no type mapping or language mapping will be selected if available. No term will be selected that has a conflicting type mapping or language mapping. Ties between terms that have the same mappings are resolved by first choosing the shortest terms, and then by choosing the lexicographically least term. Note that these ties are resolved automatically because they were previously resolved when the Inverse Context Creation algorithm was used to create the inverse context.
This algorithm has five required inputs. They are: an inverse context, a keyword or IRI var, an array containers that represents an ordered list of preferred container mapping, a string type/language that indicates whether to look for a term with a matching type mapping or language mapping, and an array representing an ordered list of preferred values for the type mapping or language mapping to look for.
null.
The following examples are intended to illustrate how the term selection algorithm behaves for different term definitions and values. They are not comprehensive, but are intended to illustrate different parts of the algorithm.
If the term definition has "@container": "@language", it will only match a value object having no @type.
{ "@context": {"t": {"@id": "http://example/t", "@container": "@language"}} }
The inverse context will contain the following:
{ "@language": { "@language": {"@none": "t"}, "@type": {"@none": "t"}, "@any": {"@none": "t"} } }
If the term definition has a datatype, it will only match a value object having a matching datatype.
{ "@context": {"t": {"@id": "http://example/t", "@type": "http:/example/type"}} }
The inverse context will contain the following:
{ "@none": { "@language": {}, "@type": {"http:/example/type": "t"}, "@any": {"@none": "t"} } }
Expansion transforms all values into expanded form in JSON-LD. This algorithm performs the opposite operation, transforming a value into compacted form. This algorithm compacts a value according to the term definition in the given active context that is associated with the value's associated active property.
The value to compact has either an @id or an @value member.

For the former case, if the type mapping of active property is set to @id or @vocab and value consists of only an @id member and, if the container mapping of active property includes @index, an @index member, value can be compacted to a string by returning the result of using the IRI Compaction algorithm to compact the value associated with the @id member. Otherwise, value cannot be compacted and is returned as is.

For the latter case, it might be possible to compact value just into the value associated with the @value member. This can be done if the active property has a matching type mapping or language mapping and there is either no @index member or the container mapping of active property includes @index. It can also be done if @value is the only member in value (apart from an @index member in case the container mapping of active property includes @index) and either its associated value is not a string, there is no default language, or there is an explicit null language mapping for the active property.
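The following simplified, non-normative JavaScript sketch shows the two compaction shapes just described. The names termDefinition and compactIri are assumptions of this sketch (a term definition object and a stand-in for the IRI Compaction algorithm); the normative steps below additionally handle @index and the default language cases.

    // Illustrative only: simplified value compaction.
    function compactValueSketch(value, termDefinition, compactIri) {
      const keys = Object.keys(value);
      if ('@id' in value && keys.length === 1) {
        if (termDefinition['@type'] === '@id') return compactIri(value['@id'], /* vocab */ false);
        if (termDefinition['@type'] === '@vocab') return compactIri(value['@id'], /* vocab */ true);
      }
      if ('@value' in value) {
        if ('@type' in value && value['@type'] === termDefinition['@type']) return value['@value'];
        if ('@language' in value && value['@language'] === termDefinition['@language']) return value['@value'];
      }
      return value; // cannot be compacted in this sketch; return as is
    }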
This algorithm has four required inputs: an active context, an inverse context, an active property, and a value to be compacted.
1. Initialize number members to the number of members value contains.
2. If value has an @index member and the container mapping associated to active property includes @index, decrease number members by 1.
3. If number members is greater than 2, return value as it cannot be compacted.
4. If value has an @id member:
   4.1. If number members is 1 and the type mapping of active property is set to @id, return the result of using the IRI compaction algorithm, passing active context, inverse context, and the value of the @id member for var.
   4.2. Otherwise, if number members is 1 and the type mapping of active property is set to @vocab, return the result of using the IRI compaction algorithm, passing active context, inverse context, the value of the @id member for var, and true for vocab.
   4.3. Otherwise, return value as is.
5. Otherwise, if value has an @type member whose value matches the type mapping of active property, return the value associated with the @value member of value.
6. Otherwise, if value has an @language member whose value matches the language mapping of active property, return the value associated with the @value member of value.
7. Otherwise, if number members is 1 and either the value of the @value member is not a string, or the active context has no default language, or the language mapping of active property is set to null, return the value associated with the @value member.
8. Otherwise, return value as is.

This algorithm flattens an expanded JSON-LD document by collecting all properties of a node in a single dictionary and labeling all blank nodes with blank node identifiers. The resulting uniform shape of the document may drastically simplify the code required to process JSON-LD data in certain applications.
First, a node map is generated using the Node Map Generation algorithm which collects all properties of a node in a single dictionary. In the next step, the node map is converted to a JSON-LD document in flattened document form. Finally, if a context has been passed, the flattened document is compacted using the Compaction algorithm before being returned.
The algorithm takes two input variables, an element to flatten and
an optional context used to compact the flattened document. If not
passed, context is set to null
.
This algorithm generates new blank node identifiers
and relabels existing blank node identifiers.
The Generate Blank Node Identifier algorithm
keeps an identifier map and a counter to ensure consistent
relabeling and avoid collisions. Thus, before this algorithm is run,
the identifier map is reset and the counter is initialized
to 0
.
1. Initialize node map to a dictionary consisting of a single member whose key is @default and whose value is an empty dictionary.
2. Perform the Node Map Generation algorithm, passing element and node map.
3. Initialize default graph to the value of the @default member of node map, which is a dictionary representing the default graph.
4. For each key-value pair graph name-graph in node map where graph name is not @default, perform the following steps:
   4.1. If default graph does not have a graph name member, create one and initialize its value to a dictionary consisting of an @id member whose value is set to graph name.
   4.2. Reference the value associated with the graph name member in default graph using the variable entry.
   4.3. Add an @graph member to entry and set it to an empty array.
   4.4. For each id-node pair in graph ordered by id, add node to the @graph member of entry, unless the only member of node is @id.
5. Initialize an empty array flattened.
6. For each id-node pair in default graph ordered by id, add node to flattened, unless the only member of node is @id.
7. If context is null, return flattened.
8. Otherwise, return the result of compacting flattened according to the Compaction algorithm passing context, ensuring that the compaction result has only the @graph keyword (or its alias) at the top-level other than @context, even if the context is empty or if there is only one element to put in the @graph array. This ensures that the returned document has a deterministic structure.

This algorithm creates a dictionary node map holding an indexed
representation of the graphs and nodes
represented in the passed expanded document. All nodes that are not
uniquely identified by an IRI get assigned a (new) blank node identifier.
The resulting node map will have a member for every graph in the document whose
value is another object with a member for every node represented in the document.
The default graph is stored under the @default
member, all other graphs are
stored under their graph name.
The algorithm recursively runs over an expanded JSON-LD document to
collect all properties of a node
in a single dictionary. The algorithm constructs a
dictionary node map whose keys represent the
graph names used in the document
(the default graph is stored under the key @default
)
and whose associated values are dictionaries
which index the nodes in the
graph. If a
property's value is a node object,
it is replaced by a node object consisting of only an
@id
member. If a node object has no @id
member or it is identified by a blank node identifier,
a new blank node identifier is generated. This relabeling
of blank node identifiers is
also done for properties and values of
@type
.
The algorithm takes as input an expanded JSON-LD document element and a reference to
a dictionary node map. Furthermore it has the optional parameters
active graph (which defaults to @default
), an active subject,
active property, and a reference to a dictionary list. If
not passed, active subject, active property, and list are
set to null
.
null
, set node to null
otherwise reference the active subject member of graph using the
variable node.@type
member, perform for each
item the following steps:
@value
member, perform the following steps:
null
:
@list
member of list.@list
member, perform
the following steps:
@list
whose value is initialized to an empty array.@list
member for element, active graph,
active subject, active property, and
result for list.@id
member, set id
to its value and remove the member from element. If id
is a blank node identifier, replace it with a newly
generated blank node identifier
passing id for identifier.null
for identifier.@id
whose
value is id.null
, perform the following steps:
@id
whose value is id.null
:
@list
member of list.@type
key, append
each item of its associated array to the
array associated with the @type
key of
node unless it is already in that array. Finally
remove the @type
member from element.@index
member, set the @index
member of node to its value. If node has already an
@index
member with a different value, a
conflicting indexes
error has been detected and processing is aborted. Otherwise, continue by
removing the @index
member from element.@reverse
member:
@id
whose
value is id.@reverse
member of
element.@reverse
member from element.@graph
member, recursively invoke this
algorithm passing the value of the @graph
member for element,
node map, and id for active graph before removing
the @graph
member from element.This algorithm is used to generate new blank node identifiers or to relabel an existing blank node identifier to avoid collision by the introduction of new ones.
The simplest case is if there exists already a blank node identifier
in the identifier map for the passed identifier, in which
case it is simply returned. Otherwise, a new blank node identifier
is generated by concatenating the string _:b
and the
counter. If the passed identifier is not null
,
an entry is created in the identifier map associating the
identifier with the blank node identifier. Finally,
the counter is increased by one and the new
blank node identifier is returned.
The algorithm takes a single input variable identifier which may
be null
. Between its executions, the algorithm needs to
keep an identifier map to relabel existing
blank node identifiers
consistently and a counter to generate new
blank node identifiers. The
counter is initialized to 0
by default.
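A minimal JavaScript sketch of this bookkeeping, keeping an identifier map and a counter between calls, is shown below; the class and method names are illustrative only.

    // Illustrative only: a blank node identifier issuer as described above.
    class BlankNodeIssuer {
      constructor() {
        this.identifierMap = {};  // maps existing identifiers to new ones
        this.counter = 0;         // used to build new identifiers
      }
      issue(identifier) {         // identifier may be null
        if (identifier !== null && identifier in this.identifierMap) {
          return this.identifierMap[identifier];
        }
        const blankNodeId = '_:b' + this.counter;
        this.counter += 1;
        if (identifier !== null) {
          this.identifierMap[identifier] = blankNodeId;
        }
        return blankNodeId;
      }
    }

    // issuer.issue('_:old') -> '_:b0'; issuer.issue('_:old') -> '_:b0' again;
    // issuer.issue(null)    -> '_:b1'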
1. If identifier is not null and has an entry in the identifier map, return the mapped identifier.
2. Otherwise, generate a new blank node identifier by concatenating the string _:b and counter.
3. Increment counter by 1.
4. If identifier is not null, create a new entry for identifier in identifier map and set its value to the new blank node identifier.
5. Return the new blank node identifier.

This algorithm creates a new map of subjects to nodes using all graphs contained in the graph map created using the Node Map Generation algorithm, creating merged node objects containing the information defined for a given subject in each graph contained in the node map. For each subject encountered, an entry is created in the result consisting of a single member @id whose value is id, if it does not exist, into which that subject's properties from each graph are merged.

This section describes algorithms to deserialize a JSON-LD document to an RDF dataset and vice versa. The algorithms are designed for in-memory implementations with random access to dictionary elements.
Throughout this section, the following vocabulary prefixes are used in compact IRIs:
| Prefix | IRI |
|---|---|
| rdf | http://www.w3.org/1999/02/22-rdf-syntax-ns# |
| rdfs | http://www.w3.org/2000/01/rdf-schema# |
| xsd | http://www.w3.org/2001/XMLSchema# |
This algorithm deserializes a JSON-LD document to an RDF dataset. Please note that RDF does not allow a blank node to be used as a property, while JSON-LD does. Therefore, by default RDF triples that would have contained blank nodes as properties are discarded when interpreting JSON-LD as RDF.
The JSON-LD document is expanded and converted to a node map using the
Node Map Generation algorithm.
This allows each graph represented within the document to be
extracted and flattened, making it easier to process each
node object. Each graph from the node map
is processed to extract RDF triples,
to which any (non-default) graph name is applied to create an
RDF dataset. Each node object in the
node map has an @id
member which corresponds to the
RDF subject, the other members
represent RDF predicates. Each
member value is either an IRI or
blank node identifier or can be transformed to an
RDF literal
to generate an RDF triple. Lists
are transformed into an
RDF collection
using the List to RDF Conversion algorithm.
The algorithm takes a JSON-LD document element and returns an
RDF dataset. Unless the produceGeneralizedRdf option
is set to true
, RDF triples containing a blank node predicate are excluded from output.
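As a non-normative illustration of the overall shape of this conversion, the JavaScript sketch below walks one graph object of a node map and emits simple triples. It deliberately omits list conversion, blank node predicates, and literal handling; objectToRdf is a stand-in for the Object to RDF Conversion step and the triple representation is illustrative.

    // Illustrative only: emit triples from one graph object of a node map.
    const RDF_TYPE = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type';

    function nodeMapToTriples(graphObject, objectToRdf) {
      const triples = [];
      for (const [subject, node] of Object.entries(graphObject)) {
        for (const [property, values] of Object.entries(node)) {
          if (property === '@id') continue;
          if (property === '@type') {
            for (const type of values) {
              triples.push({ subject, predicate: RDF_TYPE, object: type });
            }
            continue;
          }
          for (const item of values) {
            const object = objectToRdf(item);
            if (object !== null) triples.push({ subject, predicate: property, object });
          }
        }
      }
      return triples;
    }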
This algorithm generates new blank node identifiers
and relabels existing blank node identifiers.
The Generate Blank Node Identifier algorithm
keeps an identifier map and a counter to ensure consistent
relabeling and avoid collisions. Thus, before this algorithm is run,
the identifier map is reset and the counter is initialized
to 0
.
@type
, then for each
type in values, append a triple
composed of subject, rdf:type
,
and type to triples.true
,
continue with the next property-values pair.@list
key from
item and list triples. Append first a
triple composed of subject,
property, and list head to triples and
finally append all triples from
list triples to triples.null
, indicating a relative IRI that has
to be ignored.@default
, add
triples to the default graph in dataset.This algorithm takes a node object or value object
and transforms it into an
RDF resource
to be used as the object of an RDF triple. If a
node object containing a relative IRI is passed to
the algorithm, null
is returned which then causes the resulting
RDF triple to be ignored.
Value objects are transformed to
RDF literals as described in
whereas node objects are transformed
to IRIs,
blank node identifiers,
or null
.
The algorithm takes as its sole argument item which MUST be either a value object or node object.
@id
member is a relative IRI, return
null
.@id
member.@value
member in item.
@type
member of item or null
if
item does not have such a member.true
or
false
, set value to the string
true
or false
which is the
canonical lexical form as described in
If datatype is null
, set it to
xsd:boolean
.xsd:double
, convert value to a
string in canonical lexical form of
an xsd:double
as defined in [[!XMLSCHEMA11-2]]
and described in
.
If datatype is null
, set it to
xsd:double
.xsd:integer
, convert value to a
string in canonical lexical form of
an xsd:integer
as defined in [[!XMLSCHEMA11-2]]
and described in
.
If datatype is null
, set it to
xsd:integer
.null
, set it to
xsd:string
or rdf:langString
, depending on if
item has an @language
member.@language
member, add the value associated with the
@language
key as the language tag of literal.List Conversion is the process of taking a list object and transforming it into an RDF collection as defined in RDF Semantics [[!RDF11-MT]].
For each element of the list a new blank node identifier
is allocated which is used to generate rdf:first
and
rdf:rest
triples. The
algorithm returns the list head, which is either the first allocated
blank node identifier or rdf:nil
if the
list is empty. If a list element represents a relative IRI,
the corresponding rdf:first
triple is omitted.
The algorithm takes two inputs: an array list and an empty array list triples used for returning the generated triples.
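A rough, non-normative JavaScript sketch of this conversion follows. The helpers issueBlankNode and objectToRdf stand in for the Generate Blank Node Identifier and Object to RDF Conversion algorithms, and the triple representation is illustrative.

    // Illustrative only: chain list entries with rdf:first/rdf:rest.
    const RDF = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#';
    const RDF_FIRST = RDF + 'first', RDF_REST = RDF + 'rest', RDF_NIL = RDF + 'nil';

    function listToRdf(list, listTriples, issueBlankNode, objectToRdf) {
      if (list.length === 0) return RDF_NIL;
      const bnodes = list.map(() => issueBlankNode(null));
      list.forEach((entry, i) => {
        const subject = bnodes[i];
        const object = objectToRdf(entry);
        if (object !== null) {
          listTriples.push({ subject, predicate: RDF_FIRST, object });
        }
        const rest = i + 1 < bnodes.length ? bnodes[i + 1] : RDF_NIL;
        listTriples.push({ subject, predicate: RDF_REST, object: rest });
      });
      return bnodes[0]; // the list head
    }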
1. If list is empty, return rdf:nil.
2. Otherwise, initialize an array bnodes composed of a newly generated blank node identifier for each entry in list.
3. For each pair of entry entry from list and blank node identifier subject from bnodes:
   3.1. Initialize object to the result of using the Object to RDF Conversion algorithm, passing entry.
   3.2. If object is not null, append a triple composed of subject, rdf:first, and object.
   3.3. Initialize rest as the next entry in bnodes, or if that does not exist, rdf:nil. Append a triple composed of subject, rdf:rest, and rest to list triples.
4. Return the first blank node identifier in bnodes, or rdf:nil if bnodes is empty.

This algorithm serializes an RDF dataset consisting of a default graph and zero or more named graphs into a JSON-LD document.
In the RDF abstract syntax, RDF literals have a lexical form, as defined in [[RDF11-CONCEPTS]]. The form of these literals is used when creating JSON-LD values based on these literals.
Iterate through each graph in the dataset, converting each
RDF collection into a list
and generating a JSON-LD document in expanded form for all
RDF literals, IRIs
and blank node identifiers.
If the use native types flag is set to true
,
RDF literals with a
datatype IRI
that equals xsd:integer
or xsd:double
are converted
to JSON numbers and RDF literals
with a datatype IRI
that equals xsd:boolean
are converted to true
or
false
based on their
lexical form
as described in
.
Unless the use rdf:type
flag is set to true, rdf:type
predicates will be serialized as @type
as long as the associated object is
either an IRI or blank node identifier.
The algorithm takes one required and two optional inputs: an RDF dataset dataset
and the two flags use native types and use rdf:type
that both default to false
.
@default
whose value references
default graph.@default
, otherwise to the
graph name associated with graph.@id
whose value is name.@id
whose value is
set to subject.@id
whose value is
set to object.rdf:type
, the
use rdf:type
flag is not true
, and object
is an IRI or blank node identifier,
append object to the value of the @type
member of node; unless such an item already exists.
If no such member exists, create one
and initialize it to an array whose only item is
object. Finally, continue to the next
RDF triple.@id
member of node
to
the object member of node usage map.usages
member, create one and initialize it to
an empty array.usages
member of the object
member of node map using the variable usages.node
, property
, and value
to the usages array. The node
member
is set to a reference to node, property
to predicate,
and value
to a reference to value.rdf:nil
member, continue
with the next name-graph object pair as the graph does
not contain any lists that need to be converted.rdf:nil
member
of graph object.usages
member of
nil, perform the following steps:
node
member of usage, property to
the value of the property
member of usage,
and head to the value of the value
member
of usage.rdf:rest
,
the value of the @id
member
of node is a blank node identifier,
the array value of the member of node usage map associated with the @id
member of node
has only one member,
the value associated to the usages
member of node has
exactly 1 entry,
node has a rdf:first
and rdf:rest
property,
both of which have as value an array consisting of a single element,
and node has no other members apart from an optional @type
member whose value is an array with a single item equal to
rdf:List
,
node represents a well-formed list node.
Perform the following steps to traverse the list backwards towards its head:
rdf:first
member of
node to the list array.@id
member of
node to the list nodes array.usages
member of node.node
member
of node usage, property to the value of the
property
member of node usage, and
head to the value of the value
member
of node usage.@id
member of node is an
IRI instead of a blank node identifier,
exit the while loop.rdf:first
, i.e., the
detected list is nested inside another list
@id
of node equals
rdf:nil
, i.e., the detected list is empty,
continue with the next usage item. The
rdf:nil
node cannot be converted to a
list object as it would result in a list of
lists, which isn't supported.@id
member of head.rdf:rest
member of head.@id
member from head.@list
member to head and initialize
its value to the list array.@graph
member to node and initialize
its value to an empty array.@graph
member of node after
removing its usages
member, unless the only
remaining member of n is @id
.usages
member, unless the only remaining member of
node is @id
.

This algorithm transforms an RDF literal to a JSON-LD value object and an RDF blank node or IRI to a JSON-LD node object.
RDF literals are transformed to
value objects whereas IRIs and
blank node identifiers are
transformed to node objects.
If the use native types flag is set to true
,
RDF literals with a
datatype IRI
that equals xsd:integer
or xsd:double
are converted
to JSON numbers and RDF literals
with a datatype IRI
that equals xsd:boolean
are converted to true
or
false
based on their
lexical form
as described in
.
This algorithm takes two required inputs: a value to be converted to a dictionary and a flag use native types.
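The non-normative JavaScript sketch below illustrates the literal branch of this conversion. The literal shape ({ value, datatype, language }) and the helper name are assumptions of this sketch, not part of the API.

    // Illustrative only: convert an RDF literal to a JSON-LD value object.
    const XSD = 'http://www.w3.org/2001/XMLSchema#';

    function literalToValueObject(literal, useNativeTypes) {
      const result = {};
      let converted = literal.value;   // the lexical form
      let type = null;
      if (useNativeTypes && literal.datatype === XSD + 'boolean') {
        if (literal.value === 'true') converted = true;
        else if (literal.value === 'false') converted = false;
        else type = literal.datatype;  // not a valid boolean, keep the datatype
      } else if (useNativeTypes &&
                 (literal.datatype === XSD + 'integer' || literal.datatype === XSD + 'double') &&
                 !Number.isNaN(Number(literal.value))) {
        converted = Number(literal.value);
      } else if (literal.language) {
        result['@language'] = literal.language; // language-tagged string
      } else if (literal.datatype !== XSD + 'string') {
        type = literal.datatype;
      }
      result['@value'] = converted;
      if (type !== null) result['@type'] = type;
      return result;
    }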
@id
whose value is set to
value.null
true
xsd:string
, set
converted value to the
lexical form
of value.xsd:boolean
, set
converted value to true
if the
lexical form
of value matches true
, or false
if it matches false
. If it matches neither,
set type to xsd:boolean
.xsd:integer
or
xsd:double
and its
lexical form
is a valid xsd:integer
or xsd:double
according [[!XMLSCHEMA11-2]], set converted value
to the result of converting the
lexical form
to a JSON number.@language
to result and set its value to the
language tag of value.xsd:string
which is ignored.@value
to result whose value
is set to converted value.null
, add a member @type
to result whose value is set to type.

When deserializing JSON-LD to RDF,
JSON-native numbers are automatically
type-coerced to xsd:integer
or xsd:double
depending on whether the number has a non-zero fractional part
or not (the result of a modulo‑1 operation), the boolean values
true
and false
are coerced to xsd:boolean
,
and strings are coerced to xsd:string
.
The numeric or boolean values themselves are converted to
canonical lexical form, i.e., a deterministic string
representation as defined in [[!XMLSCHEMA11-2]].
The canonical lexical form of an integer, i.e., a
number with no non-zero fractional part or a number
coerced to xsd:integer
, is a finite-length sequence of decimal
digits (0-9
) with an optional leading minus sign; leading
zeros are prohibited. In JavaScript, implementers can use the following
snippet of code to convert an integer to
canonical lexical form:
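The exact snippet is implementation-dependent; the following is a minimal sketch, assuming the value lies within the range where toFixed(0) produces plain decimal digits rather than exponential notation:

    // Convert a JSON-native integer to canonical lexical form.
    function integerToCanonical(value) {
      return value.toFixed(0);
    }

    integerToCanonical(42);    // "42"
    integerToCanonical(-7.0);  // "-7"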
The canonical lexical form of a double, i.e., a
number with a non-zero fractional part or a number
coerced to xsd:double
, consists of a mantissa followed by the
character E
, followed by an exponent. The mantissa is a
decimal number and the exponent is an integer. Leading zeros and a
preceding plus sign (+
) are prohibited in the exponent.
If the exponent is zero, it is indicated by E0
. For the
mantissa, the preceding optional plus sign is prohibited and the
decimal point is required. Leading and trailing zeros are prohibited
subject to the following: number representations must be normalized
such that there is a single digit which is non-zero to the left of
the decimal point and at least a single digit to the right of the
decimal point unless the value being represented is zero. The
canonical representation for zero is 0.0E0
.
xsd:double
's value space is defined by the IEEE
double-precision 64-bit floating point type [[!IEEE-754-2008]] whereas
the value space of JSON numbers is not
specified; when deserializing JSON-LD to RDF the mantissa is rounded to
15 digits after the decimal point. In JavaScript, implementers
can use the following snippet of code to convert a double to
canonical lexical form:
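A minimal sketch along these lines, assuming a mantissa of at most 15 digits after the decimal point as described above:

    // Convert a JSON-native double to canonical lexical form:
    // a mantissa with a single non-zero digit before the decimal point,
    // followed by "E" and the exponent.
    function doubleToCanonical(value) {
      return value.toExponential(15).replace(/(\d)0*e\+?/, '$1E');
    }

    doubleToCanonical(2.5);   // "2.5E0"
    doubleToCanonical(0.011); // "1.1E-2"
    doubleToCanonical(0);     // "0.0E0"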
The canonical lexical forms of the boolean
values true
and false
are the strings
true
and false
.
When JSON-native numbers are deserialized
to RDF, lossless data round-tripping cannot be guaranteed, as rounding
errors might occur. When
serializing RDF as JSON-LD,
similar rounding errors might occur. Furthermore, the datatype or the lexical
representation might be lost. An xsd:double
with a value
of 2.0
will, e.g., result in an xsd:integer
with a value of 2
in canonical lexical form
when converted from RDF to JSON-LD and back to RDF. It is important
to highlight that in practice it might be impossible to losslessly
convert an xsd:integer
to a number because
its value space is not limited. While the JSON specification [[!RFC7159]]
does not limit the value space of numbers
either, concrete implementations typically do have a limited value
space.
To ensure lossless round-tripping the
Serialize RDF as JSON-LD algorithm
specifies a use native types flag which controls whether
RDF literals
with a datatype IRI
equal to xsd:integer
, xsd:double
, or
xsd:boolean
are converted to their JSON-native
counterparts. If the use native types flag is set to
false
, all literals remain in their original string
representation.
Some JSON serializers, such as PHP's native implementation in some versions,
backslash-escape the forward slash character. For example, the value
http://example.com/
would be serialized as http:\/\/example.com\/
.
This is problematic as other JSON parsers might not understand those escaping characters.
There is no need to backslash-escape forward slashes in JSON-LD. To aid
interoperability between JSON-LD processors, forward slashes MUST NOT be
backslash-escaped.
This API provides a clean mechanism that enables developers to convert JSON-LD data into a variety of output formats that are often easier to work with.
The JSON-LD API uses Promises to represent the result of the various asynchronous operations. Promises are defined in [[ECMASCRIPT-6.0]]. General use within specifications can be found in [[promises-guide]].
The JsonLdProcessor interface is the high-level programming structure that developers use to access the JSON-LD transformation methods.
It is important to highlight that implementations do not modify the input parameters. If an error is detected, the Promise is rejected passing a JsonLdError with the corresponding error code and processing is stopped.
If the documentLoader option is specified, it is used to dereference remote documents and contexts. The documentUrl in the returned RemoteDocument is used as base IRI and the contextUrl is used instead of looking at the HTTP Link Header directly. For the sake of simplicity, none of the algorithms in this document mention this directly.
[Constructor]
interface JsonLdProcessor {
    static Promise<JsonLdDictionary> compact(JsonLdInput input, JsonLdContext context, optional JsonLdOptions? options);
    static Promise<sequence<JsonLdDictionary>> expand(JsonLdInput input, optional JsonLdOptions? options);
    static Promise<JsonLdDictionary> flatten(JsonLdInput input, optional JsonLdContext? context, optional JsonLdOptions? options);
};
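The following non-normative JavaScript sketch shows how the promise-based interface might be used, assuming a conforming implementation exposes JsonLdProcessor globally; the document and context values are examples only.

    // Illustrative use of the promise-based API.
    const doc = {
      "http://schema.org/name": "Manu Sporny",
      "http://schema.org/url": { "@id": "http://manu.sporny.org/" }
    };
    const context = {
      "name": "http://schema.org/name",
      "url": { "@id": "http://schema.org/url", "@type": "@id" }
    };

    JsonLdProcessor.compact(doc, context)
      .then(compacted => {
        // compacted: { "@context": {...}, "name": "Manu Sporny", "url": "http://manu.sporny.org/" }
        return JsonLdProcessor.expand(compacted);
      })
      .then(expanded => {
        // expanded: [{ "http://schema.org/name": [{ "@value": "Manu Sporny" }], ... }]
      })
      .catch(error => {
        // error is a JsonLdError with a code such as "loading document failed"
      });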
Compacts the given input using the context according to the steps in the Compaction algorithm:
@context
member, set
context to that member's value, otherwise to context.null
.null
as property,
expanded input as element, and if passed, the
compactArrays flag in options.input
;
it can be specified by using a dictionary, an
IRI, or an array consisting of
dictionaries and IRIs.Expands the given input according to the steps in the Expansion algorithm:
application/json
,
nor application/ld+json
, nor any other media type using a
+json
suffix as defined in [[RFC6839]], reject the promise passing an
loading document failed
error.null
. If set, the
base option from options overrides the base IRI.@context
member, pass that member's value instead.http://www.w3.org/ns/json-ld#context
link relation
and a content type of application/json
or any media type
with a +json
suffix as defined in [[RFC6839]] except
application/ld+json
, update the active context using the
Context Processing algorithm, passing the
context referenced in the HTTP Link Header as local context. The
HTTP Link Header is ignored for documents served as application/ld+json
. If
multiple HTTP Link Headers using the http://www.w3.org/ns/json-ld#context
link relation are found, the promise is rejected with a JsonLdError whose code is set to
multiple context link headers
and processing is terminated.true
..Flattens the given input and compacts it using the passed context according to the steps in the Flattening algorithm:
@context
member, set
context to that member's value, otherwise to context.null
.0
)
to be used by the
Generate Blank Node Identifier algorithm.null
is passed, the result will not be compacted
but kept in expanded form.The JsonLdDictionary is the definition of a dictionary used to contain arbitrary key/value pairs which are the result of parsing a JSON Object.
The JsonLdInput type is used to refer to an input value that may be a dictionary, an array of dictionaries, or a string representing an IRI which can be dereferenced to retrieve a valid JSON document.
The JsonLdContext type is used to refer to a value that may be a dictionary, a string representing an IRI, or an array of dictionaries and strings.
The JsonLdOptions type is used to pass various options to the JsonLdProcessor methods.
If set to true, the JSON-LD processor replaces arrays with just one element with that element during compaction. If set to false, all arrays will remain arrays even if they have just one element.
If set to true, the JSON-LD processor may emit blank nodes for triple predicates, otherwise they will be omitted.
If set to json-ld-1.0
or json-ld-1.1
, the
implementation must produce exactly the same results as the algorithms
defined in this specification.
If set to another value, the JSON-LD processor is allowed to extend
or modify the algorithms defined in this specification to enable
application-specific optimizations. The definition of such
optimizations is beyond the scope of this specification and thus
not defined. Consequently, different implementations may implement
different optimizations. Developers must not define modes beginning
with json-ld
as they are reserved for future versions
of this specification.Users of an API implementation can utilize a callback to control how remote documents and contexts are retrieved. This section details the parameters of that callback and the data structure used to return the retrieved context.
The LoadDocumentCallback defines a callback that custom document loaders have to implement to be used to retrieve remote documents and contexts.
All errors result in the Promise being rejected with a JsonLdError whose code is set to loading document failed or multiple context link headers as described in the next section.
The RemoteDocument type is used by a LoadDocumentCallback to return information about a remote document or context.
http://www.w3.org/ns/json-ld#context
link relation in the
response. If the response's content type is application/ld+json
,
the HTTP Link Header is ignored. If multiple HTTP Link Headers using
the http://www.w3.org/ns/json-ld#context
link relation are found,
the Promise of the LoadDocumentCallback is rejected with
a JsonLdError whose code is set to
multiple context link headers.This section describes the datatype definitions used within the JSON-LD API for error handling.
The JsonLdError type is used to report processing errors.
The JsonLdErrorCode represents the collection of valid JSON-LD error codes.
@id
member was encountered whose value was not a
string.@index
member was encountered whose value was
not a string.@nest
has been found.@prefix
has been found.@reverse
member has been detected,
i.e., the value was not a dictionary.@version
key was used in a context with
an out of range value.null
.@container
member was encountered whose value was
not one of the following strings:
@list
, @set
, or @index
.null
and thus invalid.@language
member in a term definition
was encountered whose value was neither a string nor
null
and thus invalid.true
, or false
with an
associated language tag was detected.@context
are allowed in reverse property maps.@type
member in a term definition
was encountered whose value could not be expanded to an
absolute IRI.@type
member has been detected,
i.e., the value was neither a string nor an array
of strings.@value
member of a
value object has been detected, i.e., it is neither
a scalar nor null
.null
.http://www.w3.org/ns/json-ld#context
link relation
have been detected.Consider requirements from Self-Review Questionnaire: Security and Privacy.
@context
property, which defines a context used for values of
a property identified with such a term. This context is used
in both the Expansion Algorithm and
Compaction Algorithm.@nest
property, which identifies a term expanding to
@nest
which is used for containing properties using the same
@nest
mapping. When expanding, the values of a property
expanding to @nest
are treated as if they were contained
within the enclosing node object directly.@container
values within an expanded term definition may now
include @id
and @type
, corresponding to id maps and type maps.@none
value, but
JSON-LD 1.0 only allowed string values. This has been updated
to allow (and ignore) @none
values.@container
in an expanded term definition
can also be an array containing any appropriate container
keyword along with @set
(other than @list
).
This allows a way to ensure that such property values will always
be expressed in array form.@prefix
member with the value true. The 1.0 algorithm has
been updated to only consider terms that map to a value that ends with a URI
gen-delim character.@container
to include @graph
,
along with @id
, @index
and @set
.
In the Expansion Algorithm, this is
used to create a named graph from either a node object, or
objects which are values of keys in an id map or index map.
the Compaction Algorithm allows
specific forms of graph objects to be compacted back to a set of node
objects, or maps of node objects.@none
keyword, or an alias, for
values of maps for which there is no natural index. The Expansion Algorithm removes this indexing
transparently.""
) has been added as a possible value for @vocab
in
a context. When this is set, vocabulary-relative IRIs, such as the
keys of node objects, are expanded or compacted relative
to the base IRI using string concatenation.The following is a list of issues open at the time of publication.
A large amount of thanks goes out to the JSON-LD Community Group participants who worked through many of the technical issues on the mailing list and the weekly telecons - of special mention are Niklas Lindström, François Daoust, Lin Clark, and Zdenko 'Denny' Vrandečić. The editors would like to thank Mark Birbeck, who provided a great deal of the initial push behind the JSON-LD work via his work on RDFj. The work of Dave Lehn and Mike Johnson are appreciated for reviewing, and performing several implementations of the specification. Ian Davis is thanked for his work on RDF/JSON. Thanks also to Nathan Rixham, Bradley P. Allen, Kingsley Idehen, Glenn McDonald, Alexandre Passant, Danny Ayers, Ted Thibodeau Jr., Olivier Grisel, Josh Mandel, Eric Prud'hommeaux, David Wood, Guus Schreiber, Pat Hayes, Sandro Hawke, and Richard Cyganiak for their input on the specification.