Synqly supports customers defining their own mapping. Once a mapping has been defined, it becomes available for use via the adaptive mappings feature. A new management API is available for managing the custom mappings available within an organization. Once created, customer defined mappings are referenced by name or UUID when defining an adaptive mapping in an integration or integration point.
Documentation for general use of the mappings management API is available. To define a mapping, create a mapping entry in the following format:
{
"name": "string",
"data": "string"
}
The 'name' supplied is used to reference the mapping within an integration or integration point. Optionally, you can also target the mapping using the generated UUID, which is returned when the mapping is created. The mapping itself is defined in the 'data' field; the JSON payload used to interact with the mappings management API is simply a wrapper around it. The 'data' field must contain a YAML document which defines the entirety of the custom mapping. The format of the YAML document is detailed below.
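As a rough sketch in Python, the create request body is just this wrapper: the YAML definition travels as an opaque string in the 'data' field. The mapping name and YAML content below are illustrative, not a real mapping.

```python
import json

# The YAML mapping definition is sent as a plain string in "data".
mapping_yaml = """\
engine: bloblang_direct
version: '1.0'
templates:
  default: |
    root = this
"""

# Wrapper payload expected by the mappings management API.
payload = {
    "name": "passthrough_example",  # later referenced by this name (or the returned UUID)
    "data": mapping_yaml,
}

body = json.dumps(payload)
```

The key point is that the API itself never parses Bloblang out of JSON; the entire YAML document is carried as a single string value.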
In addition to managing the list of available custom mappings, the mappings API exposes an endpoint for applying a declared mapping to test that it performs as expected.
POST /v1/mappings/apply
This endpoint takes an array of mappings (referenced by name or UUID) and applies them as a mapping chain to the input data. This mirrors how you would invoke a mapping from an integration or integration point. Within the mapping chain, both custom and built-in mappings are supported (though the default mapping is not available because the request is not in the context of any specific provider). The output of each mapping is provided as the input for the next mapping in the chain.
Once a mapping has been added to Synqly, it is a good idea to try it with the apply endpoint to make sure the resulting mapped data meets expectations. If not, continue working on the mapping, updating it until it works correctly.
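The chaining behavior can be sketched in plain Python: treat each mapping as a function from input to output, and thread the result of one into the next. The two mapping functions here are stand-ins for illustration, not real Synqly mappings.

```python
from functools import reduce

def add_source(doc):
    # Hypothetical first mapping in the chain: annotate the event.
    return {**doc, "source": "custom"}

def uppercase_message(doc):
    # Hypothetical second mapping: transform a field from the first mapping's output.
    return {**doc, "message": doc["message"].upper()}

def apply_chain(mappings, data):
    # The output of each mapping becomes the input of the next.
    return reduce(lambda doc, mapping: mapping(doc), mappings, data)

result = apply_chain([add_source, uppercase_message], {"message": "hello"})
# result == {"message": "HELLO", "source": "custom"}
```

Because ordering matters, reordering the array in the apply request can produce different results.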
Mapping definitions are written in YAML. A mapping definition will always contain two top level keys, 'engine' and 'version'. This represents how the mapping is processed (the engine), and the version of the processor. Both are strings.
engine: bloblang_direct
version: '1.0'
In the future there may be different engines available. Currently this list includes only 'bloblang_direct' at version '1.0'.
Bloblang is a powerful transformation language that allows you to map data in a variety of ways.
The bloblang direct mapping engine provides a means of defining multiple bloblang-based templates, and then adaptively applying them based on the input data. In addition, it provides tooling for creating pre and post processing transformations which will always get applied before or after the primary transformation. This considerably limits duplication when working with several different, yet similar schemas.
An example of a bloblang direct mapping definition is as follows:
engine: bloblang_direct
version: '1.0'
templates:
default: |
root = this
This simple mapping will act as a passthrough, sending all input (this) into the output (root).
Beyond the 'engine' and 'version' declarations, the only key required for a bloblang direct transformation is 'templates'. A 'default' template must always be defined and runs as a fallback whenever another template is not selected. The name of the default template can be changed by specifying a new default template in the 'default' top level key.
engine: bloblang_direct
version: '1.0'
default: base_template
templates:
base_template: |
root = this
This is functionally equivalent to the simple template defined above.
The 'templates' key can hold any number of templates, each containing the raw bloblang to apply when the template is invoked. Once defined, templates can be applied as bloblang maps.
engine: bloblang_direct
version: '1.0'
templates:
template1: |
root.path1 = "this came from template 1"
template2: |
root.path2 = "this came from template 2"
default: |
root = this
root.metadata = {}
root.metadata = root.metadata.apply("template1")
root.metadata = root.metadata.apply("template2")
Results
# In: {"hello": "world"}
# Out: {"hello": "world", "metadata": { "path1": "this came from template 1", "path2": "this came from template 2" }}
Note: bloblang does not support 'map' definitions within a 'map'. Because this templating engine uses 'map' to organize and apply the available templates, you cannot define a new 'map' within the bloblang of a template. Instead, define the 'map' as its own template and use apply as shown here.
In addition to using the apply function, you can also change the mapping applied by defining a 'select' list. This adaptively selects the template to apply based on the input data. Each select item has a template name and a match condition. The select list runs from top to bottom; the first match condition to return true determines the template to apply.
engine: bloblang_direct
version: '1.0'
select:
- name: target1
match: this.tags.contains("target1")
- name: target2
match: this.tags.contains("target2")
templates:
target1: |
root.tags = this.tags | deleted()
root.message = "Hello from target1"
target2: |
root.tags = this.tags | deleted()
root.message = "Hello from target2"
default: |
root.tags = this.tags | deleted()
root.message = "Hello, world!"
Results
# No selections match, default template is applied
# In: {"tags": ["another_tag"]}
# Out: {"message": "Hello, world!", "tags": ["another_tag"]}
# First selection matches, target1 template is applied
# In: {"tags": ["target1", "another_tag"]}
# Out: {"message": "Hello from target1", "tags": ["target1", "another_tag"]}
# Second selection matches, target2 template is applied
# In: {"tags": ["target2", "another_tag"]}
# Out: {"message": "Hello from target2", "tags": ["target2", "another_tag"]}
# First selection matches first, so it wins, target1 template is applied
# In: {"tags": ["target1", "target2", "another_tag"]}
# Out: {"message": "Hello from target1", "tags": ["target1", "target2", "another_tag"]}
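The first-match-wins behavior can be modeled as a simple scan over the match conditions, falling back to the default template when none return true. A minimal sketch, with the match conditions written as Python predicates:

```python
def select_template(select, data, default="default"):
    # Runs top to bottom; the first condition returning True wins.
    for entry in select:
        if entry["match"](data):
            return entry["name"]
    return default

# Mirrors the select list in the YAML example above.
select = [
    {"name": "target1", "match": lambda d: "target1" in d["tags"]},
    {"name": "target2", "match": lambda d: "target2" in d["tags"]},
]

assert select_template(select, {"tags": ["another_tag"]}) == "default"
assert select_template(select, {"tags": ["target2", "another_tag"]}) == "target2"
# target1 is listed first, so it wins when both tags are present.
assert select_template(select, {"tags": ["target1", "target2"]}) == "target1"
```

Ordering the select list from most to least specific is therefore the usual approach.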
Notice in the above example that root.tags = this.tags | deleted() had to be defined in every template. The bloblang direct engine provides a tool for abstracting out common mappings that should always be applied, either before or after the primary transformation.
engine: bloblang_direct
version: '1.0'
pre:
- pre
post:
- post
select:
- name: target1
match: this.tags.contains("target1")
- name: target2
match: this.tags.contains("target2")
templates:
pre: |
root.metadata.pre = "applied before the main mapping"
root.metadata.post = "this will be overwritten by the post mapping"
post: |
root.tags = this.tags | deleted()
root.metadata.post = "applied after the main mapping"
target1: |
root.message = "Hello from target1"
target2: |
root.message = "Hello from target2"
default: |
root.message = "Hello, world!"
Results
# No selections match, default template is applied
# In: {"tags": ["another_tag"]}
# Out: {"message": "Hello, world!", "tags": ["another_tag"], "metadata": { "pre": "applied before the main mapping", "post": "applied after the main mapping" }}
# First selection matches, target1 template is applied
# In: {"tags": ["target1", "another_tag"]}
# Out: {"message": "Hello from target1", "tags": ["target1", "another_tag"], "metadata": { "pre": "applied before the main mapping", "post": "applied after the main mapping" }}
Any number of 'pre' and 'post' templates can be applied by name.
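The ordering guarantees can be sketched the same way: pre templates run first, then the selected (or default) template, then post templates, all writing into the same output. The stage functions here are illustrative stand-ins for the templates above; note how the post stage overwrites the metadata.post value seeded by the pre stage, mirroring the example.

```python
def run(pre, main, post, data):
    out = {}
    # Pre templates always run before the primary transformation...
    for stage in pre:
        stage(data, out)
    main(data, out)
    # ...and post templates always run after it.
    for stage in post:
        stage(data, out)
    return out

def pre_stage(data, out):
    out.setdefault("metadata", {})["pre"] = "applied before the main mapping"
    out["metadata"]["post"] = "this will be overwritten by the post mapping"

def post_stage(data, out):
    out["tags"] = data["tags"]
    out.setdefault("metadata", {})["post"] = "applied after the main mapping"

def default_stage(data, out):
    out["message"] = "Hello, world!"

result = run([pre_stage], default_stage, [post_stage], {"tags": ["another_tag"]})
```

Listing multiple names under 'pre' or 'post' simply extends the corresponding loop.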
Note: When a template is applied automatically through pre, post, or select, it has access to the entire input object through this and the entire output of all previously applied mappings through root. To accomplish this, the this context within a mapping will additionally have a __root__ key defined, which houses the previously mapped root object. If using the entirety of this in a mapping template, it may be necessary to use this.without("__root__") to avoid having the additional __root__ key show up in the mapping output.
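The __root__ bookkeeping can be illustrated directly with dictionaries: the template's input is the original object plus the previously mapped output under __root__, and this.without("__root__") corresponds to filtering that key out before copying everything into the output.

```python
# What a template effectively sees when applied automatically: the original
# input plus the previously mapped output injected under "__root__".
this = {"hello": "world", "__root__": {"message": "already mapped"}}

# Equivalent of this.without("__root__"): drop the injected key before
# copying the whole input into the output.
root = {k: v for k, v in this.items() if k != "__root__"}
```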
Bloblang direct provides the ability to easily cast data to types while ignoring certain values, such as when the data is null. There are a few built-in types available, and you can also define your own to take full control over data type casting. Applying a type cast to a mapped value first checks for any ignored values (the built-in types ignore null, for example), and then applies the type mapping to your data. If a value is ignored, the key being mapped is skipped. The default type casts available are:
- string - Converts the value to a string.
- number - Converts the value to a number.
- datetime - Converts the value to a unix timestamp with millisecond precision.
- boolean - Converts the value to a boolean.
- array - Converts the value to an array. If it is not already an array, it will be converted to an array with a single element.
engine: bloblang_direct
version: '1.0'
templates:
default: |
root.string0 = this.val0.apply("string")
root.string = this.val1.apply("string")
root.number = this.val2.apply("number")
root.datetime = this.val3.apply("datetime")
root.boolean = this.val4.apply("boolean")
root.array_cast = this.val5.apply("array")
root.array = this.val6.apply("array")
# In: {"val1": 123, "val2": "1", "val3": "2021-01-01T00:00:00Z", "val4": "true", "val5": "123", "val6": [1, 2, 3]}
# Out: {"string": "123", "number": 1, "datetime": 1609459200000, "boolean": true, "array_cast": ["123"], "array": [1, 2, 3]}
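The cast-and-ignore behavior can be sketched as: check the value against the ignored list, skip the key entirely on a match, otherwise run the transform. The SKIP sentinel and the transform functions here are illustrative, not part of the engine.

```python
SKIP = object()  # sentinel meaning "do not emit this key"

def cast(value, transform, ignored=(None,)):
    # Ignored values cause the key being mapped to be skipped entirely.
    if value in ignored:
        return SKIP
    return transform(value)

out = {}
for key, value, transform in [
    ("string0", None, str),           # null is ignored -> key is skipped
    ("string", 123, str),
    ("number", "1", float),
    ("array_cast", "123", lambda v: v if isinstance(v, list) else [v]),
]:
    result = cast(value, transform)
    if result is not SKIP:
        out[key] = result
```

This is why string0 is absent from the output in the example above: its source value is missing, so the ignored check removes the key rather than emitting a null.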
To override or define your own type casts, define a 'data_types' key. Each named item becomes an available type cast. Values present in the 'ignored' list will result in the mapping being deleted (not present in the output). If the value is not ignored, the transform will be applied to the value.
engine: bloblang_direct
version: '1.0'
data_types:
datetime:
transform: this.ts_format("2006-01-02T15:04:05Z07:00")
ignored:
- "null"
notest_string:
transform: this.string()
ignored:
- "null"
- "\"test\""
templates:
default: |
root.string00 = this.val00.apply("notest_string")
root.string0 = this.val0.apply("notest_string")
root.string = this.val1.apply("notest_string")
root.datetime = this.val2.apply("datetime")
# In: {"val00": "test", "val1": "hello", "val2": 1609459200000}
# Out: {"string": "hello", "datetime": "2021-01-01T00:00:00Z"}
The following example shows a mapping using all of the features available in the bloblang direct engine.
engine: bloblang_direct
version: '1.0'
pre:
- _pre
post:
- _post
default: base
data_types:
datetime:
transform: this.ts_format("2006-01-02T15:04:05Z07:00")
ignored:
- "null"
select:
- name: target1
match: this.tags.contains("target1")
- name: target2
match: this.tags.contains("target2")
templates:
_post: |
root.tags = this.tags.apply("array")
root.metadata.time = this.timestamp.apply("datetime")
_pre: |
root.metadata.pre = true
target1: |
root.message = "Hello from target1"
target2: |
root.message = "Hello from target2"
base: |
root.message = "Hello, world!"
# No selections match, default template is applied
# In: {"tags": ["another_tag"], "timestamp": 1609459200000}
# Out: {"message": "Hello, world!", "tags": ["another_tag"], "metadata": { "pre": true, "time": "2021-01-01T00:00:00Z" }}
# First selection matches, target1 template is applied
# In: {"tags": ["target1", "another_tag"], "timestamp": 1609459200000}
# Out: {"message": "Hello from target1", "tags": ["target1", "another_tag"], "metadata": { "pre": true, "time": "2021-01-01T00:00:00Z" }}
# Second selection matches, target2 template is applied
# In: {"tags": ["target2", "another_tag"], "timestamp": 1609459200000}
# Out: {"message": "Hello from target2", "tags": ["target2", "another_tag"], "metadata": { "pre": true, "time": "2021-01-01T00:00:00Z" }}
Authoring custom mappings can be challenging; writing bloblang inside a YAML file is unwieldy beyond the most basic of mappings. For this reason we suggest creating a workflow that allows working on each part of the mapping process in isolation. The following is one example of a potential workflow, though certainly not the only one.
By working with individual files, you can more easily add language support for Bloblang to your editor of choice, resulting in a better authoring experience. Coupled with a small yaml-based manifest file, this makes mappings far easier to author. For instance, using the full example above, we can break each template down into its own file, resulting in a filesystem structure like this:
mappings/
full_example/
_manifest.yaml
_pre.blobl
_post.blobl
target1.blobl
target2.blobl
base.blobl
While not necessary, it can be helpful to add a character prefix to the manifest and pre/post files to visually differentiate them and bring them to the top of the file list when sorting by file name.
The manifest file in this example is effectively a yaml definition of the mapping without the templates, while each template contains only the bloblang for that template.
_manifest.yaml
engine: bloblang_direct
version: '1.0'
pre:
- _pre
post:
- _post
default: base
data_types:
datetime:
transform: this.ts_format("2006-01-02T15:04:05Z07:00")
ignored:
- "null"
select:
- name: target1
match: this.tags.contains("target1")
- name: target2
match: this.tags.contains("target2")
_post.blobl
root.tags = this.tags.apply("array")
root.metadata.time = this.timestamp.apply("datetime")
With this structure in place, a script can parse the manifest, read the contents of each .blobl file and add it to the mapping definition as a 'template', and write the final mapping to the Synqly API using the name of the enclosing directory as the 'name' of the mapping.
In pseudo-code, this might look like:
const name = getArg('name')
const apiKey = getArg('apiKey')
const rootPath = path.join("mappings", escapePath(name))
const manifestStr = readFile(path.join(rootPath, "_manifest.yaml"))
const manifest = yaml.parse(manifestStr)
// handle errors and validation
const templates = {}
for (const file of readDir(rootPath)) {
  if (!file.name.endsWith(".blobl")) {
    continue
  }
  const templateName = file.name.replace(".blobl", "")
  templates[templateName] = readFile(path.join(rootPath, file.name))
  // handle errors and validation
}
manifest.templates = templates
const body = {
  name: name,
  data: yaml.stringify(manifest)
}
http.post("https://api.synqly.com/v1/mappings", body, {
  headers: {
    "Authorization": "Bearer " + apiKey,
    "Content-Type": "application/json"
  }
})
// handle errors and validation and return