...

The protocol is an implementation of the MQTT 3.1.1 standard (October 2014) and the MQTT 5.0 standard (March 2019). The MQTT protocol is a client/server protocol of the publish/subscribe type. It is simple, has little overhead, and is easy to implement. It is used for M2M (Machine to Machine) communication and in the IoT (Internet of Things) context. The MQTT server is also called the MQTT broker.
D2000 KOM implements the client part of the protocol. The protocol is implemented on TCP/IP-TCP and TCP/IP-TCP Redundant lines. MQTTS (Secure MQTT, MQTT over TLS) is also supported - either directly in the D2000 KOM process (using TLS settings on TCP/IP-TCP and TCP/IP-TCP Redundant lines) or via the stunnel utility.
For the transfer of LoRaWAN data encapsulated within the MQTT protocol, see the LoRaWAN protocol description.

...

  • Connecting to the MQTT broker as a Host Application (data consumer, in Payload Type = Sparkplug Host mode):
    • reading simple value types
    • reading arrays (with support for the Destination column)
    • writing simple value types
    • reading of template items (UDT)
    • reading of dataset items
    • support for metric aliases
    • compression support (GZIP, DEFLATE)
    • browsing - finding a list of metrics
  • Connecting to an MQTT broker as an Edge Node (data producer, in Payload Type = Sparkplug Edge Node mode):
    • publishing values with simple types
    • processing commands (NCMD, DCMD) with simple value types
    • processing commands (NCMD, DCMD) with template (UDT) items
    • processing commands (NCMD, DCMD) with dataset items
    • processing commands (NCMD, DCMD) with arrays
    • processing of the NCMD command with the Node Control/Next Server metric (connecting to the next MQTT server, if several are configured on the line)
    • processing of the STATE message with the content online=false, which comes from the Primary Host Application (disconnecting from the MQTT server and trying another one, if several are configured on the line - searching for an MQTT server with a connected Primary Host Application)
    • support for metric aliases

The communication was tested/deployed with:

...

The Sparkplug MQTT specification defines 3 groups of applications:

  • Edge Node - supports the MQTT protocol and connects to the MQTT server. It sends the server data obtained from the Device, its own data, or aggregated data.
  • Device/Sensor - represents a physical or logical device connected to an Edge Node and providing data, process data, or metrics.
  • Host Application - represents a data consumer (SCADA/MES system, Historian, analysis tool) that connects to the MQTT server, receives MQTT data from the Edge Node/Device and, if necessary, sends commands.

...

  • If the MQTT server supports Topic aliases, use MQTT Version = MQTT 5.0, and set Topic Alias Maximum to a value greater than the number of stations on the line (for Payload Type = Sparkplug Edge Node) or greater than the number of topics expected from the MQTT broker (for Payload Type = Text only / JSON). On the MQTT broker side, ensure that its Topic Alias Maximum value is greater than the number of topics that will be sent to the D2000 KOM process, so that numeric aliases can be used instead of text topics.
    Note: the Topic Alias Maximum value that the MQTT broker sends when connecting as a parameter of the CONNACK message is visible in the communication logs:
    09:31:12.086 13-05-2025|D|MQTT> CONNACK Property # 1 Topic Alias Maximum ( 34)
    09:31:12.087 13-05-2025|T|MQTT> Recv:<00><0A>
    09:31:12.088 13-05-2025|D|MQTT> CONNACK Property value= 10
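The alias mechanism negotiated above can be sketched as follows. This is a conceptual illustration of MQTT 5.0 Topic Alias resolution on the receiving side (the class and names are hypothetical, not actual D2000 KOM code): the broker first sends a PUBLISH with the full topic plus a Topic Alias property; later PUBLISH messages may carry an empty topic and only the alias number, which the client maps back to the remembered topic.

```python
class TopicAliasResolver:
    def __init__(self, topic_alias_maximum):
        self.maximum = topic_alias_maximum  # value announced in CONNECT/CONNACK
        self.aliases = {}                   # alias number -> topic string

    def resolve(self, topic, alias=None):
        if alias is not None:
            if not 1 <= alias <= self.maximum:
                raise ValueError("Topic Alias out of the negotiated range")
            if topic:                       # full form: learn the mapping
                self.aliases[alias] = topic
                return topic
            return self.aliases[alias]      # short form: look the topic up
        return topic                        # no alias used at all

resolver = TopicAliasResolver(topic_alias_maximum=10)
resolver.resolve("spBv1.0/myGroup/NDATA/myEdgeNode", alias=1)  # learns alias 1
print(resolver.resolve("", alias=1))  # prints spBv1.0/myGroup/NDATA/myEdgeNode
```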

  • Set Subscribe QoS = QoS_0 to not require acknowledgments for PUBLISH messages (unless they are needed for some reason). Moreover, if TLS is active, each small acknowledgment packet (e.g., PUBACK for QoS_1) is encrypted into a larger packet, increasing the overhead further.

  • Use aliases for metrics (for Payload Type = Sparkplug Edge Node): automatically using the Sparkplug Alias Mode = Automatic (HOBJ) parameter or manually using the SAL=alias item in the I/O tag address. If you use manual aliases, use low numbers (aliases 0-127 are encoded as 1 byte).
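The 1-byte limit for aliases 0-127 follows from the Protocol Buffers varint encoding used by the Sparkplug B payload. A minimal sketch (illustrative only) of how the wire size of an alias grows with its magnitude:

```python
def varint_size(n):
    """Number of bytes a non-negative integer occupies as a protobuf varint."""
    size = 1
    while n > 0x7F:      # each varint byte carries 7 payload bits
        n >>= 7
        size += 1
    return size

print(varint_size(127))      # prints 1  (aliases 0-127 fit into one byte)
print(varint_size(128))      # prints 2
print(varint_size(2097152))  # prints 4
```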

  • For Payload Type = Sparkplug Edge Node: if there are many changes to the output I/O tags, we recommend setting the Batch Size parameter to a value larger than the default (10) and possibly increasing the Delay parameter in the station's time parameters. This causes fewer messages to be generated (each containing multiple metrics), which reduces the overhead needed to transmit one metric. If specific I/O tags change frequently, the message may contain several values for the same I/O tag (with timestamps). The disadvantage is a delay in the values.

...

Sparkplug parameters
Keyword
Full name

Meaning

Unit
Default value

SSA

Subscribe Station Address

If this parameter has the value YES, the Topic related to the station address is also added to the SUBSCRIBE message:

  • For Payload Type = Sparkplug Host, the following topics are created from the station address:
    • NDATA/DDATA topic (e.g., spBv1.0/myGroup/NDATA/myEdgeNode or spBv1.0/myGroup/DDATA/myEdgeNode/myDevice1)
    • NBIRTH/DBIRTH topic (e.g., spBv1.0/myGroup/NBIRTH/myEdgeNode or spBv1.0/myGroup/DBIRTH/myEdgeNode/myDevice1)
    • NDEATH/DDEATH topic (e.g., spBv1.0/myGroup/NDEATH/myEdgeNode or spBv1.0/myGroup/DDEATH/myEdgeNode/myDevice1)
  • For Payload Type = Sparkplug Edge Node, the parameter is ignored - NCMD/DCMD topic is always created (e.g., spBv1.0/myGroup/NCMD/myEdgeNode or spBv1.0/myGroup/DCMD/myEdgeNode/myDevice1)
  • For Payload Type = Text only or JSON, the topic is the same as the station address (i.e., it makes no sense to set this parameter on stations that have addresses in the form of a regular expression, e.g., status/batt.*)
YES/NO
NO

SWT

Station Will Topic

Will topic of the device. If this parameter is set and a message with the same topic is received, the station will go into a communication error (StHardErr) and the values of the I/O tags will be invalidated. In this way, it is possible to emulate the standard behavior that occurs when there is a communication error with the device (even if the communication between the D2000 Kom process and the MQTT broker is functional).
If Payload Type = Sparkplug Host, it is not necessary to enter this parameter - if an NDEATH/DDEATH message arrives with a Topic that matches the station address, the station will go into a communication error.



SWP

Station Will Payload

Content of the Will message. If this parameter is set and a message with the same topic as defined by the Station Will Topic parameter is received, the Payload must also be the same. If this parameter is an empty string, matching the topic with the Station Will Topic parameter is sufficient.
Note: This parameter was implemented because MQTT brokers send messages with the same Topic when connecting/disconnecting the device, the difference being only in the payload.



FPT

Payload Type

The setting of message parsing (overriding the line parameter Payload Type):

  • Default - the line parameter Payload Type is respected
  • Text only - the message is not parsed, it is assigned to the I/O tag with the address IN_TOPIC
  • JSON - the message is parsed as JSON data. If there is an I/O tag with the address IN_TOPIC, the whole message will be assigned to it.
    If there are I/O tags with addresses JA=json_address, they will be populated with the appropriate data from the JSON message. If no such addresses exist in the message, the I/O tags will be invalidated.
  • Sparkplug - the message is parsed as Sparkplug B payload (binary coded). Based on the line parameter Payload Type, this is Sparkplug Host/Edge Node mode, Sparkplug Host by default.
Default
Text only
JSON
Sparkplug
Default

FTF

Time Field Name

If Payload Type = JSON, the name of the field with a timestamp - overriding the line parameter Time Field Name.

--

FTM

Time Mask

Mask for parsing a value in the field with a timestamp - overriding the line parameter Time Mask.

Note: Whether the time is interpreted as local or UTC with a configured offset depends on the time station parameters settings.

--

MQTT 5.0 Parameters

SRM5C

MQTT 5.0 Compression

Payload compression when sending. If the value application/gzip is set, the Content Type parameter in the PUBLISH message header is set to the value application/gzip, and the payload is compressed (GZIP), which saves transmission capacity for larger text messages. The parameter is used only for MQTT Version = MQTT 5.0.

Note: If the Content Type parameter with the value application/gzip is found when receiving a message, the payload is automatically decompressed - this parameter does not need to be set.

None

application/gzip

None
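The compression behaviour described above can be sketched in a few lines. The helper names are illustrative stand-ins for the internal logic; only the Content Type value application/gzip and the GZIP algorithm come from the parameter description.

```python
import gzip

def prepare_payload(payload, compress):
    # On send: compress the payload and mark it via the Content Type property.
    if compress:
        return gzip.compress(payload), "application/gzip"
    return payload, None

def accept_payload(payload, content_type):
    # On receive: a payload marked application/gzip is transparently decompressed.
    if content_type == "application/gzip":
        return gzip.decompress(payload)
    return payload

body = b'{"metrics": "..."}' * 50            # larger text messages compress well
wire, ctype = prepare_payload(body, compress=True)
assert len(wire) < len(body)                  # transmission capacity saved
assert accept_payload(wire, ctype) == body    # round-trip is lossless
```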

Sparkplug parameters



SR

Send Node Control/Rebirth

If Payload Type = Sparkplug Host, when connecting to the MQTT server, a command (NCMD or DCMD) with the metric 'Node Control/Rebirth' is sent to the Sparkplug station. The response should be a message (NBIRTH/DBIRTH) with all current metrics.

YES/NO
YES

BS

Batch Size

If Payload Type = Sparkplug Edge Node: The parameter specifies the maximum number of values (metrics) that are sent in one message (NDATA/DDATA). The parameter allows optimizing the number of MQTT messages (more smaller messages, or fewer larger ones with a delay). Reading values from the station (ReadAllPoints function) causes the buffered values to be sent even if the configured maximum has not yet been reached (i.e., a larger time delay in the station parameters will cause more values to be buffered).

1-1000
10
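The batching behaviour of the Batch Size parameter can be illustrated with a small sketch (a hypothetical helper, not the D2000 KOM implementation): values are buffered and a message is emitted when the batch is full or when the station is read (ReadAllPoints).

```python
class MetricBatcher:
    def __init__(self, batch_size=10):          # 10 is the documented default
        self.batch_size = batch_size
        self.buffer = []
        self.sent = []                          # stands in for published NDATA/DDATA

    def add(self, metric, value, timestamp):
        self.buffer.append((metric, value, timestamp))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):                            # also triggered by ReadAllPoints
        if self.buffer:
            self.sent.append(list(self.buffer))
            self.buffer.clear()

b = MetricBatcher(batch_size=3)
for i in range(7):
    b.add("Metric/A", i, 1729664005479 + i)
b.flush()
print([len(msg) for msg in b.sent])             # prints [3, 3, 1]
```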

SF

Store & Forward

The parameter allows you to change the Store & Forward functionality, which is defined for all stations on the line by the Store & Forward line parameter. The value Default means using the settings configured on the line.

Default
False
True
Default

SFBS

Store & Forward Batch Size

If the Store & Forward functionality is active (see the Store & Forward parameter), the parameter specifies the maximum number of historical values sent in one message (NDATA/DDATA) after communication is resumed.

1-1000
10

SAM

Sparkplug Alias Mode

If Payload Type = Sparkplug Edge Node, the parameter specifies the alias usage mode. Aliases are numeric (integer - Int64) identifiers, optionally used in data and command messages (NDATA/DDATA/NCMD/DCMD) instead of text identifiers in order to reduce the message size. If used, they are listed in the NBIRTH/DBIRTH message together with the text identifiers. Aliases must be unique among all I/O tags belonging to one station.

  • Default (SAL=alias) - Aliases are used for those I/O tags that have them configured directly in the address as a SAL item (e.g. ST=UInt16;SAL=2;SA=Second).
  • Automatic (HOBJ) - Aliases are used for all I/O tags belonging to the station. The alias value is the HOBJ (ID) of the I/O tag. If an alias is configured in the I/O tag address, it is ignored.
  • Off - Aliases are not used. If they are configured directly in the I/O tag address, they are ignored.

Default (SAL=alias)

Automatic (HOBJ)

Off

Default (SAL=alias)

SJM

Sparkplug-to-JSON Mode

The parameter defines the JSON data format for the I/O tag with the address IN_SP2JS (Sparkplug payload to JSON conversion). The JSON data is generated either in a compact mode suitable for further machine processing (fields separated by spaces) or in a form suitable for human viewing (multiline format).

Compact

Multiline


Compact

SC

Sparkplug Compression

Optional compression used for NDATA/DDATA/NCMD/DCMD/NBIRTH/DBIRTH messages. If the resulting message after compression is larger than the original (e.g., due to added information), the data is sent uncompressed.

None
DEFLATE
GZIP

None


I/O tag configuration

...

Type of I/O tag
Address
Description
I/O tags for reading data sent by the MQTT server through a PUBLISH message (usually used in Text mode or JSON mode, rarely in Sparkplug mode).
Note: Values of I/O tags are set by the D2000 KOM process in the order IN_TOPIC, IN_DATA, and IN_ID. The configuration doesn't need to contain all three I/O tags.
TxtI
IN_TOPIC
Topic of the received PUBLISH message.
TxtI
IN_DATA
Data (Payload) of the received PUBLISH message.
Ci
IN_ID
Identifier of a packet (Packet Identifier) of a PUBLISH message that depends on the level of validation (QoS).
For messages sent with QoS_0, the identifier is zero; for QoS_1 and QoS_2, it is a positive 16-bit number. On the TCP/IP-TCP line, the identifier is monotonically increasing; on the TCP/IP-TCP Redundant line, values from two monotonically increasing sequences can alternate (so they can also be repeated), so the recommendation given in the following note applies:
Note: If the MQTT server also sends messages with the QoS_0 level of validation and the ACK_ID I/O tag is configured, then we recommend activating the option New value when changing time in the Filter tab, so that repeated writing of the value 0 will cause a new value that differs only in a timestamp to be generated.
I/O tag to confirm the received data to the MQTT server.
Co
ACK_ID
If an output I/O tag with the ACK_ID address is defined, the D2000 KOM process expects confirmation of the processing of each message by writing a copy of the value of the IN_ID tag. Only then does it set the values from the next received PUBLISH message (if one was received in the meantime) into the IN_TOPIC, IN_DATA, and IN_ID I/O tags (in this order).
With the QoS_0 confirmation level, it is therefore necessary to repeatedly set the value of the ACK_ID I/O tag to 0.
If the I/O tag ACK_ID does not exist, the values are written into the IN_TOPIC, IN_DATA, and IN_ID I/O tags immediately after the PUBLISH message is received and processed.
Note: For messages received with a QoS_0 level of validation, no confirmation is sent to the MQTT server; only the values of the received PUBLISH message will be published. 
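The IN_ID/ACK_ID handshake described above can be sketched as a simple gate (a conceptual illustration only; the class and names are hypothetical): a newly received PUBLISH message is queued, and its values are exposed only after the previous IN_ID has been written back to ACK_ID.

```python
from collections import deque

class GatedDelivery:
    def __init__(self):
        self.queue = deque()      # received PUBLISH messages waiting for delivery
        self.pending_id = None    # IN_ID value waiting to be acknowledged

    def on_publish(self, topic, data, packet_id):
        self.queue.append((topic, data, packet_id))
        if self.pending_id is None:
            return self._deliver()
        return None               # a previous message is still unacknowledged

    def on_ack(self, acked_id):   # value written to the ACK_ID I/O tag
        if acked_id == self.pending_id:
            self.pending_id = None
            if self.queue:
                return self._deliver()
        return None

    def _deliver(self):
        topic, data, packet_id = self.queue.popleft()
        self.pending_id = packet_id
        return topic, data, packet_id   # sets IN_TOPIC, IN_DATA, IN_ID

g = GatedDelivery()
print(g.on_publish("status/batt1", "73", 0))   # delivered immediately
print(g.on_publish("status/batt2", "68", 0))   # prints None (queued)
print(g.on_ack(0))                             # second message delivered
```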
I/O tags for sending values to the MQTT server through a PUBLISH message.
Note: In order for the D2000 KOM process to send the PUBLISH messages to the MQTT server, both I/O tags must be defined within one station.
TxtO
OUT_TOPIC
The topic of the PUBLISH message being sent.
Note: If the I/O tag with the OUT_TOPIC address does not exist, the station address will be used directly as the Topic (if it is empty, the writing will not be performed).
TxtO

OUT_VALUE

Data (Payload) of the PUBLISH message being sent.
Note: Sending the message is performed as a result of writing to the OUT_VALUE I/O tag (i.e., if the Topic does not change, then it is sufficient to set the OUT_TOPIC point once, e.g., by using the default value).
I/O tags for parsing JSON messages

TxtI, TxtO, Qi,
Ci, Co,
Ai, Ao,
Di, Do,
TiR, ToR

JA=json_address

If Payload Type=JSON, the message is parsed as JSON data. The json_address value specifies the name of the JSON field whose value is to be assigned to the I/O tag.
For JSON messages that can be structured, the syntax level1.level2.level3 ... is supported, e.g. rx.current, and if they contain arrays (indexed from 1), the syntax level1[index1].level2[index2].level3 ... is also possible, e.g. rx.gwrx[1].time.
Since the JSON message itself can be an array, the address can also start with an index, e.g. JA=[1].batt_cell_v_avg

For other examples, see the description of the LoRaWAN protocol's Envelope type I/O tags.

Note: If you need to read the entire array, it is possible to configure the Destination column of the structure and specify an address with index 0, e.g. JA=myarray[0] or JA=struct.myarray[0].itemA
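One possible reading of the JA= address syntax, sketched as a small resolver (illustrative only; this function is not part of D2000): dotted names descend into objects, and [index] descends into arrays with 1-based indexing.

```python
import json
import re

TOKEN = re.compile(r'([^.\[\]]+)|\[(\d+)\]')   # name or [index] tokens

def resolve_json_address(message, address):
    node = json.loads(message)
    for name, index in TOKEN.findall(address):
        if name:
            node = node[name]                  # descend into an object field
        else:
            node = node[int(index) - 1]        # JA= indexes are 1-based
    return node

msg = '{"rx": {"current": 4.2, "gwrx": [{"time": "2025-05-13T09:31:12Z"}]}}'
print(resolve_json_address(msg, "rx.current"))       # prints 4.2
print(resolve_json_address(msg, "rx.gwrx[1].time"))  # prints 2025-05-13T09:31:12Z
print(resolve_json_address('[{"batt_cell_v_avg": 3.7}]', "[1].batt_cell_v_avg"))
```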

Writing to an I/O tag with a JSON address is also supported, but it must not have indexes. Examples of correct addresses for writing:

  • JA=myaddress
  • JA=mystruct.mylevel.myitem

When writing, the generated JSON contains the value itself and optionally a timestamp, if the station protocol parameter Time Field Name or the line parameter Time Field Name is set.
Note: Before writing to an I/O tag with a JSON address, a topic must first be set (an I/O tag with the OUT_TOPIC address). If the I/O tag with the OUT_TOPIC address does not exist, the station address will be used directly as the Topic (if it is empty, the writing will not be performed).

I/O tags for parsing Sparkplug messages

TxtI, TxtO, Qi,
Ci, Co,
Ai, Ao,
Di, Do,
TiR, ToR, TiA, ToA


SA=sparkplug_address

SAL=alias;SA=sparkplug_address

ST=type;SA=sparkplug_address

ST=type;SAL=alias;SA=sparkplug_address

If Payload Type = Sparkplug Host/Edge Node, the message is parsed as Sparkplug data (a binary format built on Google Protocol Buffers). Sparkplug data contains metrics that have text identifiers (sparkplug_address) or possibly numeric aliases (alias).

Reading template items is possible by specifying sparkplug_address in the format <TemplateName1><Separator><TemplateName2><Separator> ... <Separator><ItemName> where:

  • <TemplateNameX> is the name of the template/sub-template
  • <Separator> is the separator for individual levels (standard characters "->", which can be changed with the Item Separator parameter if this sequence occurs in template/item names)
  • <ItemName> is the item name of the deepest nested template

Examples of template item addresses:
SA=Template1->SubTemplate2->Item
SA=secUDT->sec

Reading dataset items (equivalent to structured variables in D2000) is possible by specifying sparkplug_address in the format <DatasetName>[<Row>]^<ColumnName> where:

  • <DatasetName> is the name of the dataset (it can also be part of a structure, e.g., Template1->SubTemplate2->Dataset3)
  • <Row> is the row number (1..N) or the "*" character. In the latter case, it is possible to configure the Destination column to which all rows are written (the value from the first row of the corresponding column is written to the I/O tag)
  • <ColumnName> is the name of the dataset column

Examples of dataset item addresses:
SA=Performance[3]^ActivePower
SA=Machine2->Parameters[1]^ActivePower
SA=DHS/Formation Data->Reservoir Parameter[*]^Gas density
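The address forms above can be parsed as in the following sketch (illustrative only, not the actual D2000 parser): the dataset form <DatasetName>[<Row>]^<ColumnName> is recognized first, otherwise the address is split into template levels by the Item Separator (default "->").

```python
import re

DATASET = re.compile(r'^(?P<name>.+)\[(?P<row>\d+|\*)\]\^(?P<column>.+)$')

def parse_sparkplug_address(address, separator="->"):
    # Dataset form: <DatasetName>[<Row>]^<ColumnName>, where Row is 1..N or "*".
    m = DATASET.match(address)
    if m:
        return {"levels": m.group("name").split(separator),
                "row": m.group("row"),
                "column": m.group("column")}
    # Template form: levels separated by the Item Separator.
    return {"levels": address.split(separator), "row": None, "column": None}

print(parse_sparkplug_address("Template1->SubTemplate2->Item"))
print(parse_sparkplug_address("Machine2->Parameters[1]^ActivePower"))
print(parse_sparkplug_address("DHS/Formation Data->Reservoir Parameter[*]^Gas density"))
```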

For output I/O tags, the value type can be explicitly specified (ST=type). Simple types are supported (not template items/dataset items):

  • Int8
  • Int16
  • Int32
  • Int64
  • UInt8
  • UInt16
  • UInt32
  • UInt64
  • Float
  • Double
  • Boolean
  • String
  • DateTime
  • Text
  • Unknown
    (undefined type)

If the value type is not specified, the default value depends on the type of I/O tag:

I/O tag type
Value type
Dout

Boolean

Co
Int64
Ao

Double

ToA

DateTime

ToR
Double
TxtO

String

Note: There is no difference between the String and Text types.
Note: For Payload Type = Sparkplug Edge Node, the D2000 KOM process sends DBIRTH/NBIRTH messages that contain only I/O tags (both input and output ones) with a defined value type. If we want to hide some (input) I/O tags, we need to set ST=Unknown in the address.

For Payload Type = Edge Node, it is also possible to specify a numerical alias (SAL=alias) as a natural number (0, 1, 2, ...) for both input and output I/O tags. The alias of the I/O tag within the station must be unique. Aliasing allows you to reduce the size of the transmitted data: in the NBIRTH/DBIRTH message, both the text identifier (sparkplug_address) and the alias are specified for each I/O tag; in the NDATA/DDATA messages, only aliases that are shorter than the text addresses are sent. Alias is only used if Sparkplug Alias Mode = "Default (SAL=alias)".
Note: If aliases are used in the I/O tag address in Payload Type = Sparkplug Host mode, they will be replaced by aliases from the NBIRTH/DBIRTH message. However, if for some reason the Edge Node does not send these messages (and sends NDATA/DDATA with aliases), aliases can be useful for matching a text name with an alias.

The PUBLISH message created during writing contains a Topic derived from the station address. The message type depends on the station address:

  • for Payload Type = Sparkplug Host, depending on whether it is an Edge Node (NCMD) or Device/Sensor (DCMD)
  • for Payload Type = Edge Node, depending on whether it is an Edge Node (NDATA) or Device/Sensor (DDATA)

The Payload of the message contains a timestamp, a value type (type), a written value (encoded according to the specified value type), and a metric name (sparkplug_address) or an alias.

TxtI

IN_SP2JS

The I/O tag is used to convert the Sparkplug payload into a JSON representation, which can then be processed, e.g., in an ESL script. Depending on the Convert Datatype/Timestamp to Text parameter, a textual representation of the value type and timestamp is also added. 
An example of value:

{"metrics":[{"datatype":3,"int_value":7338992,"name":"Corrected Vol Acc Stn","timestamp":1729664005479}],"seq":32,"timestamp":1729664005479}

After formatting into a readable form:

{
    "metrics": [
        {
            "datatype": 3,
            "int_value": 7338992,
            "name": "Corrected Vol Acc Stn",
            "timestamp": 1729664005479
        }
    ],
    "seq": 32,
    "timestamp": 1729664005479
}

An example of a more complex value containing properties and a dataset, and also displaying a textual representation of the data type (datatype_txt) and timestamp (timestamp_txt) as a result of the set parameter Convert Datatype/Timestamp to Text:

{
    "metrics": [
        {
            "datatype": 12,
            "datatype_txt": "String",
            "name": "Node Properties/Configuration",
            "string_value": "{}",
            "timestamp": 1730305529539,
            "timestamp_txt": "30-10-2024 17:25:29.539"
        },
        {
            "alias": 30064771073,
            "datatype": 5,
            "datatype_txt": "Uint8",
            "int_value": 0,
            "name": "Node Properties/Missing Param",
            "properties": {
                "keys": [
                    "usage"
                ],
                "values": [
                    {
                        "string_value": "technical information",
                        "type": 12,
                        "type_txt": "String"
                    }
                ]
            },
            "timestamp": 1730305529537,
            "timestamp_txt": "30-10-2024 17:25:29.537"
        },
        {
            "alias": 0,
            "dataset_value": {
                "columns": [
                    "topic_name",
                    "offset",
                    "length",
                    "crc"
                ],
                "num_of_columns": 4,
                "rows": [
                    {
                        "row": [
                            "N/A",
                            0,
                            0,
                            0
                        ]
                    }
                ],
                "types": [
                    12,
                    7,
                    7,
                    7
                ],
                "types_txt": [
                    "String",
                    "UInt32",
                    "UInt32",
                    "UInt32"
                ]
            },
            "datatype": 16,
            "datatype_txt": "DataSet",
            "name": "Node Control/FW Update",
            "timestamp": 1730305529537,
            "timestamp_txt": "30-10-2024 17:25:29.537"
        }
    ],
    "seq": 0,
    "timestamp": 1730305529536,
    "timestamp_txt": "30-10-2024 17:25:29.536"
}

...

Blog

You can read blogs about the MQTT protocol:


Document revisions

...