A Data Node is defined as the logical grouping of all arcs (or FIFO queues) in the graph that share a common source Port (see Figure 4.9). The interesting feature motivating this logical grouping is that arcs with a common source may in fact share the same FIFO queue, avoiding much unnecessary data copying and moving at the sole cost of making flow-control issues somewhat more complex. But then again, these issues should be handled automatically and remain transparent to anyone not wishing to understand the details of this low-level layer.
A DSPOOM Data Node acts as a connection slot to which several Processing objects can be connected. Of all the connected Processing objects, only one can be a producer; the rest act as consumers. In other words, all connected Ports except one must be Inports. As already noted, connected Ports may have different and varying sizes. The number of data tokens that an Inport needs to process is its only firing rule, and these firing rules may be interpreted as sliding windows or regions.
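The single-producer constraint above can be sketched as follows. This is a minimal illustrative model, not the actual DSPOOM API: the class and attribute names (`DataNode`, `Outport`, `Inport`, `size`) are assumptions introduced here for clarity.

```python
# Illustrative sketch of a Data Node as a connection slot: at most one
# producer (Outport), any number of consumers (Inports), each Port
# carrying its own size (its firing rule). Names are hypothetical.

class Port:
    def __init__(self, size):
        self.size = size  # tokens produced/consumed per firing

class Outport(Port):
    pass

class Inport(Port):
    pass

class DataNode:
    """Connection slot: one producer, arbitrarily many consumers."""
    def __init__(self):
        self.producer = None
        self.consumers = []

    def connect(self, port):
        if isinstance(port, Outport):
            # Enforce the rule that only one connected Port may produce.
            if self.producer is not None:
                raise ValueError("a Data Node admits only one producer")
            self.producer = port
        else:
            self.consumers.append(port)
```

Note that the connected Inports may declare different sizes; nothing in the slot itself forces them to agree, since each firing rule is interpreted independently over the shared queue.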
But a Data Node is more than a simple connection slot. First, it must be understood as a data container, since it is here that the physical FIFO queues are actually implemented. The implementation of these FIFO queues in the Data Node admits different solutions, but the most immediate one is based on a circular buffer with several reading regions and a single writing region.
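The circular-buffer solution can be sketched as below. This is a simplified illustration under assumed semantics, not the framework's actual implementation: each reader keeps its own position over a shared buffer, the single writer may not overrun the slowest reader, and a hop smaller than the window size yields the overlapping, sliding-window reads mentioned above.

```python
# Illustrative circular buffer with one writing region and several
# independent reading regions. Positions are absolute token counts;
# modular arithmetic maps them onto the physical buffer.

class CircularBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write_pos = 0   # absolute index of the next token to write
        self.read_pos = {}   # one absolute position per reading region

    def add_reader(self, reader_id):
        self.read_pos[reader_id] = 0

    def write(self, tokens):
        # The writer must not overwrite tokens the slowest reader
        # has not yet consumed.
        slowest = min(self.read_pos.values(), default=self.write_pos)
        if self.write_pos + len(tokens) - slowest > self.capacity:
            raise BufferError("writer would overrun the slowest reader")
        for t in tokens:
            self.buf[self.write_pos % self.capacity] = t
            self.write_pos += 1

    def read(self, reader_id, size, hop=None):
        # Sliding-window read: return `size` tokens, then advance by
        # `hop` (defaults to `size`); hop < size gives overlapping windows.
        pos = self.read_pos[reader_id]
        if self.write_pos - pos < size:
            raise BufferError("reader would overtake the writer")
        window = [self.buf[(pos + i) % self.capacity] for i in range(size)]
        self.read_pos[reader_id] = pos + (hop if hop is not None else size)
        return window
```

For example, with a window of 4 and a hop of 2, one reader sees overlapping regions `[0..3]`, `[2..5]`, and so on, while a second reader consumes the same data at a different size and rate; the two regions coexist without copying the data.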
The Data Node is also in charge of keeping track of the different regions connected to it. It is the Data Node's responsibility to avoid inconsistent situations and to notify the flow-control entity of any exceptional state. Reading regions may be of any size and advance at any rate; they may also overlap. Therefore, the only unwanted conditions that the Data Node has to avoid are: