Similar to video compression, which uses key-frames and the differences
between them: every N publishes we send a full report, and in between
we send only what has changed.
The approach in the probe is fairly simple: hold on to the last full
report, and for the deltas drop anything that would be merged in from
that full report anyway.
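As a rough illustration only, here is a minimal sketch of that scheme
in Go with a heavily simplified report type; every name in it (Report,
publisher, fullEvery, makeDelta, send) is hypothetical, not the
probe's real API.

    package deltas // hypothetical package, for illustration only

    // Report is a heavily simplified stand-in for a probe report:
    // just node IDs mapped to some value.
    type Report struct {
        Nodes map[string]string
    }

    // fullEvery stands in for N, the key-frame interval.
    const fullEvery = 10

    type publisher struct {
        count    int
        lastFull Report
        send     func(Report) // hypothetical transport hook
    }

    // publishNext sends a full report every fullEvery publishes, and a
    // delta against the last full report in between.
    func (p *publisher) publishNext(current Report) {
        if p.count%fullEvery == 0 {
            p.lastFull = current
            p.send(current)
        } else {
            p.send(makeDelta(p.lastFull, current))
        }
        p.count++
    }

    // makeDelta keeps only nodes that are new or changed relative to the
    // last full report; unchanged nodes are dropped because the receiver
    // will get them from that full report when it merges.
    func makeDelta(full, current Report) Report {
        delta := Report{Nodes: map[string]string{}}
        for id, v := range current.Nodes {
            if old, ok := full.Nodes[id]; !ok || old != v {
                delta.Nodes[id] = v
            }
        }
        return delta
    }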
On the receiving side, the app already merges a window of recent
reports to produce the final output for rendering, so provided N is
smaller than the size of that window we don't need to do anything
different.
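Again purely as a sketch, reusing the hypothetical Report type from
above: the rendered output is the merge of a window of recent reports,
so as long as the key-frame interval N is smaller than that window,
every window still contains at least one full report.

    // renderInput merges a window of recent reports (oldest first) into
    // the single report the renderer consumes; later reports win when
    // the same node ID appears more than once.
    func renderInput(window []Report) Report {
        out := Report{Nodes: map[string]string{}}
        for _, r := range window {
            for id, v := range r.Nodes {
                out.Nodes[id] = v
            }
        }
        return out
    }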
Deltas don't need to represent nodes that have disappeared: an earlier
full report will still contain that node, so it would be merged into
the final output anyway.
If we run out of entries to look at in the other map, return early.
Also move the equal-keys case above the less-than case, since maps
with equal keys are the common case when merging.
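A sketch of a merge over two key-sorted entry slices with both tweaks
applied; the entry type and the merge function are illustrative
stand-ins, not the real code.

    package merge // hypothetical package, for illustration only

    type entry struct {
        key, value string
    }

    // merge combines two key-sorted slices; on equal keys the entry
    // from b wins.
    func merge(a, b []entry) []entry {
        out := make([]entry, 0, len(a)+len(b)) // capacity revisited below
        i, j := 0, 0
        for i < len(a) {
            if j == len(b) {
                // The other side is exhausted: copy the rest and return early.
                return append(out, a[i:]...)
            }
            switch {
            case a[i].key == b[j].key:
                // Equal keys are the common case when merging, so test them first.
                out = append(out, b[j])
                i++
                j++
            case a[i].key < b[j].key:
                out = append(out, a[i])
                i++
            default:
                out = append(out, b[j])
                j++
            }
        }
        return append(out, b[j:]...)
    }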
Fixing a rare case that came up in a test. For this to cause a
problem, the data being decoded has to have entries out of order and
a value that is nil or omitted.
Most maps we merge have the same keys, or at least one set of keys is
a subset of the other. Therefore, allocate a result slice capable of
holding only the larger of the two key counts, rather than their sum.
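Continuing the illustrative merge sketch above, the change is only in
how the result slice is sized; maxInt here is a hypothetical helper.

    // maxInt returns the larger of two ints.
    func maxInt(a, b int) int {
        if a > b {
            return a
        }
        return b
    }

With it, the allocation in the merge sketch becomes
out := make([]entry, 0, maxInt(len(a), len(b))); append still grows
the slice in the rare case where both sides contribute many unique
keys.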
Structs like StringLatestMap now use ps.Map directly, which saves
allocating a separate object for LatestEntry.Value to point to.
The values in the ps.Map are now pointers, which avoids the extra
allocation needed to box a value type into an interface{}.
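A minimal sketch of the resulting layout, assuming the ps package is
the persistent map from github.com/mndrix/ps (only NewMap, Set and
Lookup are used); the stringLatestEntry type and the wrapper methods
are illustrative, not the real generated code. Storing a pointer means
the only allocation is the entry itself, whereas storing a struct
value in the map would also need a heap copy to box it into an
interface{}.

    package latestmap // hypothetical package, for illustration only

    import (
        "time"

        "github.com/mndrix/ps" // assumed import path for the ps package
    )

    // stringLatestEntry is an illustrative strongly-typed entry.
    type stringLatestEntry struct {
        Timestamp time.Time
        Value     string
    }

    // StringLatestMap wraps ps.Map directly; the values it stores are
    // *stringLatestEntry, so only the entry itself is allocated and no
    // extra allocation is needed to box a value type in an interface{}.
    type StringLatestMap struct {
        ps.Map
    }

    func MakeStringLatestMap() StringLatestMap {
        return StringLatestMap{ps.NewMap()}
    }

    func (m StringLatestMap) Set(key string, ts time.Time, value string) StringLatestMap {
        return StringLatestMap{m.Map.Set(key, &stringLatestEntry{Timestamp: ts, Value: value})}
    }

    func (m StringLatestMap) Lookup(key string) (string, bool) {
        v, ok := m.Map.Lookup(key)
        if !ok {
            return "", false
        }
        return v.(*stringLatestEntry).Value, true
    }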