Additional LLAPI Features

Learn about additional LLAPI networking features available in the ARDK.

Overview

On top of functionality to join sessions and send messages, the MultipeerNetworking API has some additional features to assist in creating multiplayer experiences.

Coordinated Clock

A MultipeerNetworking’s CoordinatedClock is a server-backed clock that all clients in the same session can access. When a client joins a session, its local clock automatically begins syncing with the server clock. This synchronization ensures that networking hiccups will not degrade the performance of the coordinated clock on any individual client in the session. Once synchronized (indicated by the clock’s SyncStatus changing to Stable), all synchronized clients in the session that query their clock’s CurrentCorrectedTime are guaranteed to agree within a 30 ms variance.

CurrentCorrectedTime itself has no guarantees of epoch or standard: it is simply a timestamp in milliseconds. It is better used as a stopwatch or timer than as a representation of real-world time (though the latter can be achieved by locally comparing it to another known clock).
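For example, a shared countdown can be built on the corrected time. The sketch below is illustrative, not from the ARDK reference: it assumes the clock exposes SyncStatus and CurrentCorrectedTime as described above, and the exact enum value name for a stable clock is an assumption.

using Niantic.ARDK.Networking;

// Sketch of a shared countdown: one peer picks a start timestamp a few
// seconds in the future, broadcasts it, and every peer waits until its own
// CurrentCorrectedTime reaches that timestamp. The enum name below is an
// assumption for illustration.
void ScheduleSharedStart(IMultipeerNetworking networking)
{
  var clock = networking.CoordinatedClock;

  // Only trust timestamps once synchronization has completed.
  if (clock.SyncStatus != CoordinatedClockTimestampQuality.Stable)
    return;

  // All stable peers will agree (within ~30 ms) on when this moment arrives.
  long startTime = clock.CurrentCorrectedTime + 3000; // 3 seconds from now

  // ...broadcast startTime to the other peers, then locally compare
  // clock.CurrentCorrectedTime against startTime each frame...
}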

Persistent Key-Value Store

Similar to messages, the persistent key-value store is a way for clients within a session to share data. However, unlike the temporary nature of messages, the data is stored on the server side and persists as long as the session is active. All clients that join a session will be notified of the latest state of all key-value pairs currently stored in the session, regardless of which peer set each pair, or when it was set.

Setting Key-Value Pairs

A key-value pair consists of a string (key) and byte[] (value).

using Niantic.ARDK.Networking;

void SetKeyValuePair(IMultipeerNetworking networking)
{
  string key = "my_key";
  byte[] value = new byte[10];

  // ...populate the value array with content; empty values may not be persisted...

  // Stores the above data as a key-value pair on the server
  networking.StorePersistentKeyValue(key, value);
}

See also Serializing Data for more details on data serialization.
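As a simple illustration (using standard .NET utilities rather than anything ARDK-specific), a string value can be encoded to a byte[] before being stored; the key name here is arbitrary:

using System.Text;
using Niantic.ARDK.Networking;

// Sketch: encode a string to bytes with standard .NET utilities before
// storing it. The "display_name" key is an arbitrary example.
void StoreDisplayName(IMultipeerNetworking networking, string displayName)
{
  byte[] value = Encoding.UTF8.GetBytes(displayName);
  networking.StorePersistentKeyValue("display_name", value);
}

// A reader would decode it the same way:
//   string displayName = Encoding.UTF8.GetString(args.CopyValue());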

Getting Updates

As with the messaging API, key-value updates are surfaced through the PersistentKeyValueUpdated event:

using Niantic.ARDK.Networking;
using Niantic.ARDK.Networking.MultipeerNetworkingEventArgs;
using UnityEngine;

void SubscribeToKeyValueUpdates(IMultipeerNetworking networking)
{
  networking.PersistentKeyValueUpdated += OnPersistentKeyValueUpdated;
}

// This will be fired once per key-value update, including keys that the local peer sets.
void OnPersistentKeyValueUpdated(PersistentKeyValueUpdatedArgs args)
{
  // Copy the value of the stored KV into local variables
  byte[] value = args.CopyValue();
  string key = args.Key;

  // Log some information about the key-value
  Debug.LogFormat
  (
    "Got a Persistent Key-Value. Key: {0}, Length: {1}",
    key,
    value.Length
  );

  // Do something with the value, depending on key
}

No information about the peer that actually stored the key-value pair is exposed.
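If knowing which peer wrote a value matters to your application, you can include that information in the value yourself. The sketch below prepends the local peer’s 16-byte GUID to the payload; this framing is an application-level convention for illustration, not something ARDK does for you.

using System;
using Niantic.ARDK.Networking;

// Sketch: prepend the writer's identifier to the payload so readers can
// recover it. The framing here is an application convention.
void StoreWithSenderId(IMultipeerNetworking networking, string key, byte[] payload)
{
  byte[] sender = networking.Self.Identifier.ToByteArray(); // 16-byte GUID

  var value = new byte[sender.Length + payload.Length];
  Buffer.BlockCopy(sender, 0, value, 0, sender.Length);
  Buffer.BlockCopy(payload, 0, value, sender.Length, payload.Length);

  networking.StorePersistentKeyValue(key, value);
}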

Eventual Consistency Model

The rule for multiple peers writing to the same key is “Last Write Wins,” where the winner depends on the order in which the server receives the store requests.

Furthermore, clients receiving key-value updates follow a contract of eventual consistency.

To illustrate this behavior, imagine that 3 clients are rapidly writing incrementing values to a single key (i.e., each one storing the numbers 1 through 100 sequentially). Assuming that the server receives requests in ascending order, the last store request that the server receives (the “last write”) contains a value of 100.

Even though the server has received 300 store requests (and may or may not have processed all of them), there is no guarantee that any of the clients will receive an update for any value between 1 and 99. They may receive all of them, some of them, or none of them. If any are received, they will be received in ascending order, because that is the order in which the server got the store requests.

However, as long as there are no more writes to the key, eventual consistency guarantees that the last write (100) will be received by all of the clients.
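In practice, this means a handler should treat every update as the new authoritative value for its key, rather than relying on seeing each intermediate write. A minimal sketch of that pattern:

using System.Collections.Generic;
using Niantic.ARDK.Networking.MultipeerNetworkingEventArgs;

// Sketch: cache only the latest value per key. Because updates for a key
// arrive in server order and only the last write is guaranteed to be
// delivered, overwriting the cached entry is always safe.
private readonly Dictionary<string, byte[]> _latestValues =
  new Dictionary<string, byte[]>();

void OnPersistentKeyValueUpdated(PersistentKeyValueUpdatedArgs args)
{
  _latestValues[args.Key] = args.CopyValue();
}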