Array-Based Queue on Aerospike

21 March 2015

Aerospike is a high-performance, horizontally scalable key-value store. Running 1M transactions per second without breaking a sweat, this NoSQL database delivers 10-15x price-performance gains. Notably, through flash optimizations, RAM-like performance can be achieved using local SSDs. Further, Aerospike operates as an AP data store.

Similar to Redis, Aerospike provides built-in support for complex data types such as lists and maps. If you are migrating a Redis-based application for performance rewards, you can conveniently use a complete Redis veneer. Note that when storing built-in Aerospike data structures in records, you are bound by the typical record size limit of 1MB. The Large Data Type (LDT) Lua extensions further provide support for ordered lists, sets, stacks, and maps. Using sub-records, LDTs may grow freely, limited only by the underlying storage.

Through User Defined Function (UDF) modules, data structures can be managed directly within Aerospike on the server. The Redis veneer above, on which this UDF module is based, provides a great example of this style of record management. To achieve runtime characteristics similar to those of a memory-resident linked list in Redis, you can implement efficient queuing using two array list buffers. Let's begin with the enqueue UDF in Lua. NOTE: For a more scalable approach with similar runtime properties, please see the aerospike-LDT-techniques repo.

local function l_update(rec)
    -- Persist the record, creating it on first write.
    if aerospike:exists(rec) then
        aerospike:update(rec)
    else
        aerospike:create(rec)
    end
end

function enqueue(rec, bin, value)
    local q = rec[bin]
    if q == nil then
        q = map { size = 0, pos = 0 }
    end
    if q.rear == nil then
        q.rear = list()
    end
    list.append(q.rear, value)
    q.size = q.size + 1
    rec[bin] = q
    l_update(rec)
    return q.size
end

The UDF above participates in the main flow of the transaction. It sets up a map to manage the queue data structure if one does not already exist, then appends the incoming value to a rear array list buffer before persisting the record through the local helper function l_update. Next, let's take a look at our dequeuing function, which manages our second buffer.

function dequeue(rec, bin)
    local q = rec[bin]
    if q == nil or q.size == 0 then
        return nil
    end

    if q.front == nil then
        -- Promote the rear buffer and start scanning from position 1.
        q.front = q.rear
        q.pos = 1
        map.remove(q, "rear")
    end

    local item = nil
    local last = list.size(q.front)

    if last == 1 then
        item = q.front[1]
        map.remove(q, "front")
    elseif q.pos < last then
        -- Take the item at pos and move the last item into its slot.
        item = q.front[q.pos]
        q.front[q.pos] = q.front[last]
        q.pos = q.pos + 1
        list.remove(q.front, last)
    else
        -- Past the scan point: the last position holds the oldest item.
        item = q.front[last]
        list.remove(q.front, last)
    end

    q.size = q.size - 1
    rec[bin] = q
    l_update(rec)
    return item
end

The dequeuing logic above manages the front buffer of the queue. If no front buffer exists, it will move the rear buffer to the front. As we are using array lists for our buffers, it would be a linear operation to dequeue items from the beginning of the list since all subsequent items need to be shifted left. By maintaining a position in our queue, we can dequeue more efficiently. As items are dequeued, we iteratively reverse the list by copying the item currently in the last position into the old index of the item being dequeued. Once we hit the end of the list, we dequeue from the last position until the front buffer is empty.
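To make the two-buffer discipline concrete, here is the same algorithm sketched in plain Python, outside of Aerospike. Dicts and lists stand in for the Lua map and list types; this is an illustration of the technique, not the Aerospike API.

```python
class TwoBufferQueue:
    """FIFO queue over two array buffers, mirroring the Lua UDFs above:
    enqueue appends to a rear buffer; dequeue scans a front buffer with a
    position pointer, swapping the last element into each vacated slot so
    removal always happens at the end of the list."""

    def __init__(self):
        self.front = None  # buffer currently being drained
        self.rear = None   # buffer receiving new items
        self.pos = 0       # 1-based scan position, as in the Lua version
        self.size = 0

    def enqueue(self, value):
        if self.rear is None:
            self.rear = []
        self.rear.append(value)
        self.size += 1
        return self.size

    def dequeue(self):
        if self.size == 0:
            return None
        if self.front is None:
            # Promote the rear buffer and restart the scan.
            self.front, self.rear = self.rear, None
            self.pos = 1
        last = len(self.front)
        if last == 1:
            item = self.front[0]
            self.front = None
        elif self.pos < last:
            # Take the item at pos; move the last item into its slot.
            item = self.front[self.pos - 1]
            self.front[self.pos - 1] = self.front[last - 1]
            self.pos += 1
            self.front.pop()
        else:
            # Past the scan point: remaining items sit in reverse order,
            # so the last position holds the oldest remaining item.
            item = self.front[last - 1]
            self.front.pop()
        self.size -= 1
        return item
```

Draining the queue after interleaved enqueues yields items in insertion order, and every dequeue touches at most two list slots regardless of queue depth.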

The complete queue.lua UDF module is available here. To install, use the udf-put command below.

ascli udf-put queue.lua

To work with the queue through the aql console, invoke the UDFs with the execute command.

aql> execute queue.enqueue('Q', 99) on demo.queues where pk='q'
| enqueue |
| 1       |

aql> execute queue.dequeue('Q') on demo.queues where pk='q'
| dequeue |
| 99      |

Using two buffers, we can achieve runtime characteristics similar to those of a linked list: each enqueue is a single append to the rear buffer, and each dequeue performs at most one swap and one removal from the end of the front buffer, so both operations run in constant (amortized) time. As this queue example stores the data structure within a single Aerospike record, it is bound by the aforementioned record storage limits.


Thanks to Peter Milne for creating the excellent Redis veneer article on which this UDF module is based. Additionally, please see Peter's aerospike-LDT-techniques repo for FIFO queuing utilizing LDTs. In contrast to the in-record storage limitations discussed above, the LDT approach scales queuing by utilizing an LLIST as a map, with counters positioned at the head and tail.

By Aaron Dunnington