We have a nice Christmas present for you: Lily 1.1 is out, and there are improvements for everyone: developers, administrators, and Lily hackers. Read more about the exciting new stuff in Lily 1.1 below!
Lily adds a high-level data model on top of HBase. Originally, the model was a simple list of fields stored within records, but we have added some field types that make that model a whole lot more interesting. The first addition is the RECORD value type. You can now store records inside records, which is useful for storing structured data in fields. For indexing purposes, you can address sub-record data as if it were linked records, using dereferencing.
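To make the idea concrete, here is a minimal, self-contained sketch of nested records with dotted-path dereferencing. The `ToyRecord` class and its `deref` method are invented for this illustration and are not the actual Lily API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for Lily's RECORD value type: a record whose fields
// can themselves hold other records. Not the real Lily API.
class ToyRecord {
    private final Map<String, Object> fields = new HashMap<>();

    ToyRecord set(String name, Object value) {
        fields.put(name, value);
        return this;
    }

    // Dereference a dotted path such as "author.name" through nested records,
    // similar in spirit to how an indexer can address sub-record data.
    Object deref(String path) {
        ToyRecord current = this;
        String[] parts = path.split("\\.");
        for (int i = 0; i < parts.length - 1; i++) {
            current = (ToyRecord) current.fields.get(parts[i]);
        }
        return current.fields.get(parts[parts.length - 1]);
    }

    public static void main(String[] args) {
        ToyRecord author = new ToyRecord().set("name", "Ada");
        ToyRecord book = new ToyRecord()
                .set("title", "On Computation")
                .set("author", author);  // a record stored inside a record
        System.out.println(book.deref("author.name"));  // prints "Ada"
    }
}
```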
Two other cool new value types are LIST and PATH, which allow for far more flexible modeling than the previous multi-value and hierarchy field properties. At the schema level, we adopted a generics style of defining value types, for instance LIST<LIST<STRING>> defines a field that will contain a list of lists of strings. Finally, we also added a BYTEARRAY value type for raw data storage.
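The generics-style notation maps naturally onto Java's own generics: a field typed LIST&lt;LIST&lt;STRING&gt;&gt; corresponds to a `List<List<String>>` value on the Java side. A small sketch (the class name is ours, purely illustrative):

```java
import java.util.List;

// Illustrative: the Java-side shape of a field typed LIST<LIST<STRING>>.
class ValueTypeDemo {
    static List<List<String>> sampleValue() {
        // a list of lists of strings
        return List.of(
                List.of("hbase", "nosql"),
                List.of("java"));
    }

    public static void main(String[] args) {
        System.out.println(sampleValue().get(0).get(1));  // prints "nosql"
    }
}
```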
If you're familiar with multi-user environments, you surely know about the problem of concurrent updates. For these situations, Lily now provides a lock-free, optimistic concurrency control feature we call conditional updates. The update and delete methods allow you to pass a list of mutation conditions that must be satisfied before the update or delete is applied.
For concurrency control, you can require that a field still has the value it had when the record was read; if another client changed it in the meantime, the update fails instead of silently overwriting their change.
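The compare-and-set idea behind this can be sketched in a few lines of plain Java. This is a conceptual illustration with invented names (`ToyStore`, `conditionalUpdate`), not Lily's actual mutation-condition API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Toy illustration of optimistic, condition-checked updates.
class ToyStore {
    private final Map<String, Map<String, Object>> records = new HashMap<>();

    void put(String id, Map<String, ?> fields) {
        records.put(id, new HashMap<>(fields));
    }

    /** Apply the update only if {@code field} still holds {@code expected}. */
    boolean conditionalUpdate(String id, String field, Object expected,
                              Map<String, ?> update) {
        Map<String, Object> current = records.get(id);
        if (current == null || !Objects.equals(current.get(field), expected)) {
            return false;  // condition failed: record changed since it was read
        }
        current.putAll(update);
        return true;
    }
}
```

A client that read `title = "Draft"` would pass `"Draft"` as the expected value; a concurrent writer who got there first causes the condition to fail, and the client can re-read and retry.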
Lily 1.1 ships with a toolchest for Java developers who want to run unit tests against an HBase/Lily application stack. The stack can be launched embedded or externally, with simple scripts straight out of the Lily distribution. You can also request a 'state reset', clearing a single-node instance of Lily for subsequent test runs. Yes, you can now run Lily, HBase, ZooKeeper, HDFS, MapReduce and Solr in a single VM, with a single command.
For the fearless Lily repository hacker, we offer two hooks to expand the functionality of the Lily server process. There are decorators, which can intercept any CRUD operation to run side-effect operations before or after execution (like modifying a field value before it is actually committed).
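The pattern at work is the classic decorator: wrap the real operation, run your side effect, then delegate. The sketch below uses invented names (`RecordStore`, `UppercasingDecorator`) to show the shape of a pre-execution hook; it is not Lily's actual server-side SPI.

```java
// Illustrative decorator around a CRUD-style operation.
interface RecordStore {
    String create(String value);
}

// The "real" store: here it just accepts the value as-is.
class InMemoryStore implements RecordStore {
    public String create(String value) {
        return value;
    }
}

// Pre-execution hook: modify the field value before it is committed.
class UppercasingDecorator implements RecordStore {
    private final RecordStore delegate;

    UppercasingDecorator(RecordStore delegate) {
        this.delegate = delegate;
    }

    public String create(String value) {
        String modified = value.toUpperCase();  // side effect before commit
        String result = delegate.create(modified);
        // a post-execution hook (auditing, notifications, ...) could go here
        return result;
    }
}
```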
The global rowlog queue is now distributed across a pre-split table, with inserts and deletes going to several region servers. This leads to superior performance on write- or update-heavy multi-node cluster setups.
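For readers unfamiliar with the technique: spreading writes over a pre-split HBase table is commonly done by prefixing row keys with a hash-derived bucket id, so that sequential inserts land on different regions. A minimal sketch of that general idea (bucket count and key format are arbitrary choices for this example, not Lily's actual rowlog layout):

```java
// Illustrative key salting: prefix each key with a stable bucket id so
// writes spread across the regions of a pre-split table.
class Salting {
    static final int BUCKETS = 16;  // one bucket per pre-created split, say

    static String saltedKey(String key) {
        int bucket = Math.floorMod(key.hashCode(), BUCKETS);  // always 0..15
        return String.format("%02d-%s", bucket, key);
    }

    public static void main(String[] args) {
        System.out.println(saltedKey("msg-000001"));
        System.out.println(saltedKey("msg-000002"));
    }
}
```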
Our first customers (*waves to our French friends*) found our API to be a tad too verbose and suggested a Builder pattern approach. We listened, and now unveil a totally new (but optional) method-chaining Builder API for Java users.
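To give a flavor of the method-chaining style, here is a toy builder. The names (`RecordBuilder`, `recordType`, `field`, `create`) are invented for this sketch and do not claim to match the real Lily builder API.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy method-chaining builder in the spirit of a fluent record API.
class RecordBuilder {
    private final Map<String, Object> fields = new LinkedHashMap<>();
    private String type;

    RecordBuilder recordType(String type) {
        this.type = type;
        return this;  // returning 'this' is what enables chaining
    }

    RecordBuilder field(String name, Object value) {
        fields.put(name, value);
        return this;
    }

    Map<String, Object> create() {
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("type", type);
        record.putAll(fields);
        return Collections.unmodifiableMap(record);
    }
}
```

Usage then collapses into one readable expression instead of a series of setter calls:

```java
Map<String, Object> book = new RecordBuilder()
        .recordType("Book")
        .field("title", "Lily in Action")
        .field("year", 2010)
        .create();
```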
For Lily Enterprise customers, we rewrote our cluster installer using Apache Whirr, making us one of the first serious adopters of this exciting cloud and cluster management tool. With it, installing Lily on many nodes becomes a breeze. Here's a short movie showing off the new installer.
Thanks to better parallelization, Lily has become considerably faster. You can now comfortably throw more clients at one Lily cluster and see combined throughput scale fast.
All in all, Lily 1.1 was a great release to prepare. We hope you have as much fun using Lily 1.1 as we had building it. Check it out here: www.lilyproject.org.