Redis vs RavenDB – Benchmarks for .NET Client NoSQL Solutions

These Redis vs RavenDB benchmarks have been made into a bar chart for a better visualization.

Seeing that Redis v2.0 has just been released and Oren Eini (aka @ayende) has just checked in performance optimizations that show a 2x speed improvement for raw writes in RavenDB, I thought it was a good time to run a benchmark pitting these two popular NoSQL data stores against each other.

Benchmarks Take 2 – Measuring write performance

For the best chance of an apples-to-apples comparison I copied RavenDB's benchmark solution and modified it only slightly, to slot in the equivalent Redis operations. The modified solution is available here. Redis was also configured to run in its safest mode, where it keeps an append-only transaction log with the fsync option, so an operation does not complete until its transaction log entry is written to disk. This lets Redis closely match RavenDB's default behaviour. Enabling this behaviour in Redis is simply a matter of uncommenting the lines below in redis.conf:

appendonly yes
appendfsync always

To use this new configuration simply run ‘redis-server.exe /path/to/redis.conf’ on the command line.
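
For reference, the append-only fsync policy is itself configurable in redis.conf. A sketch of the three available settings (the annotations are mine, not from the shipped config file):

```conf
appendfsync always    # fsync after every write: safest, slowest
appendfsync everysec  # fsync once per second: a good compromise
appendfsync no        # let the OS decide when to flush: fastest, least safe
```

Only one of these directives should be left uncommented at a time.
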
The other change I made for this new set of benchmarks was to remove batching from the Redis benchmark, since it's an accidental complexity that is neither required nor useful for the Redis client.

Here are the benchmarks with these new changes in place:

Which for this scenario show that:

Redis is 11.75x faster than RavenDB

Note: The benchmarks here are of Redis running on a Windows server through the Linux API emulation layer, Cygwin. Expect better results when running Redis on Unix servers, where it is actively developed and optimized. The Cygwin build of redis-server is understood to be 4-10x slower than the native Linux version, so expect results to be much better in production.

I attribute the large discrepancy between Redis and RavenDB to the fact that Redis doesn't use batches, so it only pays the 'fsync penalty' once instead of once per batch.

The ‘appendfsync always‘ mode is not an optimal configuration for a single process, since Redis has to block waiting for each transaction log entry to be written to disk; a saner configuration is ‘appendfsync everysec‘, which writes to the transaction log asynchronously. Running the same benchmark using the default configuration yields the following results:

Which is a 39% improvement over the previous benchmarks where now:

Redis is 16.9x faster than RavenDB

Which, unless I hear otherwise, should make this the fastest NoSQL solution available for .NET or Mono clients.

Measuring raw write performance using Redis is a little unfair since it has a batch operation, MSET, specifically optimized for this task. But that is just good practice: whenever you cross a process boundary you should batch requests to minimize the number of calls, minimizing latency and maximizing performance.
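
To make the batching point concrete, here is a rough Python sketch of Redis's wire protocol framing (the function and key names are mine, for illustration), showing how MSET packs every key/value pair into a single request, i.e. one network round trip, where individual SETs would cost one round trip each:

```python
def encode_command(*args):
    """Frame one Redis command in the RESP wire format."""
    out = [f"*{len(args)}\r\n"]
    for arg in args:
        out.append(f"${len(arg)}\r\n{arg}\r\n")
    return "".join(out)

pairs = {"user:1": "Joe", "user:2": "Jane", "user:3": "Jill"}

# One SET per key: one request (and one network round trip) each
individual = [encode_command("SET", k, v) for k, v in pairs.items()]

# MSET: every key/value pair in a single request, one round trip total
flat = [part for kv in pairs.items() for part in kv]
batched = encode_command("MSET", *flat)

print(f"{len(individual)} round trips vs 1 round trip ({len(batched)} bytes)")
```

Over a network, collapsing N round trips into one is where most of the latency savings come from, regardless of which client library you use.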

Even though performance is important, it's not the only metric when deciding which NoSQL database to use. If you have a lot of querying and reporting requirements that you don't know up front, then a document database like RavenDB, MongoDB or CouchDB is a better choice. Likewise, if you have minimal querying requirements and performance is important, then you would be better suited to using Redis. Either way, having a healthy array of vibrant choices available benefits everybody.

Notes about these benchmarks

Since these benchmarks just write entities in large batches to a local Redis or RavenDB instance using a single client, I don't consider this indicative of a *real-world* test but rather a measure of raw write performance, i.e. how fast each client can persist 5,163 entities in its respective datastore.

A better *real-world* test would access the server over the network using multiple concurrent clients, benchmarking typical usage of a real-world application rather than just the raw writes done here.

So why is Redis so fast?

Based on the comments below there appears to be some confusion about what Redis is and how it works. Redis is a high-performance data structures server written in C that operates predominantly in-memory, routinely persists to disk and maintains an append-only transaction log for data integrity; both behaviours are configurable.

For redundancy each instance has built-in support for replication, so you can turn any Redis instance into a slave of another, which can also be trivially configured at runtime. It also features its own Virtual Memory implementation, so if your dataset exceeds your available memory, infrequently used values are swapped out to disk whilst the hot values remain in memory.
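
As an illustration, turning an instance into a replica is a single directive in redis.conf (the address below is a made-up example); the same can be done at runtime with the SLAVEOF command:

```conf
# Make this instance a replica of the master at the given address
slaveof 192.168.1.10 6379
```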

Like other high-performance network servers, e.g. Nginx (the world's fastest HTTP server), Node.js (a popular, very efficient web framework for JavaScript), Memcached, etc., it achieves maximum efficiency by having each Redis instance run in a single process where all IO is asynchronous and no time is wasted context-switching between threads. To learn more about this architecture, check out Douglas Crockford's (of JavaScript and JSON fame) informative speech comparing event loops vs threading for simulating concurrency.

It achieves concurrency by being really fast and integrity by making all operations atomic. You are not just limited to the built-in operations either, as you can compose any combination of Redis commands together and process them atomically in a single transaction.
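
The composed-transaction idea can be sketched with a toy, single-threaded in-memory store in Python (illustrative only, not real client or server code): commands issued between MULTI and EXEC are queued, then applied back-to-back, so no other client's command can interleave with them.

```python
class ToyRedis:
    """Toy single-threaded store illustrating MULTI/EXEC semantics."""

    def __init__(self):
        self.data = {}
        self.queue = None  # non-None while inside a MULTI block

    def execute(self, *args):
        cmd, rest = args[0].upper(), args[1:]
        if cmd == "MULTI":
            self.queue = []
            return "OK"
        if cmd == "EXEC":
            queued, self.queue = self.queue, None
            # Queued commands run back-to-back in the single event loop,
            # so no other client's command can interleave with them
            return [self.apply(c) for c in queued]
        if self.queue is not None:
            self.queue.append((cmd,) + rest)
            return "QUEUED"
        return self.apply((cmd,) + rest)

    def apply(self, cmd_args):
        cmd = cmd_args[0]
        if cmd == "SET":
            self.data[cmd_args[1]] = cmd_args[2]
            return "OK"
        if cmd == "INCR":
            self.data[cmd_args[1]] = int(self.data.get(cmd_args[1], 0)) + 1
            return self.data[cmd_args[1]]
        raise ValueError(f"unsupported command: {cmd}")

r = ToyRedis()
r.execute("MULTI")
r.execute("SET", "stats:visits", "0")
r.execute("INCR", "stats:visits")
print(r.execute("EXEC"))  # both commands applied as one atomic unit
```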

Effectively if you wanted to create the fastest NoSQL data store possible you would design it just like Redis and Memcached. Big kudos to @antirez for his continued relentless pursuit of optimizations resulting in Redis’s stellar performance.

The Redis Client, JSON and the Redis Admin UI

Behind the scenes the Redis Client automatically stores the entities as JSON string values in Redis. Thanks to the ubiquitous nature of JSON I was easily able to develop a Redis Admin UI which provides a quick way to navigate and introspect your data in Redis. The Redis Admin UI runs on both .NET and Linux using Mono – A live demo is available here.

Download Benchmarks

The benchmarks (minus the dependencies) are available in ServiceStack’s svn repo.

I also have a complete download including all dependencies available here: (18MB)

Gaining in Popularity

Redis is sponsored by VMware, has a vibrant community behind it and has been gaining a lot of popularity lately. Already with a client library for every popular language in active use today, it is gaining momentum outside its Linux roots, with Twitter now starting to make use of it, as well as popular .NET shops like the StackOverflow team taking advantage of it.

Unlike RavenDB and MongoDB, which are document-oriented data stores, Redis is a 'data structures' server which, although it lacks some of the native querying functionality found in document DBs, encourages you to leverage its comp-sci data structures to maintain your own custom indexes to satisfy your querying needs.

Try Redis in .NET

If these results have piqued your interest in Redis I invite you to try it out. If you don't have a Linux server handy, you can still get started by trying one of the Windows server builds.

Included with ServiceStack is a feature-rich C# client which provides a familiar and easy-to-use C# API which, like the rest of ServiceStack, runs on .NET and on Linux with Mono.

Useful resources for using the C# .NET Client

I also have some useful documentation to help you get started:
  • Designing a NoSQL Database using Redis
    • A refactored example showing how to use Redis behind a repository pattern
  • Painless data migrations with schema-less NoSQL datastores and Redis
  • How to create custom atomic operations in Redis
  • Publish/Subscribe messaging pattern in Redis
  • Achieving High Performance, Distributed Locking with Redis

How to host an HTTP + Web Services server in an iPhone with MonoTouch

Yep, the title of this post is correct! With a little hacking to get around MonoTouch's limitations I've managed to shoe-horn ServiceStack's HttpListener and host it inside an iPhone with the help of MonoTouch. Basically, in not so many words, I made this happen:

Basically the above is the result of hosting a web server on your iPhone, making it possible to view your iPhone content with any web browser. In addition to a static HTTP server there is also an embedded version of ServiceStack, turning your iPhone into a Web Services server. This happens to be a pretty convenient way to get at your data. As a basic example I created a simple Web Service returning all the contacts in your iPhone Address Book:

namespace EmbeddedServiceStack
{
    public class ContactsService : IService<Contacts>
    {
        //Singleton defined in AppHost, injected by the Funq IOC
        public ABAddressBook AddressBook { get; set; }

        public object Execute(Contacts request)
        {
            var response = new ContactsResponse();

            foreach (var person in AddressBook.GetPeople())
            {
                var emails = person.GetEmails().GetValues();
                response.Contacts.Add(new Contact {
                    FirstName = person.FirstName,
                    LastName = person.LastName,
                    Email = emails.Length > 0 ? emails[0] : null,
                });
            }

            return response;
        }
    }
}

Which, once you start the AppHost's HttpListener, lets you view your data using RESTful HTTP requests like so:

Some of you might be thinking: WTF dude, why the hell would you want to do this to an iPhone?? Actually I thought that for a while too! However, in a recent thread on the mailing list Miguel de Icaza identified some useful use cases where this might be a good idea.

For one, you can sprinkle a little JavaScript into your HTML page to display the list of contacts below:

Which, granted, doesn't look like much; however, this task is infinitely less tedious with JavaScript.

Having thought about it a little more, I can think of a few more potential use-cases:

  • Giving you more flexibility to access and export your iPhone’s data without needing to use iTunes
  • Develop PhoneGap-like iPhone apps without PhoneGap 🙂
  • Use a full-featured desktop browser to view your iPhone data rather than its constrained mobile interface
  • Access your iPhone from over the Internet (allowing async callback duplex requests)
  • Provide easy 2-way communication between iPhone Apps

Other potential use-cases heard on the tweet vine:

  • Mobile device management (Landesk like) and Software testing (@itdnext)

Developer Notes

How the Web Server works

Basically any URL with a Web Service prefix is interpreted as a Web Service and delegated to ServiceStack to process.


All other URLs are treated like a standard HTTP GET request, where the server looks for a matching file in the project's www/ resource directory and simply copies it to the Response OutputStream. Requests that end with a '/' (i.e. a directory path) are treated like '/default.html'.
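
The resolution rules above can be sketched as follows (a simplification in Python; the prefix and folder names are illustrative, not the actual implementation):

```python
SERVICE_PREFIX = "/ServiceStack/"  # illustrative web-service prefix
WWW_ROOT = "www"                   # the project's static resource folder

def resolve(url_path):
    """Decide how the embedded server handles a request path."""
    if url_path.startswith(SERVICE_PREFIX):
        return ("web-service", url_path)       # delegate to ServiceStack
    if url_path.endswith("/"):                 # a directory path...
        url_path += "default.html"             # ...serves its default.html
    return ("static-file", WWW_ROOT + "/" + url_path.lstrip("/"))

print(resolve("/"))             # ('static-file', 'www/default.html')
print(resolve("/css/app.css"))  # ('static-file', 'www/css/app.css')
```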

Note: In order to have your static files packaged and deployed with your application you need to set the 'Build Action' to 'Content' for each file.
Also, if you don't see any contacts you need to use the Contacts app in the iPhone Simulator and add some yourself.

MonoTouch Quirks

Since I found the JsonDataContractSerializer in Mono to be unreliable, I've had to write and include my own JSON serializer, whose heavy reliance on generics requires some extra attention when running in a no-JIT environment like MonoTouch. Basically we have to tell the MonoTouch compiler what concrete generic classes to generate, which we do by registering all DTOs like so:

private void RegisterDtoTypes_RequiredWhenTheresNoJit()

Download the Example project

For those interested, the above sample project and all the embedded ServiceStack libraries required to build your own iPhone HTTP + Web Services solution are available at:

Since I only have the trial version of MonoTouch I have only been able to verify that this works in the iPhone Simulator. So if anyone with the full version of MonoTouch gets this running on an iOS device, can you let me know? Feel free to file any issues you run into at the project's website:

Happy Hacking!

Further Reading

A tutorial showing you how to create and call web services from MonoTouch is available here:

Introducing the Redis Admin UI

Confident that I've optimized ServiceStack's JSON web services performance enough with the adoption of my latest efforts in developing .NET's fastest JSON serializer, I'm now turning my attention towards creating apps that take advantage of it.

I'm a firm believer that performance is one of, if not the most important feature in developing an app that most users will love and use on a regular basis. It's the common trait amongst all the apps and websites I regularly use, and is why I'm continually seeking software components and/or techniques that can help make my software run faster, or, whenever there is no alternative, developing them myself. Having said this, I'm not a complete perf maniac, and find that it's important to strike a balance between productivity, utility and performance, which is what has effectively kept me tied to the C# language for all my server development.

Redis, Sweet Redis

One of the exciting movements to have occurred in recent times is the introduction of the NoSQL suite of data persistence solutions. There are numerous impressive NoSQL solutions out there, but the one that I have been most interested in is Redis, which from the project's website:

is an advanced key-value store. It is similar to memcached but the dataset is not volatile, and values can be strings, exactly like in memcached, but also lists, sets, and ordered sets. All this data types can be manipulated with atomic operations to push/pop elements, add/remove elements, perform server side union, intersection, difference between sets, and so forth.
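
To picture those value types, each has a familiar in-memory analogue; a toy illustration in Python (not real Redis client code), with the corresponding Redis commands noted in comments:

```python
# In-memory analogues of the Redis value types described above
string_value = "hello"               # GET / SET
a_list       = ["a", "b", "c"]       # LPUSH / RPUSH / LPOP / RPOP
a_set        = {"a", "b", "c"}       # SADD / SREM / SMEMBERS
sorted_set   = {"a": 1.0, "b": 2.5}  # ZADD: members ranked by score

# The server-side set operations map directly onto set algebra
other = {"b", "c", "d"}
print(sorted(a_set | other))  # SUNION -> ['a', 'b', 'c', 'd']
print(sorted(a_set & other))  # SINTER -> ['b', 'c']
print(sorted(a_set - other))  # SDIFF  -> ['a']
```

The difference, of course, is that in Redis these structures live on the server and every operation on them is atomic.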

I found this fascinating since it provides an extremely fast data store (that gets routinely persisted) supporting rich data structures that can be safely accessed by multiple app servers concurrently, since all operations are atomic. Sweet, just what I always wanted! Although to make it productive I developed a C# Redis Client that, apart from supporting Redis's entire feature set, also provides a high-level typed API that can persist any .NET POCO type, which gets stored as JSON in Redis.

The Redis Admin UI

One of the disadvantages that comes with making use of shiny new tech is that there is sometimes not a lot of tooling available for it. Despite its vibrant community this is also true for Redis, where although it sports a rich command-line interface (Unix software is good like this), the GUI admin tools are somewhat lacking. Not to worry, I actually needed a project to work on to learn about Google's closure-library anyway, so this ended up being a pretty good fit.


Before we get into more detail it's probably a good idea to showcase some screenshots of where it's currently at:
Note: You can also try it out live:

Admin tab showing redis instance info

Aggregate view of complex types

View single complex type

Redis Web Services

In order to be able to access Redis from a web page, some JSON web services are in order. I could've just implemented the services required by the Admin UI, although I wanted to flex some ServiceStack muscle so decided to create web services for all of Redis's operations, which on final count totalled nearly 100 web services that I ended up knocking out over a single weekend. One of the benefits of using ServiceStack to develop your web services is that you get SOAP, XML, JSON and JSV endpoints for free. So after spending the next couple of days creating unit tests to provide 100% coverage, the back-end was complete, thus giving Redis CouchDB-like powers by allowing it to be accessed from any HTTP client.

Those interested in the Redis Web Services component can check out a live preview; the complete list of available web services is available here:

And some examples on how to call them:

Ajax UI

With the web services in place, it is now possible to build pure static html/js/css Ajax apps talking directly to the server's JSON data services, with no other web framework required!
The closure-library, although not as terse or as initially productive as jQuery, really shines in building large applications. It has a good framework for developing and re-using JavaScript classes and modules, and comes with a set of rich, well-tested, cross-browser-compatible widgets. So within a couple of weeks of hacking on the client I was able to churn out a fairly useful feature set:
  • A TreeView displaying a hierarchical view of the filtered Redis keyset
  • Deep linking support so you can refresh, save or send a link of the entry you’re looking at
  • Back and forward button support
  • A tabular, aggregate view of all your ‘grouped keys’
  • An auto-complete filter to filter the tabular data
  • Updating and deleting of string values
  • Identifying the type, viewing and deleting of all keys
  • An admin interface to view redis server stats and the ability to destroy and rebuild the entire redis instance

Restrictions and Assumptions

In order to provide a useful Generic UI I’ve had to make a few assumptions on conventions used. Coincidentally these also happen to be the same conventions that the ServiceStack’s C# Redis Client uses when storing data :-).

  1. Key parts should be separated with a ':'
  2. Keys within the same group are expected to be of the same type
  3. Complex types are stored as JSON

There are likely to be others I’ve subconsciously used so I’ll make an effort to keep this list of assumptions live.
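
Those conventions can be sketched as follows (illustrative Python with made-up keys, not the actual Admin UI code): splitting on the last ':' yields the group a key belongs to, which is what drives the aggregate views above.

```python
from collections import defaultdict

keys = [
    "urn:user:1", "urn:user:2",  # same group => same (JSON-serialized) type
    "urn:order:1001",
]

groups = defaultdict(list)
for key in keys:
    group = key.rsplit(":", 1)[0]  # key parts are separated by ':'
    groups[group].append(key)

print(dict(groups))
# {'urn:user': ['urn:user:1', 'urn:user:2'], 'urn:order': ['urn:order:1001']}
```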

Download and installation

Like the rest of ServiceStack, the Redis Admin UI is open source, released under the liberal New BSD licence.

In keeping with tradition with most of my software, the Redis Admin UI works cross-platform: on Windows with .NET, and on Linux and OSX using Mono (the live demo is hosted on CentOS/Nginx).
I've had an attempt at some basic installation instructions, which are included in the download and viewable online.

The latest version is hosted on ServiceStack's code project site at the following url:

The Admin UI is highly customizable and very hackable since it's written entirely in JavaScript, so if you are interested in customizing the UI for your own purposes I invite you to get started by downloading the development version from svn trunk.

Fastest JSON Serializer for .NET released

New! Benchmarks graphs are now available to better visualize the performance of ServiceStack’s JSON and JSV text serializers.

Due to my unhealthy obsession for producing software that runs fast, I’m releasing a shiny new JSON Serializer for .NET into the wild!

ServiceStack JsonSerializer is based upon my previous efforts of inventing a fast, new compact text serializer with `TypeSerializer` and its JSV Format. Essentially I just refactored the current JSV code-base to support multiple serializers and then simply added an adapter for the JSON format. Unfortunately, in my quest to add a JSON serializer to the feature list I've given up a little perf in the JSV TypeSerializer by not being able to apply more aggressive static-type optimizations and method in-lining. However, I ended up preferring this option rather than branching the existing code-base into two fairly large, almost identical code-bases, doubling my efforts whenever I want to add a new feature or fix a bug. The good news is that the library is still Reflection.Emit-free, so future optimizations are still possible!

Anyway, based on the latest Northwind database benchmarks, perf didn't suffer too much, as JSV is still the fastest text serializer for .NET, with the newly released JSON serializer not too far behind 🙂
The benchmarks show the new JSON serializer is now over 3.6x faster than the BCL's JsonDataContractSerializer and around 3x faster than NewtonSoft JSON.NET (the previous fastest JSON serializer benchmarked). (Other popular JSON serializers LitJSON and JayRock were also benchmarked, although both were found to be slower and buggier than the previous options.)

It also happens to be 2.6x faster and 2.6x more compact than the fastest type serializer in the BCL, Microsoft's Xml DataContractSerializer, giving yet another reason for JSON lovers to prefer it over XML.

Serializer                              Payload size  Larger than best  Avg        Slower than best
Microsoft DataContractSerializer        4097          4.68x             838.1957   6.93x
Microsoft JsonDataContractSerializer    1958          2.24x             1125.8554  9.31x
Microsoft BinaryFormatter               4927          5.62x             1113.4011  9.21x
NewtonSoft.Json                         2014          2.30x             947.2970   7.83x
(best)                                  876           1x                120.9475   1x
ServiceStack TypeSerializer             1549          1.77x             270.0429   2.23x
ServiceStack JsonSerializer             1831          2.09x             312.6265   2.58x

(Combined results based on the Northwind database benchmarks. Payload size in bytes / Times in milliseconds)

Features and Usages
Effectively the JSON serializer is optimized for one task: to serialize and deserialize types fast. Where possible I try to remain compatible with the BCL's JsonDataContractSerializer, for example by choosing to serialize DateTime using the WCF JSON format (i.e. /Date(1234+0000)/).
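
For illustration, the WCF JSON format encodes a DateTime as milliseconds since the Unix epoch wrapped in /Date(...)/; a rough Python sketch of that round trip (function names are mine, and this ignores the optional timezone-offset suffix shown above):

```python
import re
from datetime import datetime, timezone

def to_wcf_date(dt):
    """Serialize a UTC datetime as a WCF-style JSON date string."""
    ms = int(dt.timestamp() * 1000)  # milliseconds since the Unix epoch
    return f"\\/Date({ms})\\/"       # escaped as it appears inside JSON

def from_wcf_date(s):
    ms = int(re.search(r"Date\((-?\d+)", s).group(1))
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

stamp = to_wcf_date(datetime(2010, 8, 17, tzinfo=timezone.utc))
print(stamp)  # \/Date(1282003200000)\/
```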

That being said, the serializer tries to serialize as much as possible while at the same time being non-invasive and configuration free:

  • Serializes / De-serializes any .NET data type (by convention)
    • Supports custom, compact serialization of structs by overriding ToString() and static T Parse(string) methods
    • Can serialize inherited, interface, anonymous types or ‘late-bound objects’ data types
    • Respects opt-in DataMember custom serialization for DataContract dto types.

Developers wanting more features like outputting 'indented JSON' or building a dynamic JSON structure with LINQ to JSON would still be better off with the popular NewtonSoft JSON.NET.

In keeping with tradition I’ve retained a simple API:

string JsonSerializer.SerializeToString<T>(T value);
void JsonSerializer.SerializeToWriter<T>(T value, TextWriter writer);
void JsonSerializer.SerializeToStream<T>(T value, Stream stream);
T JsonSerializer.DeserializeFromString<T>(string value);
T JsonSerializer.DeserializeFromReader<T>(TextReader reader);
T JsonSerializer.DeserializeFromStream<T>(Stream stream);

Basic Usage Example

var customer = new Customer { Name = "Joe Bloggs", Age = 31 };
var json = JsonSerializer.SerializeToString(customer);
var fromJson = JsonSerializer.DeserializeFromString<Customer>(json);

Reasons for yet another .NET JSON Serializer

I only recently decided to develop a JSON serializer, as I was pretty happy with my JSV format fulfilling its purpose quite well by providing a fast, human readable, version-able, clean text format ideal for .NET-to-.NET web services or for serializing any text blob (e.g. in an RDBMS or Redis, etc.).

Unfortunately I recently hit a few issues which called for the use of JSON over JSV:

Ajax Benchmarks

After porting the JSV format over to JavaScript (to be able to use it inside Ajax apps), the benchmarks on the upside showed that it was actually a little quicker to deserialize than 'safe JavaScript' in advanced browsers (read: any browser NOT Internet Explorer). On the downside, native evaluation of 'unsafe JavaScript' was still quicker in those browsers. Unfortunately the biggest problem was that performance in Internet Explorer sucked in comparison, at times over 20x slower than its own eval. Now, I long ago became a silent proponent of the 'Death to IE6' group by electing not to test/support it; unfortunately, given their significant market share, I really couldn't do the same for IE7 and IE8, so for overall performance reasons using the existing BCL JSON serializer was still the way to go.

Embracing the future Web (Ajax / HTML5)

It appears that dynamic web applications using Ajax and HTML5 are quickly becoming the first-choice platform for developing advanced client UI applications. I believe that pure Ajax applications (i.e. static html/js/css talking directly to JSON data web services) will quickly supersede rich plugin frameworks like Flash and Silverlight, with the help of sophisticated JavaScript frameworks like jQuery and the Google Closure Library and upcoming browsers' broad support for HTML5. I consider performance to be one of the most important features for an application, so having a fast JSON serializer would increase ServiceStack's appeal as a high-performance Ajax server to power this new breed of apps.

Redis Interoperability

Although Redis potentially supports storing any binary data, it has first-class support for UTF-8 strings. There has been some talk in the Redis mailing groups about client library maintainers standardizing on storing text blobs as JSON, for simplicity and interoperability reasons. At the moment ServiceStack's C# Redis Client uses the JSV format to store complex types because of its inherent perf and versionability characteristics; however, this is likely to change to JSON in a future release.

JSON support in Mono

Unfortunately, over the years Mono's implementation of the BCL's JsonDataContractSerializer hasn't improved much and is still the number one reason why some ServiceStack examples don't work in Mono. This ended up being the motivating factor, as I recently added preliminary REST support to ServiceStack (details in a future post) and was not able to run the live examples since I only have access to a Linux web host (thus requiring Mono).

In the end, I decided to bite the bullet and jump on the NIH bandwagon again to develop a JSON serializer, which would ultimately yield a few benefits, mainly by making the ServiceStack web framework a very fast .NET Ajax/JSON server and hopefully positioning it as the preferred platform for developing high-performance cross-platform web services.

Now the default Json Serializer in ServiceStack

I've refactored all the ServiceStack.Text.Tests to support and test both the JSON and JSV formats and added a fair few JSON-specific tests as well. So even though it's new, I consider the new JsonSerializer to be fairly stable and bug-free, so much so that I've made it the default JSON serializer in ServiceStack. As a result all my live ServiceStack examples are now working flawlessly on Mono!

If the new serializer is causing problems for existing ServiceStack users, please file any issues you have; alternatively, you can revert back to using .NET's default JsonDataContractSerializer by setting the global config option in your AppHost, i.e.

SetConfig(new EndpointHostConfig { UseBclJsonSerializers = true});

Download JsonSerializer

JsonSerializer is available in the ServiceStack.Text namespace, which like the rest of ServiceStack is released under the liberal Open Source New BSD Licence, which you can get:


Versatility of JSV – Late-bound objects

New! Benchmarks graphs are now available to better visualize the performance of ServiceStack’s JSON and JSV text serializers.

As there have been a few people trying to use TypeSerializer in dynamic situations, I thought I'd put together a post detailing some restrictions and highlighting the kind of use-case scenarios that are possible with TypeSerializer and its JSV format.

Some of the goals for the JSV format were to be both compact in size and resilient to versioning and schema changes. With these goals in mind, a conscious design decision was made to not include any type information with the serialized payload. So the way that JSV does its deserializing is by coercing the JSV payload into the type specified by the user. This can be seen in the API provided by the TypeSerializer class, allowing the client to deserialize based on a runtime or static generic type:

T DeserializeFromString<T>(string value)
object DeserializeFromString(string value, Type type)

The consequence of this is that the user must in one way or another supply the type information, although at the same time it allows the same JSV payload to be deserialized back into different types. For example, every POCO type can be deserialized back into a Dictionary<string,string>, which is useful when you want to still access the data but for whatever reason do not have the type that created it. This also allows for some interesting versioning possibilities, in which the format can withstand large changes in its schemas, as seen in the article Painless data migrations with schema-less NoSQL datastores and Redis.

Beyond normal serialization of DTO types, TypeSerializer is also able to serialize deep hierarchies and interface types as well as 'late-bound objects'. The problem with trying to deserialize a late-bound object (i.e. a property with an object type) is that TypeSerializer doesn't know what type to deserialize it back into, and since a string is a valid object, it will simply populate the object property with the string contents of the serialized property value.

With this in mind, the best way to deserialize a POCO type with a dynamic object property is to serialize the Type information yourself along with the payload. Of course it is best to highlight what this means with an example.

The example below shows how you can serialize a message with a dynamic object payload and have it deserialize back into a DynamicMessage, as well as into alternate GenericMessage<T> and StrictMessage types sharing a similar definition, all as expected, without any data loss.

public class DynamicMessage : IMessageHeaders
{
    public Guid Id { get; set; }
    public string ReplyTo { get; set; }
    public int Priority { get; set; }
    public int RetryAttempts { get; set; }
    public object Body { get; set; }

    public Type Type { get; set; }

    public object GetBody()
    {
        //When deserialized this.Body is a string, so use the serialized
        //this.Type to deserialize it back into the original type
        return this.Body is string
            ? TypeSerializer.DeserializeFromString((string)this.Body, this.Type)
            : this.Body;
    }
}

public class GenericMessage<T> : IMessageHeaders
{
    public Guid Id { get; set; }
    public string ReplyTo { get; set; }
    public int Priority { get; set; }
    public int RetryAttempts { get; set; }
    public T Body { get; set; }
}

public class StrictMessage : IMessageHeaders
{
    public Guid Id { get; set; }
    public string ReplyTo { get; set; }
    public int Priority { get; set; }
    public int RetryAttempts { get; set; }
    public MessageBody Body { get; set; }
}

public class MessageBody
{
    public MessageBody()
    {
        this.Arguments = new List<string>();
    }

    public string Action { get; set; }
    public List<string> Arguments { get; set; }
}

/// Common interface not required, used only to simplify validation
public interface IMessageHeaders
{
    Guid Id { get; set; }
    string ReplyTo { get; set; }
    int Priority { get; set; }
    int RetryAttempts { get; set; }
}

public class DynamicMessageTests
{
    public void Can_deserialize_between_dynamic_generic_and_strict_messages()
    {
        var original = new DynamicMessage
        {
            Id = Guid.NewGuid(),
            Priority = 3,
            ReplyTo = "http://path/to/reply.svc",
            RetryAttempts = 1,
            Type = typeof(MessageBody),
            Body = new MessageBody
            {
                Action = "Alphabet",
                Arguments = { "a", "b", "c" }
            }
        };

        var jsv = TypeSerializer.SerializeToString(original);
        var dynamicType = TypeSerializer.DeserializeFromString<DynamicMessage>(jsv);
        var genericType = TypeSerializer.DeserializeFromString<GenericMessage<MessageBody>>(jsv);
        var strictType = TypeSerializer.DeserializeFromString<StrictMessage>(jsv);

        AssertHeadersAreEqual(dynamicType, original);
        AssertBodyIsEqual(dynamicType.GetBody(), (MessageBody)original.Body);

        AssertHeadersAreEqual(genericType, original);
        AssertBodyIsEqual(genericType.Body, (MessageBody)original.Body);

        AssertHeadersAreEqual(strictType, original);
        AssertBodyIsEqual(strictType.Body, (MessageBody)original.Body);

        //Using T.Dump() ext method to view output
        /* Output:
        Id: 891653ea2d0a4626ab0623fc2dc9dce1,
        ReplyTo: http://path/to/reply.svc,
        Priority: 3,
        RetryAttempts: 1,
        Action: Alphabet,
        */
    }

    public void AssertHeadersAreEqual(IMessageHeaders actual, IMessageHeaders expected)
    {
        Assert.That(actual.Id, Is.EqualTo(expected.Id));
        Assert.That(actual.ReplyTo, Is.EqualTo(expected.ReplyTo));
        Assert.That(actual.Priority, Is.EqualTo(expected.Priority));
        Assert.That(actual.RetryAttempts, Is.EqualTo(expected.RetryAttempts));
    }

    public void AssertBodyIsEqual(object actual, MessageBody expected)
    {
        var actualBody = actual as MessageBody;
        Assert.That(actualBody, Is.Not.Null);
        Assert.That(actualBody.Action, Is.EqualTo(expected.Action));
        Assert.That(actualBody.Arguments, Is.EquivalentTo(expected.Arguments));
    }
}

The source of this runnable example can be found as part of TypeSerializer’s test suite in the DynamicMessageTests.cs test class. Some more dynamic examples showing advanced usages of TypeSerializer can be found in the ComplexObjectGraphTest.cs class within the same directory.


History of Microsoft and the Web – From ASP, ASP.NET to MVC

I was asked to provide some initial feedback on my impressions of ASP.NET MVC, and since it's been a while since my last post I thought it was as good a subject as any to start off my summer rants.

My background and personal recommendations

I would like to add that the content here is from a web developer's perspective (i.e. my own), having spent several years in the earlier part of my career building web solutions (which are still in production use today) with ASP, PHP and Java web frameworks, and having felt the pain points and advantages of each first hand. Basically, I talk of where I've walked.

I am currently an active developer of websites built with both the ASP.NET and ASP.NET MVC web frameworks, which I continue to advocate under the different circumstances that yield the most benefits from each. Outside of work I also develop some websites with the Python Django web framework, thanks largely to the free hosting provided by Google App Engine. If you are primarily a *NIX developer and performance is not your primary concern then the increasingly popular and productive Ruby on Rails web framework may be your best choice.

My personal love for the performance, productivity and elegance of the C# programming language has kept me tied to the .NET web frameworks. Although it really depends on your situation, I also think Google has 'got it right' in that the best performance and User Experience may possibly be achieved by using no server-side web framework at all – instead relying on static html/css/js to run the entire web application inside the browser, calling on JSON/JavaScript data services for its interactive content. If you are curious about Google's approach I recommend taking a look at the Google Closure Library, which is designed for exactly this scenario.

ASP.NET MVC is Microsoft’s latest web application framework designed to provide an alternate, modern and cleaner web framework that is more aligned with the ‘spirit of the web’.
This should not be confused with Microsoft's previous attempt at a web application framework, which goes under the shorter and slightly confusing name of just 'ASP.NET'.

First there was Classic ASP

Prior to Microsoft introducing its original ASP.NET, the landscape for web frameworks was vastly different. At that time (well, more accurately from the author's view of that time 🙂 many small websites were scripted with the PHP and Perl languages, which when used without a solid web framework behind them (as most were) resulted in 'spaghetti code' – a term describing the complexity that results from maintaining multiple callback execution paths, with application and presentation logic kept within a single page. The main established competing frameworks in the enterprise space were Java Servlets and Cold Fusion – the latter of which, because of its commercial nature, did not last very long in the 'Open Web'.

All Microsoft had prior to this was Classic ASP, which in my opinion looked like the result of a C-grade student on a summer internship who was tasked with 'just make something that works that outputs HTML so we can ship it and silence vocal developers!'. The result was the very lacking 'Classic ASP', which was conceptually very similar to PHP but used VBScript or JScript as its core programming language, and when called on to enable richer functionality answered by letting developers invoke ActiveX COM objects – which, needless to say, did not perform or scale very well. Maybe I'm being too harsh here, but the technology fell way below my expectations of a multi-national corporation that prides itself on developing first-class software development platforms, tools and frameworks.

Microsoft was very late to the HTML game, where most of their developers were still tackling the complexity of building Windows applications with C++ or VB6. Luckily for them, the Servlet solutions provided by the competing Java frameworks seemed to be maintaining an unhealthy fixation with XML, having just discovered XML nirvana and thought that if they over-used it enough to configure absolutely everything it would somehow land its authors in XML heaven. So productivity-wise the solutions ended up pretty much the same, but the Java frameworks were quite rightly seen as the superior and more stable solution, and as a result saw their usage increase in the enterprise. Clearly Microsoft saw the impending danger of the managed Java platform and knew that it needed to change course swiftly and offer a competing solution to fend off the Java attack if it was to keep developer mind share. So from the school of 'Good artists copy, great artists steal', Microsoft initiated a company-wide effort to build a competing platform to Java, and shortly after .NET was born. Although .NET was a very capable platform, the problem they still faced was that a large market-share of their developers predominantly only knew how to develop Windows GUI applications. What to do?….

and then came ASP.NET

Simple: provide a 'Windows Forms-like' abstraction to give developers familiar with developing Windows applications a stateful, event-based development environment similar to what they were used to when cutting their teeth on Windows GUI applications. This unique approach to developing websites comes with its share of advantages and disadvantages. The advantages were a shorter learning curve, a very capable GUI designer and a stateful, event-based programming model which for the most part lets you get a lot of work done without needing to write any HTML whatsoever.

Despite the short-term gains, these advantages can quickly evaporate for large projects. The learning curve is shorter, but at the same time the curve is going the wrong way. Websites are inherently stateless, and trying to make them stateful uncovers some of ASP.NET's major limitations, which include:

  • Being limited to a single FORM tag on a page, through which all subsequent requests are routed.
  • The event-based model is handled with server-side logic, so trivial user interactions like changing a combo-box with auto-postback require a slow round-trip back to the server and the page to be completely re-rendered.
  • In order to maintain page state, a VIEWSTATE is kept that contains the state of each control on the page. Depending on the number and type of controls, this VIEWSTATE can explode into a huge payload, degrading performance on every request. The VIEWSTATE is core to ASP.NET and one of its biggest criticisms, as it is effectively seen as the anti-web. In order to function it requires every request to HTTP POST an unreadable blob that is both undebuggable with a packet sniffer and essentially unreproducible without manually clicking your way back to your desired state. This makes your application harder to test, debug, and maintain.
  • Turning every request into an HTTP POST also has disadvantages of its own. It breaks the user's back button, as HTTP POSTs are meant for destructive requests, so browsers must prompt the user to make sure it is safe to re-issue them. This has a direct impact on usability: in contrast to an HTTP GET request, the page's url does not provide the context of your 'current state' (as can be inferred with RESTful urls), and it's not bookmarkable or transferable to someone else. It also has a wider impact in that the page state is not cacheable or indexable by search engines, etc.
  • ASP.NET's GUI designer (like all HTML designers I've ever tried) produces semantically poor mark-up which, although great for building prototypes, quickly becomes a burden when trying to maintain a consistent style throughout.

One advantage of ASP.NET that doesn't get nearly enough attention is the composability that the stateful ASP.NET framework provides. This is evident in the rich 3rd party commercial ecosystem for ASP.NET controls – which appears to be non-existent for alternate web frameworks. My personal belief is that this stems from its ability to encapsulate the entire state and lifecycle of a single control, allowing authors to provide richer, more re-usable server-side controls. This feature lets accommodating developers show off impressive rapid prototypes to their bosses, where they're able to drop in a DataGrid control configured with a 1-line databinding and have it browse and edit their corporate RDBMS dataset.

Despite its criticisms I consider ASP.NET to be a fairly RAD environment, and it is still my first choice for developing small, simple Intranet or single-purpose applications. For large or public-facing websites I prefer to use the newer and much cleaner MVC. Incidentally, MVC is not a replacement technology, as Microsoft is planning to support both web frameworks for the foreseeable future.

Introducing the leaner, cleaner ASP.NET MVC

After having been largely successful in defending itself from its last foe in Java, Microsoft finds itself again on the battle lines, on the cusp of falling out of fashion with new developer mind share. This time it's the result of a surge in popularity of the dynamic languages, mainly Ruby, Python and PHP. They are now re-appearing armed with well established and proven web frameworks and a suite of large websites backing them, proving their worth in the enterprise space. Ruby is leading the charge with Ruby on Rails, while Pythonistas are sticking by their tried and trusted Django web framework. It's hard to explain the reason for this new resurgence behind dynamic language frameworks, but I'm putting it down to a combination of the following factors:

  • Hardware is getting cheaper, and virtual machines and cloud computing are becoming increasingly popular. The performance issues of old have been mitigated, and scaling is now seen as more important, since performance is now largely a hardware cost problem which, thanks to Moore's law, is comparatively a lot cheaper to buy than programmers' wages.
  • Dynamic languages have proven themselves. Increasingly, large websites such as Facebook, Yahoo, Twitter, etc. have chosen to build on dynamic language frameworks and have made them both perform and scale well.
  • There has been a shift in development methodologies towards a 'best-practices' software discipline, where the most sought-after traits in the Enterprise have become: Unit, Integration and User Acceptance Testing, DDD, TDD, BDD, Agile, DSLs, etc. Although these approaches are not specifically tied to dynamic web frameworks, this methodology now supersedes the Enterprise's previous only 'safe choice' of .NET and Java, which were considered statically-typed 'safe languages' served on industrial-strength multi-threaded app servers companies could trust.
  • Using the above methodologies to create well-tested, robust software that maximizes customer value and satisfaction is now, I believe, the most prominent goal in software development. Since dynamic languages lack a lot of the compile-time safety found in statically-typed languages, developers are more inclined to write testable software and actually test their code.
  • In keeping in line with maximizing customer value, many dynamic web frameworks prefer simplicity and Convention over Configuration, allowing a lot more functionality to be delivered with the least effort.
  • Spurred on initially by Google, the web has become more powerful, faster and more compliant than ever before, increasing the possibilities and making the Web the first-choice development platform for most classes of applications. It seems that this renewed interest in the web (I hate to say it – Web 2.0) has spurred a lot more research into what makes the Web great and the Internet work. Through this greater understanding, many people have attributed its success and the 'Spirit of the Web' to the simplicity and power of the HTTP specification and the REST (Representational State Transfer) software architecture it was built on.

Many of the above points have nullified ASP.NET's inherent advantages while at the same time exposing its limitations. It is clear that ASP.NET's stateful ViewState and direct access to sealed concrete classes were developed without any consideration of testing in mind, as it is nearly impossible to test an ASP.NET website outside of a full integration test, which takes a lot of effort and is cumbersome to write. So like any good software company, Microsoft has recognized the current criticisms and shortcomings of ASP.NET and attempted to address them head on with their latest effort: an entirely new solution written from the ground up to embrace the 'Spirit of the Web', providing a lean, clean, testable framework with powerful routing capabilities which gives experienced web developers the necessary tools to build REST-ful web applications.

ASP.NET MVC is now into its 2nd version and has matured quite nicely, adding some useful features along the way. It's heavily inspired by other web MVC frameworks like Rails but also brings its own unique features to the table. It is generally well received and already has a proved-in-the-wild poster child built with it 🙂

Model-View-Controller (aka MVC) is an architectural-style pattern aiming to split your application along well-divided lines. When adhering to the pattern, your application is split such that the Controller accepts and validates user input, processes the desired logic and then dictates the final result by populating the Model (i.e. data class) and then electing the View to bind it to.

The MVC architecture originated with thick-client GUI applications, which are in practice a very different technology from web applications: they are long-running, event-driven, stateful applications where the Controller is used to manipulate the application's Model, which the View generally data-binds to in order to reflect the change. This is in stark contrast with a web application, which is generally a short-lived, stateless application centred around a 'per-request' execution model where, instead of events, 'user input' comes in the form of HTTP network requests used to facilitate communication between client and server. Although very different in implementation, the MVC concept remains fundamentally the same: the Model, View and Controller are kept in visibly separate tiers.

Inside the ASP.NET MVC web framework, 'user input' arrives in the form of an HTTP request which is mapped to a Controller Action method, where a Controller is any class that inherits from the Controller base class. It is mapped by convention as defined in the ASP.NET Routing module (a concept unique to ASP.NET MVC). From inside the Action logic the Model is populated, which can either be a custom POCO class, a list of key-value pairs set on the ViewData Dictionary, or both. Every Controller Action returns an ActionResult, which most of the time will be View() – this simply tells the MVC framework to pass the populated Model to a View with the same name as the invoked Controller Action method. Both the Controller and Model are standard .NET classes, whilst the View is a basic .aspx page – ASP.NET's templating language, which simply executes code logic embedded inside the <% %> tags with the text mark-up surrounding it. Ideally MVC applications should only contain basic presentation logic in the View; by design most of the application's core logic should be contained within the Controller. Keeping the application logic separate from the presentation logic allows it to be re-used with different views. This is one of the ways the MVC architecture promotes re-usability and a clean separation of concerns.
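Stripped of the actual framework, the per-request flow just described can be sketched in a few lines of plain C#. All names here are hypothetical and the 'view' is just a method returning a string – in the real framework, routing, the Controller base class and the ASPX view engine fill these roles:

```csharp
using System;

// Toy sketch of the MVC per-request flow (hypothetical names; the real
// ASP.NET MVC framework supplies routing, the Controller base class and
// the ASPX view engine).
class CustomerModel
{
    public string Name;
}

static class CustomersController
{
    // 'User input' (an HTTP GET) would be mapped by routing to this Action.
    public static string Details(string id)
    {
        // The Controller validates input and populates the Model...
        var model = new CustomerModel { Name = id };
        // ...then elects a View to bind it to.
        return RenderDetailsView(model);
    }

    // Stand-in for the .aspx View: presentation logic only.
    static string RenderDetailsView(CustomerModel m)
    {
        return "<h1>Customer: " + m.Name + "</h1>";
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(CustomersController.Details("mythz"));
    }
}
```

The point of the separation is visible even in the toy version: the 'view' method could be swapped for one rendering JSON without touching the controller logic.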

I think I’ll end this post here as its already too long – WordPress has a word counter here telling me that I’m dangerously close to surpassing the longest essay I’ve ever had to write for school!
Its hard to believe but I originally expected for this to be an all encompassing post providing a brief history of ASP.NET before diving in and giving my first impressions on the newer MVC flavour. I don’t actually understand why its ended up so long since being a coder at heart am not really fond of writing documentation in any form – I guess I had a lot to say 🙂

Anyway, stay tuned for the next post: getting started with ASP.NET MVC and my first impressions of the new kid on the block.

Useful C# .NET Extension method: T.Dump();

One of the things I missed in the switch from a dynamic language like PHP to a typed language like C# is the ability to easily traverse any object without much care for types. This meant you could implement things like PHP’s incredibly useful print_r() function without too much effort.

I’m now happy to announce that following the release of TypeSerializer we now have that functionality in C#/.NET!

In the ServiceStack.Text.JsvFormatter class are two extension methods which recursively dump all the public properties of any type into a human-readable, 'pretty-formatted' string.

string Dump<T>(this T instance);
string SerializeAndFormat<T>(this T instance);

Both methods achieve the same result. I just wanted to include the logically-named but lengthier 'SerializeAndFormat' for completeness, as it describes exactly what it does. Most of the time we don't care and are happy to use the shortened 'Dump' to mean the same thing.

Example Usage

After importing the ServiceStack.Text namespace you can view the value of all fields as seen in the following example:

var model = new TestModel();
Console.WriteLine(model.Dump());

Example Output

{
    Int: 1,
    String: One,
    DateTime: 2010-04-11,
    Guid: c050437f6fcd46be9b2d0806a0860b3e,
    EmptyIntList: [],
    {
        a: 1,
        b: 2,
        c: 3
    }
}

Inbuilt into Service Stack JSV web service endpoint

I’ve found this feature to be so useful that I’ve included it as part of the JSV endpoint by simply appending &debug anywhere in the request’s query string. So even if you don’t use the new JSV endpoint you can still benefit from it by instantly being able to read the data provided by your web service. Here are some live examples showing the same web services called from the XML and JSV endpoint that shows the difference in readability:

GetNorthwindCustomerOrders                      XML   |   JSV + Debug

GetFibonacciNumbers?Skip=5&Take=10      XML   |   JSV + Debug



All software is released under the liberal New BSD Licence so you are free to start using it in your own projects. You can download it in any ONE of the following ways:


Find out more…

If you want to know more about Dump's serialization format and how you can use it to store text blobs in databases, check out the introductory post.

.NET’s new fast, compact Web Service endpoint: The JSV Format

New! Benchmarks graphs are now available to better visualize the performance of ServiceStack’s JSON and JSV text serializers.

Service Stack’s Git repo is still hot from the fresh check-in that has just added TypeSerializer’s text-serialization format as a first class Web Service endpoint.

JSV Format (i.e. JSON-like Separated Values) is a JSON inspired format that uses CSV-style escaping for the least overhead and optimal performance.

Service Stack’s emphasis has always been on creating high-performance, cross-platform web services with the least amount of effort. In order to maximize performance, our web services are effectively raw text/byte streams over light-weight IHttpHandler’s (Note: SOAP endpoints still use WCF bindings). This approach coupled with extensive use of cached-delegates (eliminating any runtime reflection) has proven to provide superior performance in itself.

So why the new format?

Well, up until now the de/serialization for all web service endpoints was done using the DataContract serializers in the .NET BCL. The XML DataContractSerializer looks to be a well-written library providing good performance for serialization of XML. Unfortunately, for reasons articulated in my previous post on the history of web services, XML, although great for interoperability, does not make a good 'programmatic fit' for many programming language models – e.g. for AJAX applications JSON is a much more suitable format. The verbosity and strict extensibility of XML also do not make it the ideal format in performance-critical or bandwidth-constrained environments.

The problem with JSON in .NET is that although it is 2x more compact than XML it is 1.5x slower (based on Northwind model benchmarks). Unfortunately that is with the fastest JSON implementation in the BCL – there are others like JavaScriptSerializer which are over 40x slower still. The other blocker I encountered was that although the JSON implementation in .NET was slow, the equivalent one in Mono just doesn't work for anything but the most simple models. Effectively Mono users had no choice but to use the XML endpoints, which is clearly not a good story for bandwidth-constrained environments as found in iPhone/MonoTouch apps.

Quite simply, if I wanted a fast, compact, cross-platform serialization format that's ideal to use in the bandwidth-constrained, performance-critical environments found in iPhone networked applications, I had to code it myself. Drawing on years of experience handling different serialization formats, I had a fair idea of what I thought the ideal text format should be. Ultimately the core goals of being fast and compact were the major influence on the choice of syntax. It is based on the familiar JSON format, but as it is white-space significant it does not require quotes for normal values, which makes it the most compact text format that is still lexically parseable.
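To give a rough flavour of the syntax (an illustrative example of my own, not taken from the project docs): a DTO that JSON would render as {"Id":1,"Name":"Demis","Tags":["a","b","c"]} comes out in JSV with the quotes dropped:

```
{Id:1,Name:Demis,Tags:[a,b,c]}
```

As I understand the format, values only need quoting when they contain a reserved character such as a comma or brace, at which point the CSV-style escaping kicks in – hence the name 'JSON-like Separated Values'.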

Other key goals were that it should be non-invasive and work with any POCO type. Due to the success of schema-less designs in supporting versioning by being resilient to schema changes, it is a greedy format that tries to deserialize as much as possible without error. Together, the following features set it apart from existing formats and make it the best choice for serializing any .NET POCO object:

  • Fastest and most compact text-serializer for .NET (5.3x quicker than JSON, 2.6x smaller than XML)
  • Human readable and writeable, self-describing text format
  • Non-invasive and configuration-free
  • Resilient to schema changes (focused on deserializing as much as possible without error)
  • Serializes / De-serializes any .NET data type (by convention)
    • Supports custom, compact serialization of structs by overriding ToString() and static T Parse(string) methods
    • Can serialize inherited, interface or ‘late-bound objects’ data types
    • Respects opt-in DataMember custom serialization for DataContract DTO types.

For these reasons it is the preferred choice to transparently store complex POCO types for OrmLite (in RDBMS text blobs) and POCO objects with ServiceStack's C# Redis Client, and the optimal serialization format for .NET-to-.NET web services.

Simple API

Like most of the interfaces in Service Stack, the API is simple and descriptive. In most cases these are the only methods that you would commonly use:


string TypeSerializer.SerializeToString<T>(T value);
void TypeSerializer.SerializeToWriter<T>(T value, TextWriter writer);

T TypeSerializer.DeserializeFromString<T>(string value);
T TypeSerializer.DeserializeFromReader<T>(TextReader reader);


Where T can be any .NET POCO type. That's all there is to it – the API was intentionally left simple 🙂

By convention only public properties are serialized, unless the POCO is a DataContract, in which case only DataMember properties will be serialized. Structs can provide a custom (e.g. more compact) serialization value by overriding the ToString() instance method and providing a static TStruct.Parse(string) method.
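The struct convention is easy to picture with a plain round trip, no serializer required. Point here is a hypothetical example type; the round trip below is done by hand to show the shape of the two members the serializer looks for:

```csharp
using System;

// Hypothetical struct following the convention described above:
// override ToString() for the compact serialized form, and provide a
// static Parse(string) to rehydrate it.
public struct Point
{
    public int X, Y;

    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }

    // Compact custom serialization value, e.g. "1,2"
    public override string ToString()
    {
        return X + "," + Y;
    }

    // Counterpart the serializer would call when deserializing
    public static Point Parse(string value)
    {
        var parts = value.Split(',');
        return new Point(int.Parse(parts[0]), int.Parse(parts[1]));
    }
}

class Program
{
    static void Main()
    {
        var text = new Point(1, 2).ToString();
        var copy = Point.Parse(text);
        Console.WriteLine(text + " -> " + copy.X + "," + copy.Y);
    }
}
```

The payoff is that a Point serializes as the 3-character "1,2" rather than a nested object literal.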

The JSV Web Service Endpoint

The home page for TypeSerializer goes into more detail on the actual text format. You can get a visual flavour of it from the screenshots below.

Note: the results have been ‘pretty-formatted’ for readability, the actual format is white-space significant.

In comparison here is the equivalent data formatted in XML (under a nice syntax highlighter):

View JSV live Web Services

One of the major features of Service Stack is that, because JSV is a supported out-of-the-box endpoint, no extra code is required for all your web services to take advantage of it. You can access all your web services via the JSV endpoint by simply changing the base URL. Below are live web service examples from Service Stack's Examples project:

GetNorthwindCustomerOrders XML JSON JSV | debug
GetFactorial?ForNumber=3 XML JSON JSV | debug
GetFibonacciNumbers?Skip=5&Take=10 XML JSON JSV | debug

*Live webservices hosted on CentOS / Nginx / Mono FastCGI

You can view all web services available by going to Service Stack’s web service Metadata page:

Download TypeSerializer for your own projects

The JSV Format is provided by the TypeSerializer class in the ServiceStack.Text namespace. It is perfect for anywhere you want to serialize a .NET type, and ideal for storing complex types as text blobs in an RDBMS. Like the rest of Service Stack it is Open Source, released under the New BSD Licence:

History of REST, SOAP, POX and JSON Web Services

The W3C defines a “web service” as “a software system designed to support interoperable machine-to-machine interaction over a network”.

The key parts of this definition are that a web service should be interoperable and that it facilitates communication over a network. Unfortunately, over the years different companies have had different ideas on what the ideal interoperable protocol should be, leaving a debt-load of legacy binary and proprietary protocols in their wake.

HTTP – the de facto web services transport protocol

HTTP the Internet’s protocol is the undisputed champ and will be for the foreseeable future. It’s universally accepted, can be proxied and is pretty much the only protocol allowed through most firewalls which is the reason why Service Stack (and most other Web Service frameworks) support it. Note: the future roadmap will also support the more optimized HTML5 ‘Web Sockets’ standard.

XML the winning serialization format?

Out of the ashes, another winning format looking to follow in HTTP's success is the XML text serialization format. Some of the many reasons why it has reigned supreme include:

  • Simple, Open, self-describing text-based format
  • Human and Computer readable and writeable
  • Verifiable
  • Provides a rich set of common data types
  • Can define higher-level custom types

XML doesn’t come without its disadvantages which currently are centred around it being verbose and being slow to parse resulting wasted CPU cycles.


Despite the win, all is not well in the XML camp. It seems that two camps are at odds, looking to branch the way XML is used in web services. On one side is what I'll label the REST camp (despite REST being more than just XML), whose approach to developing web services is centred around resources and prefers to err on the side of simplicity and convention, choosing to re-use the existing HTTP metaphors where they're semantically correct. E.g. calling GET on the URL http://host/customers will most likely return a list of customers, whilst POST'ing a 'Customer' against the same url will, if supported, append the 'Customer' to the existing list of customers.

The URL’s used in REST-ful web services also form a core part of the API, it is normally logically formed and clearly describes the type of data that is expected, e.g. viewing a particular customers order would look something like:

  • GET http://location/customers/mythz/orders/1001 – would return details about order ‘1001’ which was placed by the customer ‘mythz’.

The benefit of using a logical URL scheme is that other parts of your web services API can be inferred, e.g.

  • GET http://location/customers/mythz/orders – would return all of ‘mythz’ orders
  • GET http://location/customers/mythz – would return details about the customer ‘mythz’
  • GET http://location/customers – would return a list of all customers

If supported, you may have access to different operations on the same resources via the other HTTP methods: POST, PUT and DELETE. One of the limitations of a REST-ful web services API is that although it may be conventional and inferable by humans, it isn't friendly to computers and likely requires another unstructured document accompanying the web services API identifying the list, schema and capabilities of each service. This makes it a hard API to provide rich tooling support for, or to generate a programmatic API against.

NOTE: If you’re interested in learning more about REST one of the articles I highly recommend is

Enter SOAP

SOAP school discards this HTTP/URL nonsense and teaches that there is only one true METHOD – the HTTP 'POST' – and only one url/endpoint you need to worry about, which depending on the technology chosen would look something like http://location/CustomerService.svc. Importantly, nothing is left to the human imagination: everything is structured and explicitly defined by the web service's WSDL, which can also be obtained via a url, e.g. http://location/CustomerService.svc?wsdl. Now the WSDL is an intimately detailed beast listing everything you would ever want to know about the definition of your web services. Unfortunately it's detailed to the point of being unnecessarily complex, with layers of artificial constructs named messages, bindings, ports, parts, input and output operations, etc., most of which remain un-utilized – and which a lot of REST folk would say is too much info for what can be achieved with a simple GET request 🙂

What it does give you, however, is a structured list of all the operations available, including the schema of all the custom types each operation accepts. From this document, tools can generate a client proxy in your preferred programming language, providing a nice strongly-typed API to code against. SOAP is generally favoured by a lot of enterprises for internal web services, as in a lot of cases if the code compiles there's a good chance it will just work.

Ultimately, on the wire, SOAP services are simply HTTP POSTs to the same endpoint where each payload (usually of the same name as the SOAP-Action) is wrapped inside the body of a 'SOAP' envelope. This layer stops a lot of people from accessing the XML payload directly, forcing them to resort to a SOAP client library just to access the core data.

This complexity is not stopping the Microsofts and IBMs behind the SOAP specification any time soon. Nope, they're hard at work finishing their latest creations, adding additional layers on top of SOAP (i.e. WS-Security, WS-Reliability, WS-Transaction, WS-Addressing), commonly referred to as the WS-* standards. Interestingly, the WS-* stack happens to be complex enough that they happen to be the only companies able to supply the complying software and tooling to support it – which, funnily enough, works seamlessly with their expensive servers.

It does seem that Microsoft, being the fashionable technology company they are, doesn't have all their eggs in the WS-* bucket. Recognizing the current criticisms of their technology stack, they have explored a range of other web service technologies, namely WCF Data Services, WCF RIA Services and now their current favourite, OData. With the last of these I expect to see all their previous resource efforts in WS-* transferred into promoting this new moniker. On the surface OData seems to be a very good 'enabling technology' that is doing a good job incorporating every good technology BUZZ-word it can (i.e. REST, ATOM, JSON). It is also being promoted as a 'clickbox driven development' technology (which I'll be eagerly awaiting to see the sticker for :).

Catering for drag n' drop developers and being able to create web services with a checkbox is a double-edged sword, one which I believe encourages web service development anti-patterns that run contrary to SOA-style services (which I will cover in a separate post). Just so everyone knows, the latest push behind OData is to give you more reasons to use Azure (Microsoft's cloud computing effort).

POX to the rescue?

For the pragmatic programmer it's becoming a hard task to follow the WS-* stack and still be able to get any work done. In what appears to be a growing trend, a lot of developers have taken the best bits from SOAP and WSDL and combined them into what is commonly referred to as POX or REST+POX. Basically this is Plain Old Xml over HTTP with REST-like urls. In this case a lot of the cruft inside a WSDL can be reduced to a simple XSD and a url. The interesting part about POX is that, although no formal spec seems to have been published, a lot of accomplished web service developers have ultimately ended up at the same solution. The advantages this has over SOAP are numerous, many of which are the same reasons that have made HTTP+XML ubiquitous. It is a lot simpler, smaller and faster in both development time and runtime performance – while at the same time retaining a strongly-typed API (one of the major benefits of SOAP). Even though it lacks a formal spec, it can be argued that POX is still more interoperable than SOAP, as clients no longer require a SOAP library to consume the web service and can access it simply with the standard web client and XML parser present in most programming environments, even most browsers.
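To make this concrete, here's a sketch of what a POX exchange might look like on the wire – just a plain HTTP request returning an XML payload with no envelope to unwrap (the url and Customer schema here are hypothetical):

```
GET /customers/1 HTTP/1.1
Host: example.org
Accept: application/xml

HTTP/1.1 200 OK
Content-Type: application/xml

<Customer>
  <Id>1</Id>
  <Name>Acme Widgets</Name>
</Customer>
```

Any HTTP client with an XML parser can consume this directly, which is the whole appeal.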

And then there was JSON

One of the major complaints about XML is that it's too verbose, which given a large enough dataset consumes a lot of bandwidth. It is also a lot stricter than a lot of people would like, and given the potential for an XML document to be composed from many different namespaces, and for a type to have both elements and attributes, it is not an ideal fit for most programming models. As a result, parsing XML can be quite cumbersome, especially inside a browser. A popular format that seeks to overcome both of these problems, and is now the preferred serialization format for AJAX applications, is JSON. It is very simple to parse and maps perfectly to a JavaScript object; it is also a safe format, which is why it's chosen over pure JavaScript. It's a more 'dynamic' and resilient format than XML, meaning that adding new elements, or renaming existing elements or their types, will not break the de-serialization routine, as there is no formal schema to adhere to – which is both an advantage and a disadvantage. Unfortunately, even though it's a smaller, simpler format, it is actually deceptively slower to de/serialize than XML using the available .NET libraries, based on the available benchmarks. This performance gap is more likely due to the amount of effort Microsoft has put into their XML DataContractSerializer than a deficiency of the format itself, as my own effort at developing a JSON-like serialization format is both smaller than JSON and faster than XML – the best of both worlds.
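As a rough illustration of the verbosity difference, here is the same hypothetical record in each format – note how XML repeats every element name as a closing tag:

```
<Customer><Id>1</Id><Name>Acme Widgets</Name></Customer>

{"Id":1,"Name":"Acme Widgets"}
```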

Service Stack’s new JSV Format

The latest endpoint to be added to Service Stack is JSV, the serialization format of Service Stack's POCO TypeSerializer. It's a JSON-inspired format that uses CSV-style escaping for the least overhead and optimal performance.

In the interest of creating high-performance web services, and not satisfied with the performance or size of existing XML and JSON serialization formats, TypeSerializer was created with a core goal: to be the most compact and fastest text-serializer for .NET. In this mission it has succeeded, as it is now 5.3x quicker than the leading .NET JSON serializer whilst being 2.6x smaller than the equivalent XML format.
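To give an idea of the wire format, here's a sketch of serializing a simple POCO with TypeSerializer (the Poco type here is made up for illustration; JSV omits quotes wherever they aren't needed, only falling back to CSV-style escaping when a value contains special characters):

```csharp
public class Poco
{
    public int Id { get; set; }
    public string Name { get; set; }
}

var poco = new Poco { Id = 1, Name = "Demis" };

//Serialize to the compact JSV text format, which should look like: {Id:1,Name:Demis}
string jsv = TypeSerializer.SerializeToString(poco);

//Round-trip the text back into a POCO
var fromJsv = TypeSerializer.DeserializeFromString<Poco>(jsv);
```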

TypeSerializer was developed from experience, taking the best features of the serialization formats it looks to replace. It has many other features that set it apart from existing formats, making it the best choice for serializing any .NET POCO object.

  • Fastest and most compact text-serializer for .NET
  • Human readable and writeable, self-describing text format
  • Non-invasive and configuration-free
  • Resilient to schema changes (focused on deserializing as much as possible without error)
  • Serializes / De-serializes any .NET data type (by convention)
    • Supports custom, compact serialization of structs by overriding ToString() and static T Parse(string) methods
    • Can serialize inherited, interface or ‘late-bound objects’ data types
    • Respects opt-in DataMember custom serialization for DataContract DTO types.
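To sketch the struct convention from the list above (the Point type is made up for illustration):

```csharp
public struct Point
{
    public int X;
    public int Y;

    //TypeSerializer uses ToString() as the compact serialized form, e.g. "1,2"
    public override string ToString()
    {
        return X + "," + Y;
    }

    //...and the static Parse() method to re-hydrate the value on deserialization
    public static Point Parse(string value)
    {
        var parts = value.Split(',');
        return new Point { X = int.Parse(parts[0]), Y = int.Parse(parts[1]) };
    }
}
```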

For these reasons it is the preferred choice for transparently storing complex POCO types with OrmLite (in text blobs), for POCO objects with ServiceStack's C# Redis Client, and as the optimal serialization format for .NET to .NET web services.

NoSQL and RDBMS – Choose your weapon.

Sensationalist headline, right? Unfortunately I think the aggressive tone of the term 'NoSQL' is one of the reasons a lot of people have an instant resentment towards the technology. It encourages flame-ignited posts which, when posted to Slashdot, will get every developer who has ever touched an RDBMS to weigh in and pass judgement on technology they've never used before, while declaring their eternal love for their preferred RDBMS of choice.

The negative posts generally share the same tone:

I have developed with RDBMS for 10 years and I’ve never needed to use a NoSQL database. RDBMS can scale just as good as NoSQL.

Unfortunately, statements like the above instantly illustrate that the developer has a biased attachment to a technology they've used all their life, whilst at the same time declaring they have absolutely no knowledge (or desire to gain any knowledge) of the subject on which they are passing judgement. It's most likely these developers have also made message queues fit in databases, and marvelled at their configuration-mapping ability to have an eagerly-loaded chain of nested objects auto-magically bind to their pristine domain model. Yes, this is quite a feat to be proud of; unfortunately it also happens to be a one-liner in a lot of non-relational databases. This characteristic of being able to serialize your domain model without requiring it to be mapped to a database using an ORM is not limited to NoSQL databases – other data persistence solutions like db4o (an object oriented database) achieve this equally well.

Picking the best tool for the job?

All this says is that RDBMS's are really good at doing what they do: storing flat, relational, tabular data. Believe it or not, they still remain the best solution for storing relational data. Using a NoSQL data store isn't an all-or-nothing decision – it actually serves as a good complementary technology to have alongside an RDBMS. Yes, that's right: even though they have overlapping feature-sets, they can still be great together. Awesome – we can all still be friends!

It’s still all about picking the right tool and using the right technology for the task at hand. Which leads me to what NoSQL databases are naturally good at:

  • Performance – As everything is retrieved by key, effectively every query hits an index. Redis, an in-memory data-store (with optional async persistence), can achieve 110,000 SETs/second and 81,000 GETs/second on an entry-level Linux box – and no, this is not possible with any RDBMS.
  • Replication – A feature common in most NoSQL data stores is effortless replication. In Redis this is achieved by un-commenting one line: ‘slaveof ipaddress port’ in redis.conf
  • Schema-less persistence – As there is no formal data structure to bind to and most values are stored as binary or strings the serialization format is left up to you. Effectively this can be seen as a feature as it leaves you free to serialize your object directly – which lets you do those one-liner saves that everyone is talking about. A lot of client libraries opt for a simplistic language-neutral format like JSON.
  • Scalability – This seems to be a heated topic (where everyone believes they can scale their technology of choice equally as well given the right setup) so I won’t delve in to this too deeply only to say that key-value data-stores by their nature have good characteristics to scale. When everything is accessed by key, clients can easily predict the source of data given a pool of available data-stores. Most clients also come in-built with consistent hashing where the addition or removal of a data store does not significantly impact this predictability.
  • Efficiency and Cost – As there are a plethora of options available most NoSQL data stores are both free and open source. They also perform better and provide better utilization of server resources than comparative RDBMS solutions.
  • Advanced data constructs – NoSQL variants like Redis, in addition to a key-value data store also provide rich data constructs and atomic operations on server-side lists, sets, sorted sets and hashes which make things like message-queuing, notification systems, load-balancing work tasks trivial to implement.
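As a quick sketch with the bundled redis-cli (the key and task names are made up for the example), the server-side list operations above make a simple FIFO work queue trivial – LPUSH adds to the head, RPOP takes from the tail:

```
redis> LPUSH work:queue task-1
redis> LPUSH work:queue task-2
redis> RPOP work:queue
"task-1"
```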

Try NoSQL today

Fortunately, NoSQL solutions are not black magic and are actually fairly easy to get started with. My personal favourite is Redis, for which I also happen to be the maintainer of a rich open source C# client (which can also run on Linux with Mono). If .NET is not your thing, then you're in luck, as Redis is so popular that there is a language binding in almost every programming language in active use today, which you can find listed on its supported languages page.

Getting started is as easy as downloading the latest source from the project website. If you're on a Windows platform you can download pre-compiled binaries built with cygwin here. A simple make command from the tarball directory creates the required redis-server binary, which is all you need to run to get a server instance up and running.
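For reference, building from the source tarball on a Unix-like system with gcc and make installed goes along these lines (the version number will vary):

```
$ tar xzf redis-2.0.0.tar.gz
$ cd redis-2.0.0
$ make
$ ./redis-server
```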

After that you can access the comprehensive Redis feature-set exposed by the C# IRedisClient API.
To give you a taste of its simplicity, here is an example demonstrating how to persist and access a simple POCO type using the Redis client:

public class IntAndString
{
    public int Id { get; set; }
    public string Letter { get; set; }
}

using (var redisClient = new RedisClient())
{
    //Create a typed Redis client that treats all values as IntAndString:
    var typedRedis = redisClient.GetTypedClient<IntAndString>();

    var pocoValue = new IntAndString { Id = 1, Letter = "A" };
    typedRedis.Set("pocoKey", pocoValue);
    IntAndString toPocoValue = typedRedis.Get("pocoKey");

    Assert.That(toPocoValue.Id, Is.EqualTo(pocoValue.Id));
    Assert.That(toPocoValue.Letter, Is.EqualTo(pocoValue.Letter));

    var pocoListValues = new List<IntAndString> {
        new IntAndString { Id = 2, Letter = "B" },
        new IntAndString { Id = 3, Letter = "C" },
        new IntAndString { Id = 4, Letter = "D" },
        new IntAndString { Id = 5, Letter = "E" },
    };

    IRedisList<IntAndString> pocoList = typedRedis.Lists["pocoListKey"];

    //Add all IntAndString objects into the redis list 'pocoListKey'
    pocoListValues.ForEach(x => pocoList.Add(x));

    List<IntAndString> toPocoListValues = pocoList.ToList();

    for (var i = 0; i < pocoListValues.Count; i++)
    {
        pocoValue = pocoListValues[i];
        toPocoValue = toPocoListValues[i];
        Assert.That(toPocoValue.Id, Is.EqualTo(pocoValue.Id));
        Assert.That(toPocoValue.Letter, Is.EqualTo(pocoValue.Letter));
    }
}
Other noteworthy features of Redis include its support for custom atomic transactions, examples of which are here.
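To sketch what this looks like at the redis-cli level: commands queued between MULTI and EXEC are executed as one atomic unit (assuming 'counter' starts unset):

```
redis> MULTI
OK
redis> INCR counter
QUEUED
redis> INCR counter
QUEUED
redis> EXEC
1) (integer) 1
2) (integer) 2
```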

More examples are available on ServiceStack's open source C# Redis Client home page.

Useful External Links