Offline JSON Pretty Printing

Today when you’re dealing with Web APIs, you often find yourself handling JSON, either in the input to these APIs, in the output, or both. Some browsers can pretty print JSON right in their dev tools, but you don’t always have that option. That’s why there are tools to pretty print JSON. I’ve found quite a few of them on the web, but all of them share one terrible flaw: they actually send the JSON you’re trying to pretty print to a server (*shudder*). I don’t want my JSON data (sensitive or not) to be sent to some random server!

All your JSON are belong to us!

Now as I wrote, I don’t particularly like the fact that my JSON data is sent over the wire just for pretty printing. It may not be super secret or anything, but these days you cannot be too careful. Besides, it’s completely unnecessary: everything you need is already in your browser! So I quickly built my own JSON pretty printer (and syntax highlighter). You can find it right here.

Offline JSON Pretty Printing to the Rescue

The design is actually very simple. All my JSON pretty printer does is take your input and try to parse it as JSON right in the browser:

JSON.parse(yourJsonInput)

If that fails, I show the parsing error and we’re done. If it succeeds, I get back a JavaScript object/array/value, which I then inspect. For objects, I use basic tree navigation to walk all the properties and nested objects/arrays/values for pretty printing. That’s it, really simple. No need to transmit the data anywhere; it stays right in your browser!

So like it, hate it, use it or don’t: cymbeline.ch JSON Pretty Printer

LINQ with Lucene.Net.ObjectMapping

Last time I mentioned that I had started working on LINQ support for Lucene.Net.ObjectMapping. That includes LINQ queries like the following:

using (Searcher searcher = new IndexSearcher(directory))
{
    IQueryable<BlogPost> posts =
        from post in searcher.AsQueryable<BlogPost>()
        where post.Tag == "lucene"
        orderby post.Timestamp descending
        select post;
}

Now granted, the above example is a very basic one. So here’s a short list of other methods on IQueryable<T> that are already supported at this point: Any *, Count *, First *, FirstOrDefault *, OrderBy, OrderByDescending, Single *, SingleOrDefault *, Skip, Take, ThenBy, ThenByDescending, and finally Where.

* Method is supported both with and without a filter predicate.
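
Since paging comes up below, here’s a quick sketch of what it could look like with Skip and Take, reusing the BlogPost query from above (the field names are just those from that example):

using (Searcher searcher = new IndexSearcher(directory))
{
    // Fetch the third page of ten posts, newest first.
    BlogPost[] thirdPage = searcher.AsQueryable<BlogPost>()
        .Where(post => post.Tag == "lucene")
        .OrderByDescending(post => post.Timestamp)
        .Skip(2 * 10) // skip the first two pages
        .Take(10)     // take one page of ten
        .ToArray();
}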

With this, it becomes easy to build paging based on the objects you get back from a query on Lucene.Net, as sketched above. I’m still working on improving the supported filter expressions (most of all for Where, but all the other filterable methods naturally benefit too). For instance, with the default JSON-based object mapping, it is already possible to search for entries in a dictionary that maps strings to other objects. Say you have a set of classes, defined as follows.

public class MyClass
{
    public int Id { get; set; }
    public Dictionary<string, MyOtherClass> Map { get; set; }
}

public class MyOtherClass
{
    public string Text { get; set; }
    public int Sequence { get; set; }
    public DateTime Timestamp { get; set; }
}

Now you can actually search for instances of MyClass that satisfy certain conditions in the Map dictionary, like this:

var query = from c in searcher.AsQueryable<MyClass>()
            where c.Map["MyKey"].Sequence == 123
            select c;

Since the items in the dictionary are mapped to analyzed fields in the Lucene.Net document, we can search on them!

Delete and Update By Query

Now since I have this query expression binder to create Lucene.Net queries based on LINQ filter expressions, I’ve added an extension method to update and one to delete documents that match a query. So it is now possible to do this:

indexWriter.Delete<MyClass>(x => x.Id == 1234);
indexWriter.Update(myObject, x => x.Id == myObject.Id);

Call to Action

Now with all this said, I’m looking for volunteers to help me get more coverage on the LINQ queries, because that’s definitely where the weak spot is right now. If you’re interested, leave a comment here or on GitHub.

Improvements to Lucene.Net.ObjectMapping

I’d like to discuss some improvements to Lucene.Net.ObjectMapping which I published yesterday as a new version (1.0.3) to NuGet. In addition, I want to take this opportunity to give a quick outlook on what’s to come next.

CRUD Operations

The library now comes with support for all of the CRUD operations. Let’s look at them one by one, starting with Create.

Create / Add

In Lucene.Net terms, that would be AddDocument. Since the library does object-to-document mapping, this is simplified to an Add operation.

IndexWriter myIndexWriter = ...;
MyClass myObject = new MyClass(...);

myIndexWriter.Add(myObject);

Or, if you need a specific analyzer for the document the object gets mapped to, you can use the overload which accepts a second parameter of type Analyzer.

IndexWriter myIndexWriter = ...;
MyClass myObject = new MyClass(...);

myIndexWriter.Add(myObject, new MyOwnAnalyzer());

Retrieve / Query

The retrieve operation, i.e. mapping a document back to an object, hasn’t changed since v1.0.0. There are examples of how to query and retrieve in my previous post. Of course, if you happen to know the ID of the document, you can just map that document to your class without going through a query. But since document IDs can change over time, it’s usually more practical to pivot off a query.
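
If you do know the document ID, the mapping back is a one-liner. A minimal sketch, using the same ToObject<T>() extension shown in the previous post (docId stands in for wherever you got the ID from):

// Map a document straight to an object when you already know its
// (transient!) Lucene.Net document ID.
Document doc = searcher.Doc(docId);
MyClass myObject = doc.ToObject<MyClass>();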

Update

Update is maybe the most interesting operation here. Since document IDs can change over time, there’s really no good way to reliably update a specific document without making a query. That’s why the UpdateDocument method on the IndexWriter asks you for a query/term to match the document to update, and that’s why it’s generally a good idea to bring your own unique identifier to the game. Suppose your class has a Guid property named “Id” which serves as the unique identifier for objects of that type.

IndexWriter myIndexWriter = ...;
MyClass myObject = ...;

myObject.MyPropertyToUpdate = "new value";

myIndexWriter.Update(
    myObject,
    new TermQuery(new Term("Id", myObject.Id.ToString())));

Under the covers, this will find all the documents matching both the query and the type (MyClass), delete them, and then add a new document for the mapped myObject. If you need an analyzer for the newly mapped document, you can use the second overload.

IndexWriter myIndexWriter = ...;
MyClass myObject = ...;

myObject.MyPropertyToUpdate = "new value";

myIndexWriter.Update(
    myObject,
    new TermQuery(new Term("Id", myObject.Id.ToString())),
    new MyOwnAnalyzer());
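
For intuition, the update is conceptually just a delete followed by an add. A rough sketch in plain Lucene.Net calls, ignoring the type filter the library adds on top of your query:

// What Update roughly boils down to (sketch only, without type filtering).
myIndexWriter.DeleteDocuments(query);             // remove the matching document(s)
myIndexWriter.AddDocument(myObject.ToDocument()); // add the freshly mapped object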

Delete

Just like the retrieve operation, the Delete operation has been supported since v1.0.0. I realize, though, that I haven’t given any examples yet. It’s really quite simple again: you specify the type whose mapped documents you want to delete, and a query to identify the objects to delete. No magic at all.

IndexWriter myIndexWriter = ...;
myIndexWriter.DeleteDocuments<MyClass>(
    new TermQuery(new Term("Tag", "deleted")));

Naturally, you can use any Query you want for the delete operation (as well as for updates). You can make them arbitrarily complex as long as they’re still supported by Lucene.Net.
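
For example, a BooleanQuery works just as well. A sketch, assuming Lucene.Net 3.0.3 and a hypothetical "Author" field:

// Delete all MyClass documents tagged 'deleted' AND authored by 'spammer'.
BooleanQuery query = new BooleanQuery();
query.Add(new TermQuery(new Term("Tag", "deleted")), Occur.MUST);
query.Add(new TermQuery(new Term("Author", "spammer")), Occur.MUST);

myIndexWriter.DeleteDocuments<MyClass>(query);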

Summary and Outlook

That’s it, CRUD with no magic, no tricks. Let me know if there’s functionality you’d like to see added, either by commenting here or by opening a bug/enhancement/whatever on GitHub. I’ve started working on LINQ support for the ObjectMapping library too, with the goal that you can write LINQ queries like the following.

var query = from myObject in mySearcher.AsQueryable<MyClass>()
            where myObject.Tag == "history"
            select myObject;

It will likely take a little longer to get that stable, but I’ll try to make a pre-release on NuGet in the next few weeks.

Search Mapped Objects in Lucene.Net

In my previous post (Lucene.Net Object Mapping) I introduced the Lucene.Net.ObjectMapping NuGet package. That post describes how the package can be used to map virtually any .Net object to a Lucene.Net Document and how to reconstruct the object from that same Document later. Now it’s time to look at the search aspect: how can you search mapped objects in Lucene.Net?

You already know Searcher

The Searcher class in Lucene.Net can be used to run queries on an index and retrieve the documents matching those queries. The Lucene.Net.ObjectMapping library comes with additional extensions to the Searcher class which help you search for Documents. There’s a variety of extensions: some just return a TopDocs object with the number of results you’ve specified, some allow sorting, but the most powerful are the ones that take a Collector to gather the results. Using a Collector makes it very easy to page over all the results for a query, and that’s usually what you need when showing search results. So let’s look at an example of searching for Documents that contain mapped .Net objects using a Collector. Let’s assume we’re building a blog engine, for which we want to index the posts.

public class BlogPost
{
    public Guid Id { get; set; }
    public DateTime Created { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
    public string[] Tags { get; set; }
}

// ... as before, you'd store your BlogPost objects like this:
luceneIndexWriter.AddDocument(thePost.ToDocument());

Use a Collector for Paging

Creating a paged index of all your blog posts is really easy. You’ll need a Searcher and a Collector (the TopFieldCollector will do for now), and that’s about it. Let’s look at some code.

private const int PageSize = 10;

public BlogPost[] GetPostsForPage(int page)
{
    // Sanitize the 'page' before doing anything with it.
    if (page < 0)
    {
        page = 0;
    }

    int start = page * PageSize;
    int end = start + PageSize;

    using (Searcher searcher = new IndexSearcher(myIndexReader))
    {
        TopFieldCollector collector = TopFieldCollector.Create(
            // Let's sort descending by create date.
            new Sort(new SortField("Created", SortField.LONG, true)),
            end,    // Need to get the hits until 'end'.
            false,  // fillFields
            false,  // trackDocScores
            false,  // trackMaxScore
            false); // docsScoredInOrder

        // Let's use the object mapping extensions for Search! This will
        // filter results to only those Documents which hold a BlogPost.
        searcher.Search<BlogPost>(new MatchAllDocsQuery(), collector);

        // At this point we know how many hits there are in total. So
        // let's check that the requested page is within range.
        if (start >= collector.TotalHits)
        {
            // Requested page is out of range; fall back to the last page.
            page = (collector.TotalHits - 1) / PageSize;
            start = page * PageSize;
            end = start + PageSize;
        }

        TopDocs docs = collector.TopDocs(start, PageSize);
        List<BlogPost> posts = new List<BlogPost>();

        foreach (ScoreDoc scoreDoc in docs.ScoreDocs)
        {
            Document doc = searcher.Doc(scoreDoc.Doc);

            posts.Add(doc.ToObject<BlogPost>());
        }

        return posts.ToArray();
    }
}

That’s it, no magic, no tricks. One thing you could do, instead of just returning a plain array with the results, is to return an object that holds some more meta information, for instance the total number of hits or the actual page you’re returning results for. But the core logic remains the same.

You can play around with different ways to sort the results. Keep in mind, though, that tokenized/analyzed fields in Lucene.Net are sorted based on their tokens, not on the actual string value. To help address this, I’m thinking about extending the object mappers so you can specify not only that a field should be analyzed (because you want to search it), but also that a non-analyzed copy of the field should be added for sorting purposes. That way, you can search and sort on the same logical field. Keep in mind, though, that the index will grow, since the data is indexed twice: once tokenized/analyzed, once as-is.
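
To illustrate the idea, here’s what such a doubled-up field could look like if you added it by hand today. A sketch only: the mapper doesn’t emit these fields yet, and the "Title.Sort" field name is made up:

// Index the same value twice: analyzed for searching, not analyzed for sorting.
doc.Add(new Field("Title", post.Title,
    Field.Store.YES, Field.Index.ANALYZED));
doc.Add(new Field("Title.Sort", post.Title,
    Field.Store.NO, Field.Index.NOT_ANALYZED));

// Sorting then uses the non-analyzed copy.
Sort sort = new Sort(new SortField("Title.Sort", SortField.STRING));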