C# Using Newtonsoft and dynamic ExpandoObject to convert one Json to another

The scenario where you convert one input Json format to another output Json format is not uncommon. Before C# dynamic and ExpandoObject, you would deserialize the input Json to POCO model classes and use a factory class to convert them to another set of POCO model classes, which would then be serialized to the output Json.

With the dynamic type and the ExpandoObject you have another weapon of choice: you can deserialize the input Json to a dynamic object and convert the contents to another dynamic object, which is then serialized to the output Json. Imagine the following input and output Json formats:

Input format:

{
	"username": "someuser@somewhere.com",
	"timeStamp": "2017-09-20 13:50:16.560",
	"attributes": {
		"attribute": [{
			"name": "Brian",
			"count": 400
		},
		{
			"name": "Pedersen",
			"count": 100
		}]
	}
}

Output format:

{
	"table": "USER_COUNT",
	"users": [{
		"uid": "someuser@somewhere.com",
		"rows": [{
			"NAME": "Brian",
			"READ_COUNT": 400
		},
		{
			"NAME": "Pedersen",
			"READ_COUNT": 100
		}]
	}]
}

Converting from the input format to the output format can be achieved with a few lines of code:

// Convert input Json string to a dynamic object
dynamic input = JsonConvert.DeserializeObject(myQueueItem);

// Create a dynamic output object
dynamic output = new ExpandoObject();
output.table = "USER_COUNT";
output.users = new dynamic[1];
output.users[0] = new ExpandoObject();
output.users[0].uid = input.username;
output.users[0].rows = new dynamic[input.attributes.attribute.Count];
int ac = 0;
foreach (var inputAttribute in input.attributes.attribute)
{
    var row = output.users[0].rows[ac] = new ExpandoObject();
    row.NAME = inputAttribute.name;
    row.READ_COUNT = inputAttribute.count;
    ac++;
}

// Serialize the dynamic output object to a string
string outputJson = JsonConvert.SerializeObject(output);

I’ll try to further explain what happens. The Newtonsoft.Json DeserializeObject() method takes a json string and converts it to a dynamic object.

The output Json is created by creating a new dynamic object of type ExpandoObject. With dynamic ExpandoObjects we can create properties on the fly, like so:

// Create a dynamic output object
dynamic output = new ExpandoObject();
// Create a new property called "table" with the value "USER_COUNT"
output.table = "USER_COUNT";

This would, when serialized to a Json, create the following output:

{
	"table": "USER_COUNT"
}

To create an array of objects, you need to first create a new dynamic array and then assign an ExpandoObject to the position in the array:

// Create a dynamic output object
dynamic output = new ExpandoObject();
// Create a new array called "users"
output.users = new dynamic[1];
// Add an object to the "users" array
output.users[0] = new ExpandoObject();
// Create a new property "uid" on the object in the "users" array
output.users[0].uid = input.username;

This generates the following Json output:

{
	"users": [{
		"uid": "someuser@somewhere.com"
		}]
}


.NET Session state is not thread safe

When working with the .NET session state you should bear in mind that HttpContext.Current.Session cannot be transferred to another thread. Imagine that you, from Global.asax, would like to read the SessionID each time a session is started:

// This method inside Global.asax is called for every session start
protected void Session_Start(object sender, EventArgs e)
{
  MyClass.DoSomethingWithTheSession(HttpContext.Current);
}

To speed up performance you wish to use a thread inside DoSomethingWithTheSession. The thread will read the Session ID:

public class MyClass
{
  public static void DoSomethingWithTheSession(HttpContext context)
  {
    if (context == null)
      return;

    // Here the context is not null
    ThreadPool.QueueUserWorkItem(DoSomethingWithTheSessionAsync, context);
  }

  private static void DoSomethingWithTheSessionAsync(object httpContext)
  {
    HttpContext context = (HttpContext)httpContext;

    // Oops! Here the context is NULL
    string sessionID = context.Session.SessionID;
  }
}

The code above will fail because the HttpContext is not thread safe. So in DoSomethingWithTheSession(), the context is set, but in DoSomethingWithTheSessionAsync(), the context will be null.

THE SOLUTION: TRANSFER THE SESSION VALUES INSTEAD OF THE SESSION OBJECT:

To make it work, rewrite the DoSomethingWithTheSessionAsync() method to retrieve the values it needs, not the HttpContext object itself:

public class MyClass
{
  public static void DoSomethingWithTheSession(HttpContext context)
  {
    if (context == null)
      return;

    // Transfer the sessionID instead of the HttpContext and everything is fine
    ThreadPool.QueueUserWorkItem(DoSomethingWithTheSessionAsync,
      context.Session.SessionID);
  }

  private static void DoSomethingWithTheSessionAsync(object session)
  {
    // This works fine, as the string is thread safe.
    string sessionID = (string)session;

    // Do work on the sessionID
  }
}
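If you are on .NET 4.5 or later, an equivalent approach (my own sketch, not part of the original example) is to capture the needed value in a closure before leaving the request thread:

using System.Threading.Tasks;
using System.Web;

public class MyClass
{
  public static void DoSomethingWithTheSession(HttpContext context)
  {
    if (context == null)
      return;

    // Capture the value (not the HttpContext) while still on the request thread
    string sessionID = context.Session.SessionID;
    Task.Run(() => DoWork(sessionID));
  }

  private static void DoWork(string sessionID)
  {
    // Do work on the sessionID
  }
}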


Edit special field types in Sitecore Experience Editor – Custom Experience Editor Buttons replaces the Edit Frame

The Sitecore Experience Editor allows inline editing of simple field types like text and rich text (HTML) fields, and a few complex ones like links. But editing checkboxes, lookup values, multiselect boxes, or any custom field you might have developed yourself requires some custom setup.

Previously, the Edit Frame has been the weapon of choice. The Edit Frame opens a tiny shell with the fields of your choice when you click the control to edit.
Unfortunately it has the downside that it hides the Experience Editor’s own buttons, so it is becoming deprecated, and it isn’t even available when using MVC to render the front end.

The Edit Frame will hide the standard Experience Editor Buttons

But fear not, as the Edit Frame functionality has simply been moved to the Experience Editor Buttons.

STEP 1: SET UP THE AVAILABLE BUTTONS

Go to the CORE database and find /sitecore/content/Applications/WebEdit/Custom Experience Buttons.

For your own pleasure, create a nice folder structure that matches your component structure, and add a “Field Editor Button” in the structure:

A Field Editor Button placed in a folder below Custom Experience Buttons.

In the “Fields” field of that button, add the fields that need to be editable, as a pipe separated list, like this:

  • FieldName1|FieldName2|FieldName3

STEP 2: CONFIGURE THE RENDERING

In the “Experience Editor Buttons” field of the rendering, add the button you created:

The button is added to the Experience Editor Buttons

STEP 3: TEST IT

Now, when clicking the rendering, the button you added is available:

Experience Editor Buttons

And when clicking it, the Edit Frame opens, and the fields are available for editing:

Edit Frame


Sitecore Scheduled Task – Schedule time format and other quirks

The Sitecore task runner, usually called Scheduled Tasks, is a simple way of executing code with intervals. You configure scheduled tasks in Sitecore, at /sitecore/system/Tasks/Schedules:

Scheduled Task

The quirkiest configuration setting is the “Schedule” field, which is a pipe separated string determining when the task should run (a complete example follows the list below):

{start timestamp}|{end timestamp}|{days to run bit pattern}|{interval}

  • Start timestamp and End timestamp: Determines the start and end of the scheduled task.
    Format is the Sitecore ISO datetime, YearMonthDayTHoursMinutesSeconds.
    Example: 20000101T000000 = January 1st 2000 at 00:00:00.
    (the font Sitecore uses does not help reading the timestamp at all, I know).
    NOTE: If you get the format wrong, the task will still run.
  • Days to run: A 7 bit pattern determining on which days the task must run:
    1 = Sunday
    2 = Monday
    4 = Tuesday
    8 = Wednesday
    16 = Thursday
    32 = Friday
    64 = Saturday
    So, 127 means to run the task every day. To run the task on Saturday and Sunday, add the 2 values, 1+64 = 65.
  • Interval: The time between each run. 00:05:00 means that the task will run at 5 minute intervals.
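As an illustration (a made-up example using the format above), this Schedule value runs the task every day, at 5 minute intervals, from January 1st 2000 until the end of 2099:

20000101T000000|20991231T235959|127|00:05:00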

WHY DOESN’T MY TASK RUN WITH MY SPECIFIED INTERVALS?

Sitecore uses no less than 2 sitecore.config settings to determine when the task runner should run:

<scheduling>
  <frequency>00:05:00</frequency>
  <agent type="Sitecore.Tasks.DatabaseAgent" method="Run" interval="00:05:00">
    <param desc="database">master</param>
    <param desc="schedule root">/sitecore/system/Tasks/Schedules</param>
    <LogActivity>true</LogActivity>
  </agent>
</scheduling>

The frequency setting determines how often the global Sitecore task runner runs at all.

The agent determines when the tasks configured in the master database under the root /sitecore/system/Tasks/Schedules should run.

So, in the example above, my task runner wakes up every 5 minutes and checks the config file for agents to run. It will then run the agent at 5 minute intervals. If another task is running, it could block the task runner, delaying the agent from running. With the above settings, the best case scenario is that my agent runs every 5 minutes.

The tasks configured in Sitecore could also block. If a task should run every 5 minutes, but the execution time is 11 minutes, the agent would run the task again after 15 minutes, in the best case scenario. To avoid this, you can mark your task as “async” in the configuration, but beware that long running (or never ending) tasks will then run simultaneously, slowing down Sitecore.

CAN I HAVE TASKS RUNNING ON MY CM SERVER ONLY?

Yes. You can add a new task folder in Sitecore, and then add a new agent, pointing to the new folder as its schedule root, to the sitecore.config file of the CM server.

See more here: Sitecore Scheduled Tasks – Run on certain server instance.

CAN I RUN TASKS AT THE SAME TIME EVERY DAY?

Kind of. You can have your task running once a day within the same interval, using a little code.

See more here: Run Sitecore scheduled task at the same time every day.

IN WHAT CONTEXT DOES MY TASK RUN?

Sitecore has created a site called “scheduler” where the context is defined:

<sites>
  <site name="scheduler" database="master" language="da" enableTracking="false" domain="sitecore" />
</sites>

To run the task in a different context, use a context switcher.
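As a sketch (the site and database names are assumptions, not from this post), switching the context inside a task could look like this:

// Sketch: run part of a task in the context of the "website" site and the web database
using (new Sitecore.Sites.SiteContextSwitcher(Sitecore.Configuration.Factory.GetSite("website")))
using (new Sitecore.Data.DatabaseSwitcher(Sitecore.Configuration.Factory.GetDatabase("web")))
{
  // Code here sees Sitecore.Context.Site as "website" and Sitecore.Context.Database as web
}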

DO I HAVE A HTTP CONTEXT WHEN RUNNING SCHEDULED TASKS?

No.

DO I HAVE A USER WHEN RUNNING SCHEDULED TASKS?

Do not expect to have a user. Expect the context user to be NULL, unless you use a UserSwitcher.
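A minimal sketch of using a UserSwitcher inside a task (the user name is an assumption):

// Sketch: run task code as a specific user
Sitecore.Security.Accounts.User user =
  Sitecore.Security.Accounts.User.FromName(@"sitecore\admin", true);
using (new Sitecore.Security.Accounts.UserSwitcher(user))
{
  // Code here runs with Sitecore.Context.User set to sitecore\admin
}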

CAN I RUN THE SAME CODE FROM DIFFERENT TASKS?

Yes. Sitecore has split the definition of the code to run from the definition of the schedule. The code is defined as a “command” where you define the class and the method to run:

Task Commands

The schedule simply points to the command to run, and you can have as many schedules as you want:

Pointing to a command

WHAT IS THE “ITEMS” FIELD FOR?

Items Field

No one really knows what the items field is for, but according to old Sitecore folklore, you can add a pipe separated list of item GUIDs (or even item paths), and the “itemArray” parameter of the method you call will contain the list of items:

public void Execute(Item[] itemArray, CommandItem commandItem, ScheduleItem scheduleItem)
{
  foreach (Item item in itemArray)
  {
    // do something with the item
  }
}


Webhook Event Receiver with Azure Functions

Microsoft Azure Functions is a solution for running small pieces of code in the cloud. If your code is very small and has only one purpose, an Azure Function could be the cost effective solution.

This is an example of a generic Webhook event receiver. A webhook is a way for another system to make a callback to your system whenever an event is raised in that system. This Webhook event receiver will simply receive the webhook event’s payload (the payload being the JSON that the other system is POST’ing to you), envelope the payload and write it to a Queue.

STEP 1: SET UP AN AZURE FUNCTION

Select a Function App and create a new function:

Create New Azure Function

STEP 2: CREATE A NEW FUNCTION

Select New Function and, from “API & Webhooks”, select “Generic Webhook – C#”:

Create Generic Webhook

Microsoft will now create a Webhook event receiver boilerplate code file, which we will modify slightly later.

STEP 3: ADD A ROUTE TEMPLATE

Because we would like to have more than one URL for our Azure Function (each webhook caller should have its own URL so we can differentiate between them), we need to add a route template.

Select the “Integrate” section and modify the “Route template”. Add {caller} to the field:

Add a Route Template
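Assuming the route template ends up as something like webhookreceiver/{caller} (the exact prefix is my assumption and depends on your Function App), each sender gets its own URL, and the caller parameter of the function receives the last URL segment:

https://myfunctionapp.azurewebsites.net/api/webhookreceiver/systemA
https://myfunctionapp.azurewebsites.net/api/webhookreceiver/systemB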

STEP 4: INTEGRATE WITH AZURE QUEUE STORAGE

We need to be able to write to an Azure Queue. In Azure Functions, the integration is almost out of the box.

Select the “Integrate” section and under “Outputs”, click “New Output”, and select the “Azure Queue Storage”:

Azure Queue Storage

Configure the Azure Queue Settings:

Azure Queue Settings

  • Message parameter name: The Azure Function knows about the queue through a parameter to the function. This is the name of the parameter.
  • Storage account connection: The connection string to the storage where the azure queue is located.
  • Queue name: Name of queue. If the queue does not exist (it does not exist by default) a queue will be created for you.

STEP 5: MODIFY THE BOILERPLATE CODE

We need to make small but simple modifications to the boilerplate code (I have marked the changes from the boilerplate code with comments):

#r "Newtonsoft.Json"

using System;
using System.Net;
using Newtonsoft.Json;

// The string caller was added to the function parameters to get the caller from the URL.
// The ICollector<string> outQueue was added to the function parameters to get access to the output queue.
public static async Task<object> Run(HttpRequestMessage req, string caller, ICollector<string> outQueue, TraceWriter log)
{
    log.Info($"Webhook was triggered!");

    // The JSON payload is found in the request
    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);

    // Create a dynamic JSON output, enveloping the payload with
    // the caller, a timestamp, and the payload itself
    dynamic outData = new Newtonsoft.Json.Linq.JObject();
    outData.caller = caller;
    outData.timeStamp = System.DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff");
    outData.payload = data;

    // Add the JSON as a string to the output queue
    outQueue.Add(JsonConvert.SerializeObject(outData));

    // Return status 200 OK to the calling system.
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        caller = $"{caller}",
        status = "OK"
    });
}

STEP 6: TEST IT

Azure Functions has a built-in tester. Run a test to ensure that you have pasted the correct code and entered the correct names in the “Integrate” fields:

Test

Use the Microsoft Azure Storage Explorer to check that the event was written to the queue:

Azure Storage Explorer

STEP 7: CREATE KEYS FOR THE WEBHOOK EVENT SENDERS

Azure Functions are not available unless you know the URL and the key. Select “Manage” and add a new Function Key.

Function Keys

The difference between Function Keys and Host Keys is that Function Keys are specific to that function, while Host Keys are global keys that can be used for any function.

To call your Azure Function, the caller needs to know the URL + the key. The key can be sent in more than one way:

  • In the URL, using the query string parameters ?code=(key value) and &clientid=(key name)
  • In the request header, using the x-functions-key HTTP header.

STEP 8: GIVE URL AND KEY TO CALLING SYSTEM

This is a Restlet Client example that calls my function. I use the query string to add the code and clientid parameters:
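As an illustration (host, function name, key value and key name are all made-up placeholders), such a request could look like this:

POST https://myfunctionapp.azurewebsites.net/api/webhookreceiver/systemA?code=aBcD1234eFgH5678&clientid=systemA-key
Content-Type: application/json

{ "name": "value" }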


Requesting Azure API Management URLs

The Azure API Management is a scalable and secure API gateway/proxy/cache where you can expose your APIs externally and still have secure access.

In Azure API Management you create a “Product”, which is a collection of APIs that are protected using the same product key.

2 Azure API Management products, protected with a key

The 2 products above contain a collection of APIs, and each product has its own key.

As a developer you can find the API Keys using the Azure API Management Service Developer Portal:

APIM Developer Portal

When clicking around you will end up finding the “Try it” button where you are allowed to test your API endpoints:

Try it button

And here you can get the subscription key by clicking the icon shaped as an eye:

Find the key here

When calling any API, you simply need to add the subscription key to the request header in the field:

  • Ocp-Apim-Subscription-Key

This is an example of how to GET or POST to an API that is secured by Azure API Management. There are many ways to do it, and this is not the most elegant, but this code will work in production with most versions of .NET:

using System;
using System.IO;
using System.Net;
using System.Text;

namespace MyNamespace
{
  public class AzureApimService
  {
    private readonly string _domain;
    private readonly string _ocp_Apim_Subscription_Key;

    public AzureApimService(string domain, string subscriptionKey)
    {
      _domain = domain;
      _ocp_Apim_Subscription_Key = subscriptionKey;
    }

    public byte[] Get(string relativePath, out string contentType)
    {
      Uri fullyQualifiedUrl = GetFullyQualifiedURL(_domain, relativePath);
      try
      {
        byte[] bytes;
        HttpWebRequest webRequest = (HttpWebRequest) WebRequest.Create(fullyQualifiedUrl);
        webRequest.Headers.Add("Ocp-Apim-Trace", "true");
        webRequest.Headers.Add("Ocp-Apim-Subscription-Key", _ocp_Apim_Subscription_Key);
        webRequest.Headers.Add("UserAgent", "YourUserAgent");
        webRequest.KeepAlive = false;
        webRequest.ProtocolVersion = HttpVersion.Version10;
        webRequest.ServicePoint.ConnectionLimit = 24;
        webRequest.Method = WebRequestMethods.Http.Get;
        using (WebResponse webResponse = webRequest.GetResponse())
        {
          contentType = webResponse.ContentType;
          using (Stream stream = webResponse.GetResponseStream())
          {
            using (MemoryStream memoryStream = new MemoryStream())
            {
              byte[] buffer = new byte[0x1000];
              int bytesRead;
              while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
              {
                memoryStream.Write(buffer, 0, bytesRead);
              }
              bytes = memoryStream.ToArray();
            }
          }
        }
        // For test/debug purposes (to see what is actually returned by the service)
        Console.WriteLine("Response data (relativePath: \"{0}\"):\n{1}\n\n", relativePath, Encoding.Default.GetString(bytes));
        return bytes;
      }
      catch (Exception ex)
      {
        throw new Exception("Failed to retrieve data from '" + fullyQualifiedUrl + "': " + ex.Message, ex);
      }
    }

    public byte[] Post(string relativePath, byte[] postData, out string contentType)
    {
      Uri fullyQualifiedUrl = GetFullyQualifiedURL(_domain, relativePath);
      try
      {
        byte[] bytes;
        HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(fullyQualifiedUrl);
        webRequest.Headers.Add("Ocp-Apim-Trace", "true");
        webRequest.Headers.Add("Ocp-Apim-Subscription-Key", _ocp_Apim_Subscription_Key);
        webRequest.KeepAlive = false;
        webRequest.ServicePoint.ConnectionLimit = 24;
        webRequest.Headers.Add("UserAgent", "YourUserAgent");
        webRequest.ProtocolVersion = HttpVersion.Version10; 
        webRequest.ContentType = "application/json";
        webRequest.Method = WebRequestMethods.Http.Post;
        webRequest.ContentLength = postData.Length;
        Stream dataStream = webRequest.GetRequestStream();
        dataStream.Write(postData, 0, postData.Length);
        dataStream.Close();
        using (WebResponse webResponse = webRequest.GetResponse())
        {
          contentType = webResponse.ContentType;
          using (Stream stream = webResponse.GetResponseStream())
          {
            using (MemoryStream memoryStream = new MemoryStream())
            {
              byte[] buffer = new byte[0x1000];
              int bytesRead;
              while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
              {
                memoryStream.Write(buffer, 0, bytesRead);
              }
              bytes = memoryStream.ToArray();
            }
          }
        }
        // For test/debug purposes (to see what is actually returned by the service)
        Console.WriteLine("Response data (relativePath: \"{0}\"):\n{1}\n\n", relativePath, Encoding.Default.GetString(bytes));
        return bytes;
      }
      catch (Exception ex)
      {
        throw new Exception("Failed to retrieve data from '" + fullyQualifiedUrl + "': " + ex.Message, ex);
      }
    }

    private static Uri GetFullyQualifiedURL(string domain, string relativePath)
    {
      if (!domain.EndsWith("/"))
        domain = domain + "/";
      if (relativePath.StartsWith("/"))
        relativePath = relativePath.Remove(0, 1);
      return new Uri(domain + relativePath);
    }
  }
}

The service is simple to use:

AzureApimService service = new AzureApimService("https://yourapim.azure-api.net", "12a6aca3c5a242f181f3dec39b264ab5");
string contentType;
byte[] response = service.Get("/api/endpoint", out contentType);
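A POST works the same way; here is a small sketch (the endpoint path and payload are made up, and Encoding comes from System.Text):

// Sketch: POST a JSON payload through the same service class
byte[] postData = Encoding.UTF8.GetBytes("{ \"name\": \"value\" }");
string postContentType;
byte[] postResponse = service.Post("/api/endpoint", postData, out postContentType);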


Sitecore contact cannot be loaded, code never responds

In Sitecore, it is possible to encounter a situation where the calls that identify or lock a contact never respond, but no errors are returned.

A call to identify:

Tracker.Current.Session.Identify(contactName);

And a call to Load a contact:

Contact contact = 
    contactRepository.LoadContactReadOnly(username);

Can both take forever without any errors or any timeout.

This situation can occur if the Contact Successor points to the original Contact in a loop. When merging a contact, Sitecore will create a new contact, the Surviving contact. The existing contact (called the Dying contact) still contains all the interaction data from before the merge, so instead of Sitecore having to update all data fields with a new ID, it creates a “Successor” pointer to the Dying Contact.

Surviving Contact

But in certain situations, the Dying Contact will also have a Successor, which points back to the Surviving Contact, creating an infinite loop:

The Dying Contact’s Successor points to the surviving contact.

The patterns leading to this situation are many, but they usually involve merging and changing contact identifiers. The situation can be reproduced like this:

  • Create a contact “A”
  • Create a new contact “B”
  • Merge contact “B” with “A”
  • Merge contact “A” with “B”

To avoid this situation, it is customary to rename the dying contact’s (“A”) identifier to an obscure name (a guid). But the renaming might fail if the dying contact is locked, leaving a contact with a reusable identifier. The “Extended Contact Repository” which I have described previously will unfortunately gladly create a new contact with an existing name.

HOW TO RESOLVE IT:

The situation needs to be resolved manually. Find the contact, open RoboMongo and search for the contact:

identifier = db.Identifiers.findOne({_id: /NAME_OF_IDENTIFIER/i});
contact = db.Contacts.find({_id: identifier.contact});

Copy the “Successor” value from the contact, and find the Successor:

successor = db.Contacts.find({_id: NUUID("b1e760d7-7c60-4b1d-818f-e357f303ebef")});

Right click the “Edit Document” button and delete the “Successor” field from the dying contact:

Delete Successor Field from the Dying Contact, breaking the infinite loop

Delete Successor Field from the Dying Contact, breaking the infinite loop

This can be done directly in production, and the code reacts instantly when the loop has been broken.


Sitecore Media Library integration with Azure CDN using origin pull

If your Sitecore website is heavy on content from the media library you can offload your Sitecore instances by allowing images to be retrieved from a Content Delivery Network (CDN). If you use Microsoft Azure, you do not need to upload images to the CDN, as Azure supports origin pull.

Origin pull is a mechanism where the CDN automatically retrieves content from an origin host whenever the content is missing from the CDN. In Azure, even query string parameters are supported, so if you scale images with ?w=100, the Azure CDN will store the scaled image.

To set up origin pull in Azure CDN, you first go to your CDN profile:

Azure CDN Profile

Then you click the + sign to add an endpoint:

Azure CDN Add Endpoint

And add an endpoint with the type “Custom Origin”:

Azure CDN Endpoint with Custom Origin

The “name” is the name of the endpoint. The “Origin hostname” is the URL to your public Sitecore website. And remember to specify the correct protocol. If your website is running HTTPS, the CDN should use HTTPS as well.

SETTING UP SITECORE:

The rest is configuration in Sitecore. You control the CDN properties using these settings, found in the Sitecore.config file:

<setting name="Media.MediaLinkServerUrl" value="https://myendpoint.azureedge.net" />
<setting name="Media.MediaLinkPrefix" value="-/media" />
<setting name="Media.AlwaysIncludeServerUrl" value="true" />
<setting name="MediaResponse.Cacheability" value="public" />

  • Media.MediaLinkServerUrl = The URL to the Azure CDN, as defined when creating the Azure endpoint
  • Media.MediaLinkPrefix = The media library link URL. Together with the Media.MediaLinkServerUrl, the complete server URL is created. In the example, my URL is https://myendpoint.azureedge.net/-/media/[media library content]
  • Media.AlwaysIncludeServerUrl = Tells Sitecore to always include the server URL in the media requests
  • MediaResponse.Cacheability = Allows the cache settings of any item to be publicly available, allowing the Azure CDN to access the MaxAge, SlidingExpiration and VaryHeader parameters.

DRAWBACKS OF USING A CDN:

  • Your website needs to be public. When developing and testing you need to disable the CDN settings, as the Azure CDN cannot read from a non-public website. Testing therefore happens in production, against the running website.
  • Security settings on media library items cannot be used. Once a media library item is on the CDN it is public to everyone.


Sitecore ContentSearch – Get items from SOLR or Lucene – A base class implementation

Reading items from Sitecore is pretty straight forward:

Sitecore.Data.Items.Item item = 
   Sitecore.Context.Database.GetItem("/sitecore/content/.../...");

And it is fast, unless you need to retrieve items from many paths, or need to retrieve every item derived from a certain base template. In these situations you resort to using the built-in ContentSearch, which is a Lucene or SOLR index.

When working with objects from the ContentSearch API you will have to create your own model classes that map the indexed fields to class properties. This is done by adding an IndexFieldAttribute to the properties of the class that will represent the indexed data:

[IndexField("customerpage")]
public ID CustomerPageId { get; internal set; }

[IndexField("customername")]
public string CustomerName { get; internal set; }

The default indexes, called sitecore_core_index, sitecore_master_index and sitecore_web_index, are born with a long list of default fields that are useful for every class. Because of this it makes sense to let every one of your model classes inherit from a base class that maps these fields for you.

So let’s code.

STEP 1: CREATE A BASE CLASS

This base class maps the most common fields. There are many more for you to explore, but this particular class has been the base class of a huge project that I have been working on for the past 4 years:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using Sitecore.Configuration;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.Converters;
using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.Diagnostics;

namespace MySearch
{
  [Serializable]
  public abstract class SearchResultItem
  {
    [NonSerialized]
    private Item _item;

    // Get the actual Sitecore item. Beware that using this property 
    // will substantially slow your query, as it looks up the item
    // in Sitecore. Use with caution, and try to avoid using it in
    // LINQ or enumerations 
    public virtual Item Item
    {
      get { return _item ?? (_item = GetItem()); } set { _item = value; }
    }

    // Returns the Item ID (in SOLR this is stored as a short GUID in the _group field)
    [IndexField(Sitecore.ContentSearch.BuiltinFields.Group)]
    [TypeConverter(typeof(IndexFieldIDValueConverter))]
    public virtual ID ItemId
    {
      get; set;
    }

    // This is a combined key describing the Sitecore item in details
    // For example: sitecore://web/{7102ee6b-6361-41ad-a47f-832002082a1a}?lang=da&ver=1&ndx=sitecore_web_index
    // With the ItemUri class you can extract the individual values like database, id, language, version
    [IndexField(Sitecore.ContentSearch.BuiltinFields.UniqueId)]
    [TypeConverter(typeof(IndexFieldItemUriValueConverter))]
    public virtual ItemUri ItemUri
    {
      get; set;
    }

    // Return the item language
    [IndexField(Sitecore.ContentSearch.BuiltinFields.Language)]
    public virtual string Language
    {
      get; set;
    }

    // Returns true if the item is the latest version. When reading from the
    // web database index, this will always be true.
    [IndexField(Sitecore.ContentSearch.BuiltinFields.LatestVersion)]
    public bool IsLatestVersion
    {
      get; set;
    }

    // Returns the ID's of every parent sorted by top parent first
    [IndexField(Sitecore.ContentSearch.BuiltinFields.Path)]
    [TypeConverter(typeof(IndexFieldEnumerableConverter))]
    public virtual IEnumerable<ID> ItemAncestorsAndSelf
    {
      get; set;
    }

    // Returns the updated datetime
    [IndexField(Sitecore.ContentSearch.BuiltinFields.SmallUpdatedDate)]
    public DateTime Updated
    {
      get; set;
    }

    // Returns every template that this item implements and inherits
    [IndexField(Sitecore.ContentSearch.BuiltinFields.AllTemplates)]
    [TypeConverter(typeof(IndexFieldEnumerableConverter))]
    public virtual IEnumerable<ID> ItemBaseTemplates
    {
      get; set;
    }

    private Item GetItem()
    {
      Assert.IsNotNull(ItemUri, "ItemUri is null.");
      return Factory.GetDatabase(ItemUri.DatabaseName).GetItem(ItemUri.ItemID, ItemUri.Language, ItemUri.Version);
    }
  }
}

STEP 2: CREATE A MODEL CLASS FOR A SPECIFIC TEMPLATE

This example inherits from the SearchResultItem base class, and encapsulates a Customer template containing 2 fields, CustomerPage and CustomerName.

using System;
using System.Runtime.Serialization;
using Sitecore.ContentSearch;
using Sitecore.Data;

namespace MySearch
{
  [DataContract]
  [Serializable]
  public class CustomerModel : SearchResultItem
  {
    [DataMember]
    [IndexField("customername")]
    public string CustomerName { get; internal set; }

    [IndexField("customerpage")]
    public ID CustomerPageId { get; internal set; }
  }
}

STEP 3: USING THE BASE CLASS TO SEARCH USING PREDICATES

A Predicate is a Latin word for “making search so much easier”. Predicates define reusable static functions. When run, predicates become part of the index query itself, further improving performance. So let’s start by making 3 predicates:

using System;
using System.Linq;
using System.Linq.Expressions;
using Sitecore;
using Sitecore.Data;

namespace MySearch
{
  public static class Predicates
  {
    // Ensure that we only return the latest version
    public static Expression<Func<T, bool>> IsLatestVersion<T>() where T : SearchResultItem
    {
      return searchResultItem => searchResultItem.IsLatestVersion;
    }

    // Ensure that the item returned is based on, or inherits from the specified template
    public static Expression<Func<T, bool>> IsDerived<T>(ID templateID) where T : SearchResultItem
    {
      return searchResultItem => searchResultItem.ItemBaseTemplates.Contains(templateID);
    }

    // Ensure that the item returned is a content item by checking that the 
    // content root is part of the item path 
    public static Expression<Func<T, bool>> IsContentItem<T>() where T : SearchResultItem
    {
      return searchResultItem => searchResultItem.ItemAncestorsAndSelf.Contains(ItemIDs.ContentRoot);
    }
  }
}

With these predicates in place, I can create a repository for my Customer items:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using Sitecore;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.Linq.Utilities;
using Sitecore.ContentSearch.Security;
using Sitecore.Data;
using Sitecore.Diagnostics;

namespace MySearch
{
  public class CustomerModelRepository
  {
    private readonly Database _database;

    public CustomerModelRepository() : this(Context.Database)
    {
    }

    public CustomerModelRepository(Database database)
    {
      _database = database;
    }

    public IEnumerable<CustomerModel> GetAll()
    {
      return Get(PredicateBuilder.True<CustomerModel>());
    }

    private IEnumerable<CustomerModel> Get(Expression<Func<CustomerModel, bool>> predicate)
    {
      using (IProviderSearchContext context = GetIndex(_database).CreateSearchContext(SearchSecurityOptions.DisableSecurityCheck))
      {
        return context.GetQueryable<CustomerModel>()
          .Where(Predicates.IsDerived<CustomerModel>(new ID("{1EB6DC02-4EBD-427A-8E36-7D2327219B6C}")))
          .Where(Predicates.IsLatestVersion<CustomerModel>())
          .Where(Predicates.IsContentItem<CustomerModel>())
          .Where(predicate).ToList();
      }
    }
    
    private static ISearchIndex GetIndex(Database database)
    {
      Assert.ArgumentNotNull(database, "database");
      switch (database.Name.ToLowerInvariant())
      {
        case "core":
          return ContentSearchManager.GetIndex("sitecore_core_index");
        case "master":
          return ContentSearchManager.GetIndex("sitecore_master_index");
        case "web":
          return ContentSearchManager.GetIndex("sitecore_web_index");
        default:
          throw new ArgumentException(string.Format("Database '{0}' doesn't have a default index.", database.Name));
      }
    }
  }
}

The private Get() method returns every index item following these criteria:

  • Must implement or derive from the template with the specified GUID (the GUID of the Customer template) = Predicates.IsDerived
  • And must be the latest version = Predicates.IsLatestVersion
  • And must be a content item = Predicates.IsContentItem

The repository is used like this:

CustomerModelRepository rep = new CustomerModelRepository(Sitecore.Context.Database);
IEnumerable<CustomerModel> allCustomers = rep.GetAll();
foreach (CustomerModel customer in allCustomers)
{
  // do something with the customer
  string customerName = customer.CustomerName;
}
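To show how the predicate parameter of the private Get() method is meant to be used, here is a sketch of a filtered query that could be added to CustomerModelRepository (the method is my own example, not part of the original repository):

public IEnumerable<CustomerModel> GetByCustomerName(string customerName)
{
  // Start with an always-true predicate and AND the name filter onto it
  var predicate = PredicateBuilder.True<CustomerModel>()
    .And(item => item.CustomerName == customerName);
  return Get(predicate);
}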

I hope this introduction will help you create your own base class implementation and start making fast content searches.



Sitecore user:created event not fired on Membership.CreateUser

Sitecore has, since version 7.5-ish, fired events each time you manipulate the users in the .NET Membership database:

<event name="user:created"></event>
<event name="user:deleted"></event>
<event name="user:updated"></event>
<event name="roles:usersAdded"></event>
<event name="roles:usersRemoved"></event>

But I noticed that the user:created event was not fired. This is because I call the .NET Membership provider directly:

string userNameWithDomain = "extranet\\myuser";
string password = "somepassword";
string email = "myuser@somewhere.com";
Membership.CreateUser(userNameWithDomain, password, email);

This call to Membership is not handled by Sitecore, thus no event is raised. To fix this I have found 2 solutions: one is not good, and the other one is not good either.

SOLUTION 1: CALL THE SITECORE MEMBERSHIP PROVIDER DIRECTLY

This solution ignores the web.config settings and assumes that you have not switched or overwritten the Membership Provider yourself. But it does fire the event:

Sitecore.Security.SitecoreMembershipProvider provider = new Sitecore.Security.SitecoreMembershipProvider();
MembershipCreateStatus status = MembershipCreateStatus.Success;
provider.CreateUser(usernameWithDomain, password, email, "", "", true, null, out status);
if (status != MembershipCreateStatus.Success)
  throw new MembershipCreateUserException(status);

SOLUTION 2: RAISE THE EVENT YOURSELF

This solution requires you to raise the event yourself. You need to encapsulate the call to create a user in your own class, and instruct everyone to never call Membership.CreateUser() directly:

MembershipUser user = Membership.CreateUser(usernameWithDomain, password, email);
Event.RaiseEvent("user:created", user);
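Such an encapsulation could look something like this (a sketch of my own; the class name is made up):

using System.Web.Security;
using Sitecore.Events;

public static class UserService
{
  // Call this instead of Membership.CreateUser() so the user:created event is always raised
  public static MembershipUser CreateUser(string userNameWithDomain, string password, string email)
  {
    MembershipUser user = Membership.CreateUser(userNameWithDomain, password, email);
    Event.RaiseEvent("user:created", user);
    return user;
  }
}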

I can see from other blog posts that the user events are not the most widely used events in the Sitecore toolbox. If you have found another solution to this problem please let me know.
