HttpClient follow 302 redirects with .NET Core

The HttpClient in .NET Core will not automatically follow a 302 (or 301) redirect. You need to specify that you allow this. Use the HttpClientHandler to do so:

private static HttpClient _httpClient = new HttpClient(
    new HttpClientHandler 
    { 
        AllowAutoRedirect = true, 
        MaxAutomaticRedirections = 2 
    }
);

Now your code will follow up to 2 redirections. Please note that a redirect from HTTPS to HTTP is not allowed.
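If a redirect is not followed (for example an HTTPS-to-HTTP hop, or the redirect limit was hit), the response you get back is the 3xx response itself. A minimal sketch of how you might detect that, using hand-built responses so no network call is needed (the URL is made up for the example):

```csharp
using System;
using System.Net;
using System.Net.Http;

// A response with a 3xx status code and a Location header is a
// redirect that HttpClient did not (or could not) follow.
static bool IsUnfollowedRedirect(HttpResponseMessage response)
{
    int code = (int)response.StatusCode;
    return code >= 300 && code < 400 && response.Headers.Location != null;
}

// Hand-built responses, so no network call is involved:
var redirect = new HttpResponseMessage(HttpStatusCode.MovedPermanently);
redirect.Headers.Location = new Uri("http://example.com/feed");

Console.WriteLine(IsUnfollowedRedirect(redirect)); // True
Console.WriteLine(IsUnfollowedRedirect(new HttpResponseMessage(HttpStatusCode.OK))); // False
```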

You can now grab the contents even from a redirected URL:

public static async Task<string> GetRss()
{
    // The /rss returns a 301 Moved Permanently, but my code will redirect to 
    // /feed and return the contents
    var response = await _httpClient.GetAsync("https://briancaos.wordpress.com/rss");
    if (!response.IsSuccessStatusCode)
        throw new Exception($"The requested feed returned an error: {response.StatusCode}");

    var content = await response.Content.ReadAsStringAsync();
    return content;
}


Create a custom Azure Dashboard Tile Using an Azure Function and Markdown Format

The tiles on an Azure Dashboard will only display data from Azure itself. So if you have data from outside Azure, like a database, you need a workaround.

Tiles of an Azure Dashboard

Now, you cannot make your custom tile as pretty as these tiles. This is not Grafana, after all. But you are able to add a tile that displays markdown (the kindergarten version of HTML), and this markdown can come from a URL. So with a little compromise you can in fact make a custom tile.

STEP 1: CREATE AN AZURE FUNCTION

First you must create an Azure Function that can provide the markdown to display.

Create an HTTP trigger endpoint:

HTTP Trigger Endpoint

Depending on what you are monitoring, the code will differ. The important part is that you return markdown text. This is an example of a method that renders a SQL table in markdown format:

using System;
using System.Net;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Primitives;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    var connectionString = Environment.GetEnvironmentVariable("myConnectionString");
    StringBuilder sb = new StringBuilder();
    sb.AppendLine("| Customer | DateTime1 | DateTime2 | DateTime3 | DateTime4 | ");
    sb.AppendLine("|----------|-----------|-----------|-----------|-----------| ");

    string sql = "SELECT * FROM [dbo].[MyTable]";
 
    using (var conn = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, conn)) 
    {
        await conn.OpenAsync();
        using (SqlDataReader dr = await command.ExecuteReaderAsync())
        {
            while (dr.Read())
            {
                var dateTime1 = dr["DateTime1"];
                var dateTime2 = dr["DateTime2"];
                var dateTime3 = dr["DateTime3"];
                var dateTime4 = dr["DateTime4"];
                sb.AppendLine($"| {dr["CustomerName"].ToString()} | {ToLocalTime(dateTime1)} | {ToLocalTime(dateTime2)} | {ToLocalTime(dateTime3)} | {ToLocalTime(dateTime4)} | ");
            }  
        }
    }
    return new OkObjectResult(sb.ToString());
}

private static string ToLocalTime(object o)
{
    if (DBNull.Value.Equals(o))
        return string.Empty;
    DateTime dt = Convert.ToDateTime(o);
    var timeZone = TimeZoneInfo.FindSystemTimeZoneById("Romance Standard Time");
    return TimeZoneInfo.ConvertTimeFromUtc(dt, timeZone).ToString("yyyy-MM-dd HH:mm");
}
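Stripped of the SQL plumbing, the important part is simply that the function returns a markdown table as plain text. A minimal standalone sketch with hardcoded rows (the customer name is made up):

```csharp
using System;
using System.Text;

// Build the same kind of markdown table the function returns,
// with hardcoded rows instead of a SQL query
var sb = new StringBuilder();
sb.AppendLine("| Customer | DateTime1 |");
sb.AppendLine("|----------|-----------|");
sb.AppendLine("| Acme | 2021-01-01 12:00 |");

string markdown = sb.ToString();
Console.WriteLine(markdown);
```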

STEP 2: GET THE URL OF THE AZURE FUNCTION

Grab the function URL and copy it to your clipboard.

STEP 3: ADD A MARKDOWN TILE FROM THE GALLERY

In the gallery, choose the Markdown tile. Instead of adding static markdown, allow the tile to get the markdown from a URL. The URL is the Azure Function URL.

Add a markdown tile to your dashboard

STEP 4: ENJOY YOUR NEW DASHBOARD

Granted, the output is not as nice as the built-in tiles, but if you squint your eyes you won’t notice:

Custom Azure Dashboard Tile


C# Remove Duplicates from List with LINQ

C# LINQ does have a Distinct() method that works on simple types:

// A string with non-unique elements
string s = "a|b|c|d|a";

// Split the string and take the distinct elements
var distinctList = s.Split('|').Distinct().ToArray();

// Re-join the list 
var distinctString = string.Join("|", distinctList);

// Output will be: "a|b|c|d"
Console.WriteLine(distinctString);

For non-simple types you have 2 options, but first, let's make a non-simple type: a class.

public class MyClass
{
    public string Title { get; set; }
    public string Text { get; set; }
}

OPTION 1: IMPLEMENT AN EQUALITYCOMPARER

An equality comparer is a class that is specifically designed to compare instances of a particular class. An example that compares on the Title property of MyClass looks like this:

public class MyClassDistinctComparer : IEqualityComparer<MyClass>
{
    public bool Equals(MyClass x, MyClass y) 
    {
        return x.Title == y.Title;
    }

    public int GetHashCode(MyClass obj) 
    {
        // Equals compares on Title only, so the hash code must
        // also be based on Title only
        return obj.Title.GetHashCode();
    }       
}

And to use it:

// Create a list of non-unique titles
List<MyClass> list = new List<MyClass>();
list.Add(new MyClass() { Title = "A", Text = "Text" });
list.Add(new MyClass() { Title = "B", Text = "Text" });
list.Add(new MyClass() { Title = "A", Text = "Text" });

// Get the distinct elements:
var distinctList = list.Distinct(new MyClassDistinctComparer());

// Output is: "A B"
foreach (var myClass in distinctList)
     Console.WriteLine(myClass.Title);

OPTION 2: GROUP AND TAKE FIRST

If an equality comparer is too much of a hassle, you can take a shortcut: group the list by the title, and take the first element of each group:

// Make a list of non-unique elements
List<MyClass> list = new List<MyClass>();
list.Add(new MyClass() { Title = "A", Text = "Text" });
list.Add(new MyClass() { Title = "B", Text = "Text" });
list.Add(new MyClass() { Title = "A", Text = "Text" });

// Skip the equalitycomparer. Instead, group by title, and take the first element of each group
var distinctList = list.GroupBy(s => s.Title).Select(s => s.First()).ToArray();

// Output is: "A B"
foreach (var myClass in distinctList)
    Console.WriteLine(myClass.Title);

What happens is that GroupBy(s => s.Title) makes 2 groups: one for title “A” with 2 elements, and one for title “B” with 1 element. Select(s => s.First()) then takes the first element from each group, resulting in a list of unique elements.
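As a side note: if you are on .NET 6 or later, LINQ also ships a DistinctBy method that takes a key selector, removing the need for both the equality comparer and the group-by trick. A minimal sketch, using anonymous objects in place of MyClass for brevity:

```csharp
using System;
using System.Linq;

// Anonymous objects stand in for the MyClass type from the article
var list = new[]
{
    new { Title = "A", Text = "Text" },
    new { Title = "B", Text = "Text" },
    new { Title = "A", Text = "Text" },
};

// DistinctBy keeps the first element for each distinct key
var distinctList = list.DistinctBy(s => s.Title).ToArray();

// Output is: "A B"
foreach (var item in distinctList)
    Console.WriteLine(item.Title);
```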


Read and Write blob file from Microsoft Azure Storage with .NET Core

The documentation on Azure Storage blobs is a little fuzzy, as the NuGet packages and the approach have changed over time.

The latest NuGet package is now called Azure.Storage.Blobs.

The concept of blob storage is the same, though:

  • You use a connection string to connect to an Azure Storage Account.
  • Blob storage is divided into containers. To access a container you need a BlobContainerClient.
  • To access a blob you get a BlobClient from a BlobContainerClient.
  • With the BlobClient you can upload and download blobs.

Blobs can also be accessed directly via a URL.

ENOUGH TALK, SHOW ME THE CODE:

This is an example of a very simple repository that will read, write and delete blobs:

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using System.IO;
using System.Text;
using System.Threading.Tasks;

namespace MyCode
{
  public class BlobRepository
  {
    private BlobContainerClient _client;

    /// <summary>
    /// Create an instance of blob repository
    /// </summary>
    /// <param name="connectionString">The storage account connection string</param>
    /// <param name="containerName">The name of the container</param>
    public BlobRepository(string connectionString, string containerName)
    {
      _client = new BlobContainerClient(connectionString, containerName);
      // Only create the container if it does not exist
      _client.CreateIfNotExists(PublicAccessType.BlobContainer);
    }

    /// <summary>
    /// Upload a local file to the blob container
    /// </summary>
    /// <param name="localFilePath">Full path to the local file</param>
    /// <param name="pathAndFileName">Full path to the container file</param>
    /// <param name="contentType">The content type of the file being created in the container</param>
    public async Task Upload(string localFilePath, string pathAndFileName, string contentType)
    {
      BlobClient blobClient = _client.GetBlobClient(pathAndFileName);

      using FileStream uploadFileStream = File.OpenRead(localFilePath);
      await blobClient.UploadAsync(uploadFileStream, new BlobHttpHeaders { ContentType = contentType });
      uploadFileStream.Close();
    }

    /// <summary>
    /// Download file as a string
    /// </summary>
    /// <param name="pathAndFileName">Full path to the container file</param>
    /// <returns>Contents of file as a string</returns>
    public async Task<string> Download(string pathAndFileName)
    {
      BlobClient blobClient = _client.GetBlobClient(pathAndFileName);
      if (await blobClient.ExistsAsync())
      {
        BlobDownloadInfo download = await blobClient.DownloadAsync();
        byte[] result = new byte[download.ContentLength];
        await download.Content.ReadAsync(result, 0, (int)download.ContentLength);

        return Encoding.UTF8.GetString(result);
      }
      return string.Empty;
    }

    /// <summary>
    /// Delete file in container
    /// </summary>
    /// <param name="pathAndFileName">Full path to the container file</param>
    /// <returns>True if file was deleted</returns>
    public async Task<bool> Delete(string pathAndFileName)
    {
      BlobClient blobClient = _client.GetBlobClient(pathAndFileName);
      return await blobClient.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots);
    }
  }
}

HOW TO USE IT?

// Create a new repository. Enter the connectionstring and the
// container name in the constructor:
BlobRepository rep = new BlobRepository("YOUR_SECRET_CONNECTIONSTRING","test");

// To upload a file, give a file name from the local disk.
// Add the name of the blob file (notice that the path is part of the
// blob file name, not the container name), and the content type of
// the file to be uploaded
await rep.Upload("d:\\test.json", "blobtest/test.json", "application/json");

// To download a file, enter the path and file name of the blob
// (not including the container name) and the contents is returned as a 
// string
string jsonFile = await rep.Download("blobtest/test.json");

// Deleting the blob is done by entering the path and file of the blob
await rep.Delete("blobtest/test.json");

In the example above, the URL to the created blob will be:

This is just a simple example, but it can easily be modified to upload streams or download binary files.

That’s it. Happy coding.


Read and Write Azure Queue with .NET Core

The documentation around Azure Queues and .NET Core is a little fuzzy, as the approach has changed slightly over the last updates. Previously you had a shared Storage Account NuGet package that gave access to Queues, Blob Storage and Table Storage. Now you use separate NuGet packages, and the one for queues is Azure.Storage.Queues.

The reading and writing to the queue have also changed slightly, so this is a simple but complete example of how to send and receive messages using Azure.Storage.Queues:

using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace MyCode
{
  public class QueueRepository 
  {
    private int _batchCount;
    private QueueClient _client;

    /// <summary>
    /// Create a new instance of QueueRepository
    /// </summary>
    /// <param name="connectionString">Connectionstring to Azure Storage Account</param>
    /// <param name="queueName">Name of queue</param>
    /// <param name="batchCount">Number of messages to get from queue per call</param>
    public QueueRepository(string connectionString, string queueName, int batchCount)
    {
      _client = new QueueClient(connectionString, queueName);
      _batchCount = batchCount;
    }

    /// <summary>
    /// Add a new message to the queue
    /// </summary>
    /// <param name="messageText">Contents of the message</param>
    public async Task Send(string messageText)
    {
      // If you do not base64 encode the message before adding it, you
      // cannot read the message using Visual Studio Cloud Explorer
      await _client.SendMessageAsync(Base64Encode(messageText));
    }

    /// <summary>
    /// Read a maximum of _batchCount messages from the queue. Once they are
    /// read, they are immediately deleted from the queue. This approach is
    /// not the default, and will negate the auto-retry mechanism built into
    /// the queue system. But it makes the queue easier to understand.
    /// </summary>
    public async Task<IEnumerable<QueueMessage>> Receive()
    {
      int maxCount = _batchCount;
      int maxBatchCount = 32;
      List<QueueMessage> receivedMessages = new List<QueueMessage>();
      do
      {
        if (maxCount < 32)
          maxBatchCount = maxCount;
        QueueMessage[] messages = await _client.ReceiveMessagesAsync(maxBatchCount, TimeSpan.FromSeconds(30));
        receivedMessages.AddRange(messages);
        await DeleteMessageFromQueue(messages);

        if (messages.Count() < maxBatchCount)
          return receivedMessages;

        maxCount -= messages.Count();
      } while (maxCount > 0);

      return receivedMessages;
    }

    private async Task DeleteMessageFromQueue(QueueMessage[] messages)
    {
      foreach (QueueMessage message in messages)
      {
        await _client.DeleteMessageAsync(message.MessageId, message.PopReceipt);
      }
    }

    private static string Base64Encode(string plainText)
    {
      var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(plainText);
      return System.Convert.ToBase64String(plainTextBytes);
    }
  }
}

EXPLANATION:

The constructor takes the Azure Storage Account connection string and the queue name as parameters, along with a batch count. The batch count is how many messages are received from the queue per read.

The Send method will base64 encode the message before adding it to the queue. If you do not do this, you cannot read the message using Visual Studio Cloud Explorer.

The Receive method will read up to batch count messages from the queue. Messages are deleted immediately after reading. If you do not delete them, you will have to delete each message manually within 30 seconds (the visibility timeout used in this example), or the message will appear in the queue again. Remove the deletion code if you wish to keep that retry functionality.

Also, please note that messages that you receive are also base64 encoded. To decode the message you can use this QueueMessage Extension Method. The extension method will also convert a JSON queue message into an object.


Azure.Storage.Queues QueueMessage Deserialize JSON with .NET Core

The documentation around the .NET QueueMessage is a little fuzzy, and depending on the version of your NuGet libraries the properties might differ. This article uses Azure.Storage.Queues, Version=12.7.0.0.

If you, like me, have systems writing JSON messages to the queue, you also struggle with converting these queue messages back to an object when reading from the queue.

But with a little help from NewtonSoft, it does not have to be that difficult.

Imagine that you wish to get this simple message from the queue:

A simple JSON message added to the queue via Visual Studio

This message can be mapped to this class:

using Newtonsoft.Json;

namespace MyCode
{
  public class HelloWorld
  {
    [JsonProperty("title")]
    public string Title { get; set; }

    [JsonProperty("text")]
    public string Text { get; set; }
  }
}

CHALLENGE #1: IS THE CONTENT ENCODED?

Now, when you read the message from the queue, you might get a surprise, as the original message is nowhere to be seen:

The message in the MessageText property?

Yes, when adding messages from Visual Studio, the contents are base64 encoded. So first the message needs to be decoded, and then converted into an object.
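You can verify the encoding by hand before reaching for the extension method: base64-decoding the MessageText restores the original JSON. A minimal standalone sketch (the JSON string here is made up for illustration):

```csharp
using System;
using System.Text;

// A made-up JSON message, as it would look before being queued
string original = "{\"title\":\"Hello\",\"text\":\"World\"}";

// This is what ends up in the MessageText property
string messageText = Convert.ToBase64String(Encoding.UTF8.GetBytes(original));

// Decoding MessageText restores the original JSON
string decoded = Encoding.UTF8.GetString(Convert.FromBase64String(messageText));
Console.WriteLine(decoded);
```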

THE EXTENSION METHOD:

This extension method will do the heavy lifting for you:

using Azure.Storage.Queues.Models;
using Newtonsoft.Json;
using System;
using System.Text;

namespace MyCode
{
  public static class QueueMessageExtensions
  {
    public static string AsString(this QueueMessage message)
    {
      byte[] data = Convert.FromBase64String(message.MessageText);
      return Encoding.UTF8.GetString(data);
    }

    public static T As<T>(this QueueMessage message) where T : class
    {
      byte[] data = Convert.FromBase64String(message.MessageText);
      string json = Encoding.UTF8.GetString(data);
      return Deserialize<T>(json, true);
    }

    private static T Deserialize<T>(string json, bool ignoreMissingMembersInObject) where T : class
    {
      T deserializedObject;
      MissingMemberHandling missingMemberHandling = MissingMemberHandling.Error;
      if (ignoreMissingMembersInObject)
        missingMemberHandling = MissingMemberHandling.Ignore;
      deserializedObject = JsonConvert.DeserializeObject<T>(json, new JsonSerializerSettings { MissingMemberHandling = missingMemberHandling, });
      return deserializedObject;
    }

  }
}

USAGE:

// This is an arbitrary class that returns a list of messages from 
// an Azure Queue. You have your own class here
IEnumerable<QueueMessage> messages = await _queueRepository.Get();

foreach (var message in messages)
{
  // Use the extension method to convert the message to the
  // HelloWorld type:
  var obj = message.As<HelloWorld>();
  // You can now access the properties:
  _logger.LogInformation($"{obj.Title}, {obj.Text}");
}


C# Using Dapper as your SQL framework in .NET Core

Dapper is an easy-to-use object mapper for .NET and .NET Core, and it can be used in a variety of ways. I use Dapper instead of Entity Framework because it makes my code less complex.

BASICS OF DAPPER: THE OBJECT MAPPER

Basically, Dapper is an object mapper: it maps SQL rows to C# model classes one-to-one. So if you wish to select data from an SQL table, you create a class containing the exact same fields as the SQL table:

Contact Form Table

So for that contact form table above, I can create a corresponding ContactForm model class:

using System;
using Dapper.Contrib.Extensions;

namespace MyCode
{
  [Table("dbo.ContactForms")]
  public class ContactForm
  {
    [Key]
    public int Id { get; set; }

    public DateTime Created { get; set; }
    public string Name { get; set; }
    public string Phone { get; set; }
    public string Email { get; set; }
    public int? ZipCode { get; set; }
    public string Comment { get; set; }
    public string IpAddress { get; set; }
    public string UserAgent { get; set; }
  }
}

By including Dapper.Contrib.Extensions, I can mark the table key and the table itself in my code. Nullable fields like ZipCode are also nullable in my class.

SIMPLE SQL SELECT WITH DAPPER

Now with the mapping in place, selecting and returning a class is super easy:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;
using Dapper.Contrib.Extensions;

public class ContactFormRepository
{
  public IEnumerable<ContactForm> Get()
  {
    using var connection = new SqlConnection("some_sql_connection_string");
    return connection.Query<ContactForm>("select * from ContactForms").ToList();
  }
}

Dapper will map the ContactForms table to my ContactForm model class.

Selecting with parameters is equally easy, presented here in 2 different forms: one method returns only one row, another returns all rows matching the parameter:

public ContactForm Get(int id)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    return connection.QuerySingleOrDefault<ContactForm>("select * from ContactForms where id = @Id", 
      new { Id = id } 
    );
}

public IEnumerable<ContactForm> Get(string email)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    return connection.Query<ContactForm>("select * from ContactForms where email = @Email", 
      new { Email = email } 
    ).ToList();
}

INSERT STATEMENT WITH DAPPER:

When inserting, you decide whether you wish to use your model class (great for exact inserts) or a dynamic object. The latter is great when you have fields that are autogenerated by the SQL server, like auto-incrementing keys or dates that are set to GETDATE():

public void Insert(ContactForm contactForm)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    connection.Insert(contactForm);
}

public void Insert(string name, string email)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    connection.Execute("insert into ContactForms (name, email) values (@Name, @Email)", new { Name = name, Email = email });
}

USING STORED PROCEDURES:

This is also easy: just give the name of the stored procedure. In this example I will also use DynamicParameters, just to be fancy:

public IEnumerable<ContactForm> Get(string email)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    var parameters = new DynamicParameters();
    parameters.Add("@Email", email);
    return connection.Query<ContactForm>("Stored_Procedure_Name", parameters, commandType: CommandType.StoredProcedure).ToList();
}

The same goes with insert using a stored procedure:

public void Insert(string name, string email)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    var parameters = new DynamicParameters();
    parameters.Add("@Name", name);
    parameters.Add("@Email", email);
    connection.Execute("Stored_Procedure_Name", parameters, commandType: CommandType.StoredProcedure);
}

That’s basically it. Very easy to use.


Sitecore KeepLockAfterSave – Configuring Security Policies Per-Role Based

Now here is a nifty Sitecore trick. You have probably learned about the AutomaticLockOnSave feature that allows Sitecore to lock an item when it is saved. The feature is enabled or disabled using a configuration setting (and can be negated with the inverse AutomaticUnlockOnSaved setting).

But did you know that you can set the lock-on-save properties on a per-role basis?

Yes, Sitecore has a section in the core database where a lot of the security policies are stored. Users who have read access to a particular policy item have that policy applied.

The policies are stored here: /sitecore/system/Settings/Security/Policies

Sitecore Security Policies

For “KeepLockAfterSave”, you will need to modify the config file and allow Sitecore to read the setting from the CORE database:

<!-- 
    The "Keep Lock After Save" item is serialized in order to deploy permission on it, 
    making the roles SSTEditor + SSTAdmin unable to read it, thus making them not keep locks on 
    datasource items after doing page editing 
-->
<include name="KeepLockAfterSave" database="core" path="/sitecore/system/Settings/Security/Policies/Page Editor/Keep Lock After Save"/>

To set the property to false for a certain group, block that group's access to the item:

My manager group does not have access to this item, meaning that the “Keep Lock After Save” is false.


C# Remove specific Querystring parameters from URL

These 2 extension methods will remove specific query string parameters from a URL in a safe manner.

METHOD #1: SPECIFY THE PARAMETERS THAT SHOULD GO (NEGATIVE LIST):

using System;
using System.Linq;
using System.Web;

namespace MyCode
{
  public static class UrlExtension
  {
    public static string RemoveQueryStringsFromUrl(this string url, string[] keys)
    {
      if (!url.Contains("?"))
        return url;

      string[] urlParts = url.ToLower().Split('?');
      try
      {
        var querystrings = HttpUtility.ParseQueryString(urlParts[1]);
        foreach (string key in keys)
          querystrings.Remove(key.ToLower());

        if (querystrings.Count > 0)
          return urlParts[0] 
            + "?" 
            + string.Join("&", querystrings.AllKeys.Select(c => c.ToString() + "=" + querystrings[c.ToString()]));
        else
          return urlParts[0];
      }
      catch (NullReferenceException)
      {
        return urlParts[0];
      }
    }
  }
}

Usage/Test cases:

string url = "https://briancaos.wordpress.com/page/?id=1&p=2";
string url2 = url.RemoveQueryStringsFromUrl(new string[] {"p"});
string url3 = url.RemoveQueryStringsFromUrl(new string[] {"p", "id"});

//Result: 
// url2: https://briancaos.wordpress.com/page/?id=1
// url3: https://briancaos.wordpress.com/page/
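Both extension methods lean on HttpUtility.ParseQueryString, which is available in .NET Core via System.Web.HttpUtility. A quick standalone sketch of what that call gives you:

```csharp
using System;
using System.Web;

// ParseQueryString returns a NameValueCollection;
// ToString() re-serializes it into a query string
var querystrings = HttpUtility.ParseQueryString("id=1&p=2");
Console.WriteLine(querystrings["id"]); // 1

querystrings.Remove("p");
Console.WriteLine(querystrings.ToString()); // id=1
```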

METHOD #2: SPECIFY THE PARAMETERS THAT MAY STAY (POSITIVE LIST):

using System;
using System.Linq;
using System.Web;

namespace MyCode
{
  public static class UrlExtension
  {
    public static string RemoveQueryStringsFromUrlWithPositiveList(this string url, string[] allowedKeys)
    {
      if (!url.Contains("?"))
        return url;

      string[] urlParts = url.ToLower().Split('?');
      try
      {
        var querystrings = HttpUtility.ParseQueryString(urlParts[1]);
        var keysToRemove = querystrings.AllKeys.Except(allowedKeys);

        foreach (string key in keysToRemove)
          querystrings.Remove(key);

        if (querystrings.Count > 0)
          return urlParts[0] 
            + "?" 
            + string.Join("&", querystrings.AllKeys.Select(c => c.ToString() + "=" + querystrings[c.ToString()]));
        else
          return urlParts[0];
      }
      catch (NullReferenceException)
      {
        return urlParts[0];
      }
    }
  }
}

Usage/Test cases:

string url = "https://briancaos.wordpress.com/page/?id=1&p=2";
string url2 = url.RemoveQueryStringsFromUrlWithPositiveList(new string[] {"p"});
string url3 = url.RemoveQueryStringsFromUrlWithPositiveList(new string[] {"p", "id"});

//Result: 
// url2: https://briancaos.wordpress.com/page/?p=2
// url3: https://briancaos.wordpress.com/page/?id=1&p=2


Sitecore ComputedIndexField extends your SOLR index

The Sitecore SOLR index is your quick access to Sitecore content. And you can extend this access by adding computed index fields. This is a way of enriching your searches with content that is not part of your Sitecore templates, but is needed when doing quick searches.

THE SIMPLE SCENARIO: GET A FIELD FROM THE PARENT ITEM

This is a classic scenario, where the content in Sitecore is organized in a hierarchy, for example by Category/Product, and you need to search within a certain category:

Category/Product Hierarchy

In order to make a direct search for products within a certain category, you will need to extend the product template with the category ID, so you can search in one take. So let's add the category ID to the product template's SOLR document using a computed index field.

STEP 1: THE CONFIGURATION:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:env="http://www.sitecore.net/xmlconfig/env/">
  <sitecore>
    <contentSearch>
      <indexConfigurations>
        <defaultSolrIndexConfiguration>
          <fieldMap>
            <fieldNames hint="raw:AddFieldByFieldName">
              <field fieldName="CategoryId" returnType="guid" />
            </fieldNames>
          </fieldMap>
          <documentOptions>
            <fields hint="raw:AddComputedIndexField">          
              <field fieldName="CategoryId" returnType="string">MyCode.ComputedIndexFields.CategoryId, MyDll</field>
            </fields>
          </documentOptions>
        </defaultSolrIndexConfiguration>
      </indexConfigurations>
    </contentSearch>
  </sitecore>
</configuration>

The configuration is a 2-step process. The “fieldMap” maps field names (CategoryId in this case) to output types, in this case a GUID. The documentOptions maps the field name to a piece of code that can compute the field value. Please note that documentOptions claims the return type is a string, not a Guid. But don't worry: as long as our code returns a Guid, everything will be fine.

STEP 2: THE CODE

using Sitecore.ContentSearch;
using Sitecore.ContentSearch.ComputedFields;
using Sitecore.Data.Items;

namespace MyCode.ComputedIndexFields
{
  public class CategoryId : IComputedIndexField
  {
    public object ComputeFieldValue(IIndexable indexable)
    {
      Item item = indexable as SitecoreIndexableItem;

      if (item == null)
        return null;

      if (item.TemplateName != "Product")
        return null;

      Item categoryItem = item.Parent;
      if (categoryItem.TemplateName != "Category")
        return null;

      return categoryItem.ID.ToGuid();
    }

    public string FieldName
    {
      get;
      set;
    }

    public string ReturnType
    {
      get;
      set;
    }
  }
}

The code is equally straightforward. If the code returns NULL, no value will be added.

The code first checks whether the item being indexed is a product; if not, it is skipped. Likewise, if the parent item is not a category, the code is skipped. Only when the item is a product and its parent is a category is the category ID added to the index.

You will need to re-index your SOLR index. When the index is updated, you will find a “CategoryId” field on all of the “Product” templates in the SOLR index.
