Categories
.Net ASP.Net C# NLog

An abnormal way of logging – open for discussion

While brainstorming about something else, a small idea came to my mind. People reading this blog post might call the idea silly, or they might say it's a nice idea.

The point is, we use logging for various purposes – mostly for troubleshooting. Very verbose logs are a nightmare in terms of performance, storage, retrieval and digging through for the right information. At the same time, troubleshooting sometimes becomes a pain because of inadequate information in the logs.

What if we log Info and above under normal circumstances, and Trace and/or Debug only under certain conditions such as unexpected exceptions or errors?

Here is a brief overview of how this might be implemented – the trade-off is slight memory pressure.

  1. Collect Trace and/or Debug messages into an in-memory log – for example, if using NLog, use the Memory target.
  2. Have a static method that writes the logs from the Memory target into a different log target such as File / Database etc…
  3. Under specific conditions such as an exception, call the static method – in ASP.Net you could even implement an exception filter to do this.

This might be a win-win scenario: detailed information is collected for unexpected exceptions and errors, while normal logging applies everywhere else. Because a memory target is used, the drawbacks are a very small performance hit and slightly higher memory usage.
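Here is a minimal sketch of the steps above using NLog. The target names, file paths and the flush helper are my own illustration of the idea, not a prescribed implementation:

// Sketch only: buffer Trace+ in a MemoryTarget, flush to a file on demand (e.g. from an exception filter).
using NLog;
using NLog.Config;
using NLog.Targets;

public static class DetailedLogFlusher
{
    private static readonly MemoryTarget _memoryTarget = new MemoryTarget("memoryDetail")
    {
        Layout = "${longdate}|${level}|${logger}|${message}|${exception}"
    };

    public static void Configure()
    {
        var config = new LoggingConfiguration();

        // Normal logging: Info and above go to the file.
        var fileTarget = new FileTarget("mainLog") { FileName = "app.log" };
        config.AddRule(LogLevel.Info, LogLevel.Fatal, fileTarget);

        // Verbose logging: Trace and above are held only in memory.
        config.AddRule(LogLevel.Trace, LogLevel.Fatal, _memoryTarget);

        LogManager.Configuration = config;
    }

    // Call this from a catch block or ASP.Net exception filter to persist the buffered detail.
    public static void FlushToFile(string path = "detailed.log")
    {
        System.IO.File.AppendAllLines(path, _memoryTarget.Logs);
        _memoryTarget.Logs.Clear();
    }
}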

I would love to know how other developers are implementing or handling such use cases.


Categories
.Net C# gRPC

Advantages of gRPC vs REST

REST – Representational State Transfer – is the most common way of data communication and is based on HTTP 1.1. The data formats are generally XML or JSON. HTTP status codes are usually used for statuses, and sometimes the payload carries the statuses.

gRPC – Google’s Remote Procedure Call – is a more modern method and is based on HTTP/2. Because of HTTP/2, some older software or browsers might not have support, but gRPC is the way forward. The advantages are several; I am mentioning some of them here:

  1. A high-performance serializer and deserializer based on the protobuf binary format.
  2. The data size is much smaller compared with JSON / XML.
  3. Server-to-client communication (on an existing connection), client-to-server communication, streaming from server to client, streaming from client to server, and even duplex communication.
  4. Efficient usage of the network – because gRPC is based on HTTP/2, a new connection need not be opened for every call.

.Net has fully embraced and supports modern code generation for gRPC. In further blog posts, I will explain and provide some code samples for using gRPC in .Net.

I personally have not performed any speed tests comparing REST vs gRPC, but I did use gRPC in a micro-service architecture application and found its performance significantly higher than REST.
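As a small taste before those posts, here is a hedged sketch of a gRPC call from .Net. The Greeter service, HelloRequest/HelloReply messages and the localhost address are placeholders from the standard greet.proto template, not part of any real application:

// Assumes a Greeter service generated from greet.proto via the Grpc.Tools package.
using System;
using System.Threading.Tasks;
using Grpc.Net.Client;

public static class GrpcClientSample
{
    public static async Task Main()
    {
        // One HTTP/2 channel is reused for all calls made through this client.
        using var channel = GrpcChannel.ForAddress("https://localhost:5001");
        var client = new Greeter.GreeterClient(channel);

        var reply = await client.SayHelloAsync(new HelloRequest { Name = "World" });
        Console.WriteLine(reply.Message);
    }
}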


Categories
.Net ASP.Net C#

Dependency Injection in ASP.Net MVC Core explained

Dependency Injection is a software development pattern where instead of directly instantiating objects, the objects required by a class are passed in. This helps with maintaining code flexibility, writing unit test cases etc…

The first and foremost thing is to define interfaces and then write implementations. This way, the consuming code only needs to know about the methods to be invoked, without worrying about the implementation. Software known as a Dependency Injection container takes care of instantiating the actual objects, as long as the bindings are defined.

This blog post is not about Dependency Injection or Unit Tests in general but about how to use Dependency Injection in ASP.Net MVC Core. ASP.Net MVC Core comes with a built-in DI container and supports constructor-based injection, i.e. instances are passed into the constructor of the consuming class.

There are 3 scopes for objects:

Transient: Every time a class needs an object, a new instance of the requested type is instantiated and passed in. For example, if there are 3 classes that need an instance of IService, each class receives its own copy every time, even if the three classes are used as part of the same request/response.

Scoped: One object of a particular type is created per request/response, and the same object is passed into every class that requests it while processing that request/response cycle.

Singleton: One instance of the class is instantiated for the entire lifetime of the application, and the same instance is passed to every class in every request/response cycle.

The use cases for each would vary. Scoped is the usual choice for per-request services, i.e. one object of a given type shared by every class in the same request/response cycle.

Singletons are useful in cases such as IConfiguration, where the same instance can be passed around for reading config information rather than having multiple instances.

Interfaces and implementation classes can be registered by calling the following methods on IServiceCollection, for example:

services.AddSingleton<IInterface, Implementation>();
services.AddScoped<IInterface, Implementation>();
services.AddTransient<IInterface, Implementation>();

or, for a singleton whose object is already instantiated, the instance can be passed in by calling:

services.AddSingleton<IInterface>(instance);
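To make this concrete, here is a minimal sketch of registration and constructor injection in an ASP.Net Core app. IGreetingService, GreetingService and HomeController are hypothetical names used only for illustration:

// Illustrative only: the service and controller names are made up.
using Microsoft.AspNetCore.Mvc;

var builder = WebApplication.CreateBuilder(args);

// Register the bindings with the built-in container.
builder.Services.AddControllersWithViews();
builder.Services.AddScoped<IGreetingService, GreetingService>();

var app = builder.Build();
app.MapDefaultControllerRoute();
app.Run();

public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}

// The container supplies IGreetingService through the constructor.
public class HomeController : Controller
{
    private readonly IGreetingService _greetingService;

    public HomeController(IGreetingService greetingService)
    {
        _greetingService = greetingService;
    }

    public IActionResult Index() => Content(_greetingService.Greet("World"));
}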


Categories
.Net ASP.Net C#

ViewComponents in ASP.Net and caching ViewComponents

ViewComponents are pretty much like PartialViews but slightly more useful. ViewComponents help in rendering parts of a web page that can be re-used across the website. Also, a ViewComponent's output can be cached. This blog article discusses creating a ViewComponent and a caching example.

A ViewComponent is a public class with a ViewComponent suffix, such as HeaderViewComponent or MenuViewComponent etc… The class can be decorated with the [ViewComponent] attribute, or it can inherit from the ViewComponent class or from any other class that is itself a ViewComponent, for example some kind of BaseViewComponent.

A ViewComponent must have one method that gets invoked by the runtime, either:

async Task<IViewComponentResult> InvokeAsync()
or
IViewComponentResult Invoke()

By default, the runtime searches for the view in the following paths:

  • /Views/{Controller Name}/Components/{View Component Name}/{View Name}
  • /Views/Shared/Components/{View Component Name}/{View Name}
  • /Pages/Shared/Components/{View Component Name}/{View Name}

A ViewComponent gets invoked from a .cshtml view by using Component.InvokeAsync().

The call to Component.InvokeAsync() can be wrapped inside the <cache> tag helper for caching.

With the concepts discussed above, let's look at a code sample. Assuming you have an ASP.Net MVC Core test project open, add a new class named TestViewComponent in TestViewComponent.cs.

using Microsoft.AspNetCore.Mvc;


namespace TestProject.ViewComponents
{
    public class TestViewComponent : ViewComponent
    {
        public async Task<IViewComponentResult> InvokeAsync()
        {
            // Renders Views/Shared/Components/Test/Test.cshtml
            return await Task.FromResult<IViewComponentResult>(View("Test"));
        }
    }
}

Now under Views/Shared, create a folder named Components. Under Views/Shared/Components, create another folder named Test. The Views/Shared/Components/Test folder can now contain views for the TestViewComponent. Create a new Test.cshtml under Views/Shared/Components/Test and put some HTML content in it, for example:

<p>Hello from TestViewComponent.</p>

Now, somewhere in Views/Home/Index.cshtml, place the following invocation:

@(await Component.InvokeAsync("Test"))

If you need to cache the output, wrap the invocation inside the <cache> tag helper:

<cache expires-after="@TimeSpan.FromMinutes(5)">
    @(await Component.InvokeAsync("Test"))
</cache>

Hope this blog post helps someone!


Categories
.Net C# Solr

Some performance tips when ingesting documents into Solr

As mentioned in several earlier blog posts, I have been building PodDB on the Microsoft .Net platform and Solr. Solr is built on top of Apache Lucene.

Lucene.Net is a very high-performance library for working directly with Apache Lucene; SolrNet is a library for working with Solr. Solr is very customizable, fault-tolerant, has several additional features available out of the box, and is built on top of Lucene. Working with SolrNet can be a bit slower because all the API calls are routed via a REST API, with the usual overhead of establishing network connections and serializing and deserializing JSON or XML.

Over the past few days, I have been working with a small subset of documents (approximately 275 – 300, the same set that would be part of the Alpha release) and trying to tweak the settings for optimal search relevance. This required trying various Solr configurations, re-indexing data etc… The very first version of the data ingestion component (which does much more pre-processing than just ingesting into Solr) used to take approximately 10 minutes. Now the performance has been optimized and the ingestion happens within 15 seconds – roughly a 40x speed-up, achieved entirely through programming changes.

The trick used was one of the oldest in the book – batch processing. Instead of writing one document at a time into a MySQL database and into Solr, I rewrote the application to ingest in batches, and the application was much faster.

Batching with multi-threading might be even faster.

In other words, instead of calling solr.Add() for each document, create the documents, hold them in a list, and call solr.AddRange().

Similarly, batch the calls to solr.Commit() and solr.Optimize(), i.e. call those methods once for every 1000 or so documents rather than for every document.

Assuming doc1, doc2 and doc3 are Solr documents that need to be written, for example:

//NO
solr.Add(doc1);
solr.Add(doc2);
solr.Add(doc3);

//YES
var lst = new List<ENTITY>();
lst.Add(doc1);
lst.Add(doc2);
lst.Add(doc3);

solr.AddRange(lst);
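Here is a rough sketch of batching both the adds and the commits. The batch size, the PodcastDocument type and the documents collection are illustrative assumptions (assuming SolrNet's ISolrOperations<T> as the solr instance), not the actual PodDB code:

// Ingest in batches and commit once per batch instead of once per document.
const int batchSize = 1000;
var batch = new List<PodcastDocument>(batchSize); // PodcastDocument is a hypothetical entity

foreach (var doc in documents)
{
    batch.Add(doc);

    if (batch.Count == batchSize)
    {
        solr.AddRange(batch); // one round-trip for the whole batch
        solr.Commit();        // one commit per batch, not per document
        batch.Clear();
    }
}

if (batch.Count > 0)
{
    solr.AddRange(batch);
    solr.Commit();
}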

I like to share knowledge, and I am hoping this blog post helps someone.

My 2 cents to the world of the blogosphere!


Categories
.Net C#

A quick introduction to Dapper!

Dapper is a micro ORM tool with very high performance and very decent features. Dapper is one of my favourite tools. Dapper has excellent documentation here and here.

Dapper supports SQL statements and Stored Procedures. My preference is usually Stored Procs over SQL statements.

Dapper provides extension methods for the IDbConnection interface, and thus several methods become available on the connection object.

If you have a class Person with Id and Name, Dapper can handle mapping:

using (var connection = new MySqlConnection(connectionString))
{
    await connection.OpenAsync();

    var persons = await connection.QueryAsync<Person>("SELECT Id, Name FROM Person WHERE ....");
}

The above code snippet shows how to query the database and get an IEnumerable of Person objects.
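For reference, a minimal Person class matching the query columns might look like this (a sketch; the real class can of course have more members):

public class Person
{
    // Property names match the column names returned by the query.
    public int Id { get; set; }
    public string Name { get; set; }
}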

There are several other methods; each has both synchronous and asynchronous versions, and some have generic versions for mapping to objects, such as:

.Execute / ExecuteAsync
.ExecuteReader / ExecuteReaderAsync
.ExecuteScalar / ExecuteScalarAsync
.ExecuteScalar<T> / ExecuteScalarAsync<T>
etc...

Passing in parameters is also very straightforward: parameters can be passed in as an existing object or as an anonymous object such as

new {Id = 1}
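Here is a hedged sketch of both styles – the table, column and stored-procedure names are made up for illustration:

using Dapper;
using System.Data;

using (var connection = new MySqlConnection(connectionString))
{
    // Anonymous-object parameters map to @Id in the SQL text.
    var person = await connection.QuerySingleOrDefaultAsync<Person>(
        "SELECT Id, Name FROM Person WHERE Id = @Id",
        new { Id = 1 });

    // Stored procedure call; an existing object could supply the parameters instead.
    var affected = await connection.ExecuteAsync(
        "usp_UpdatePersonName",
        new { Id = 1, Name = "Kanti" },
        commandType: CommandType.StoredProcedure);
}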

Even transactions and lists of objects are supported.

Multiple resultsets are supported as well. If you are familiar with ADO.Net, using Dapper is very easy and straightforward, with excellent performance and minimal overhead.

Entity Framework is a close second choice when dealing with a Database-First approach. If using a Code-First approach, Entity Framework would be the preferred choice.

Hoping this blog post helps someone.

Categories
.Net C# Lucene MultiFieldQueryParser

Lucene with C#

As part of the development effort for PodDB – a search engine for podcasts and a product of ALight Technology And Services Limited – I have been deciding between Lucene.Net and Solr. I strongly suggest Solr over Lucene.Net if you want to scale. For smaller datasets, Lucene.Net shouldn't be a problem and is in fact very efficient. But if you want to scale to larger datasets and want built-in sharding and replication features out of the box, choose Solr. With that said, I do have plans of scaling PodDB if it gains traction, so I chose Solr.

But for the sake of knowledge sharing, in this article I am going to show how to use Lucene.Net for full-text indexing. I will not go over complex scenarios, but at the same time this article is NOT a Hello World for Lucene.Net.

Moreover, Lucene.Net does not seem to be under active development. As of this blog post's date – September 13th, 2022 – there have been no GitHub commits over the past 2 – 3 months. As software developers, technical leads, and architects, we have the responsibility of making the proper choices for the underlying technology stack. Although ALight Technology And Services Limited is not an enterprise yet, I would still like to make decisions suitable for the long term.

Now let’s dig into some code.

Lucene.Net version 4 is in pre-release, so use the pre-release versions of the Lucene.Net packages.

Because Lucene.Net is in beta and there could be lots of breaking changes, the compatibility version needs to be declared in code.

private const LuceneVersion AppLuceneVersion = LuceneVersion.LUCENE_48;

Now we specify the directory where we want the index to be written, along with some initialization code.

// Open (or create) the index directory and set up the writer.
var dir = FSDirectory.Open(indexDirectory);
var analyzer = new StandardAnalyzer(AppLuceneVersion);
var indexConfig = new IndexWriterConfig(AppLuceneVersion, analyzer);
var writer = new IndexWriter(dir, indexConfig);

Now, we use the IndexWriter for writing documents. There are primarily 2 types of string fields that are important.

  1. TextField – the string data is indexed for full-text search.
  2. StringField – the data is not indexed for full-text search but can be matched exactly, like a normal string, which is useful for fields such as Id etc…

Based on the above-mentioned types, determine which data needs full-text search capabilities, which data does not, and whether certain data needs to be stored in Lucene at all.

var doc = new Document
{
    new TextField("Title", "Some Data", Field.Store.YES),
    new TextField("Description", "Description", Field.Store.YES),
    new StringField("Id", id, Field.Store.YES)
};

You can add as many TextField and StringField instances as needed. You can even create separate instances of TextField and StringField and call doc.Add().

If you want to influence the ranking of the search results provided by Lucene, you can specify the Boost of a TextField. By default, the Boost, i.e. the weight given to any field, is 1.0, but you can specify a higher weight for a certain field. For example, if a keyword appears in the title, you might want to boost that document.
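For example, a sketch of an index-time boost (assuming Lucene.Net 4.8's Boost property on Field; the values are arbitrary):

// Title matches are weighted higher than Description matches at index time.
var titleField = new TextField("Title", "Some Data", Field.Store.YES) { Boost = 2.0f };
var descriptionField = new TextField("Description", "Description", Field.Store.YES);

var doc = new Document();
doc.Add(titleField);
doc.Add(descriptionField);
doc.Add(new StringField("Id", id, Field.Store.YES));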

Add the doc instance to the writer and flush:

writer.AddDocument(doc);
writer.Flush(triggerMerge: false, applyAllDeletes: false);

For speed and efficiency, batch the documents before calling Flush, instead of calling Flush for every document.
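A rough sketch of that batching, where documents is a hypothetical collection of pre-built Document instances:

// Add all documents first, then flush/commit once for the whole batch.
foreach (var document in documents)
{
    writer.AddDocument(document);
}

writer.Flush(triggerMerge: false, applyAllDeletes: false);
writer.Commit();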

Assuming you have built your index, let's now start retrieving.

using var dir = FSDirectory.Open(indexPath);
var analyzer = new StandardAnalyzer(AppLuceneVersion);

var indexConfig = new IndexWriterConfig(AppLuceneVersion, analyzer);
using var writer = new IndexWriter(dir, indexConfig);
using var lreader = writer.GetReader(applyAllDeletes: true);
var searcher = new IndexSearcher(lreader);

var exactQuery = new PhraseQuery();
exactQuery.Add(new Term("Id", id));
var search = searcher.Search(exactQuery, null, 1);
var docs = search.ScoreDocs;

if (docs?.Length == 1)
{
    Document d = searcher.Doc(docs[0].Doc);
    var title = d.Get("Title");
}

The above source code retrieves a document based on Id; it is not a full-text search. The first few lines of code are standard initializers. Then we instantiate a PhraseQuery and specify that the search should happen on the "Id" field. If there is a match, we retrieve the Title of the matching document.

Now let’s see how we can search based on Title and Description as mentioned above:

using var dir = FSDirectory.Open(indexPath);
var analyzer = new StandardAnalyzer(AppLuceneVersion);

var indexConfig = new IndexWriterConfig(AppLuceneVersion, analyzer);
using var writer = new IndexWriter(dir, indexConfig);
using var lreader = writer.GetReader(applyAllDeletes: true);
var searcher = new IndexSearcher(lreader);

string[] fnames = { "Title", "Description" };
var multiFieldQP = new MultiFieldQueryParser(AppLuceneVersion, fnames, analyzer);

Query query = multiFieldQP.Parse("My Search Term");
var search = searcher.Search(query, null, 10);

Console.WriteLine(search.TotalHits);

ScoreDoc[] docs = search.ScoreDocs;
foreach (var scoreDoc in docs)
{
    Document d = searcher.Doc(scoreDoc.Doc);

    var Id = d.Get("Id");
    var Title = d.Get("Title");
    var Description = d.Get("Description");
}

In the above source code, the first few lines are the standard initializers. The fnames variable specifies the fields on which the search should happen. We then instantiate a MultiFieldQueryParser to enable searching on multiple fields and build the query from the search term; advanced boolean queries can also be created in this step. The search is then performed with a limit on how many documents the result should contain – in this case, 10 results. The rest of the code fetches the field values.

I am hoping this blog article helps someone.

Categories
.Net C# Lucene Solr

Lucene vs Solr

I played around with Lucene.Net and Solr. Solr is built on top of Lucene.

Lucene.Net is a port of the Lucene library, written in C#, for working with Lucene on the Microsoft .Net stack.

Lucene is a library built by the Apache Software Foundation that provides full-text search capabilities. There are a few other alternatives, such as Sphinx, or the full-text search capabilities built into RDBMSs such as Microsoft SQL Server, MySQL, MariaDB, PostgreSQL etc… However, the full-text search capabilities in RDBMSs are not as efficient as Lucene.

Solr and ElasticSearch are built on top of Lucene. ElasticSearch is more suitable and efficient for time-series data.

Now let’s see more about Solr vs Lucene.

Solr provides some additional features such as replication, a web GUI, collecting and publishing metrics, fault tolerance etc… Solr provides HTTP REST-based APIs for management and for adding documents, searching documents etc…

Directly working with Lucene would provide access to more fine-grained control.

Because Solr provides REST-based APIs, there is the overhead of establishing HTTP connections, formatting the requests, and JSON serialization and deserialization at both ends, i.e. the client making the call and the Solr server. When working directly with Lucene, this overhead does not exist.

If searching through the documents happens on the same server, working directly with Lucene might be efficient – specifically in smaller-data scenarios. But if huge datasets and scaling are a concern, Solr might be the proper approach.

If the infrastructure requires separate search servers, with a set of application servers querying the search servers for data, Solr might be more useful and easier because of its existing support for replication and HTTP APIs.

If performance is of the highest importance and fine-grained control is still needed, custom-built applications could expose the data from the search servers over a more efficient protocol such as gRPC – but then, obviously, replication mechanisms need to be custom-built.

Categories
.Net C# UnitTests

Moq Non-Invocable Test Setups

Most of you might know the Moq library used for unit tests in .Net. The general usage of Moq is stubbing interfaces and verifying that methods are called with the expected parameters, but Moq can also be used for validating that a certain method has not been called.

Assume you have an interface ISpecialInterface and you expect ISpecialInterface.Method() to be called only when certain logic applies, such as an if condition being true. If you are writing a unit test for the case where the condition is false, you want to verify that ISpecialInterface.Method() is not invoked.

// Strict mock: any call without a matching Setup throws immediately.
var mockSpecialInterface = new Mock<ISpecialInterface>(MockBehavior.Strict);

mockSpecialInterface.Setup(ss => ss.Method(It.IsAny<string>()));

var sut = new SpecialObject(mockSpecialInterface.Object);
sut.LogicMethod();

// Verify that Method() was never invoked during LogicMethod().
mockSpecialInterface.Verify(ss => ss.Method(It.IsAny<string>()), Times.Never());

In the above code, mockSpecialInterface.Verify(ss => ss.Method(It.IsAny<string>()), Times.Never()); is the code that verifies the method is not called.


Categories
.Net C#

Programmatically configuring NLog in C#

Some people, for various reasons, might prefer configuring NLog programmatically. One use case, for example: you may not want to store sensitive information in the nlog.config file. Some targets require sensitive information in the config file – the database target, for instance, needs a connection string.

You might want to store the sensitive information somewhere else in an encrypted format, then decrypt it and configure the logger programmatically. Here is a code sample showing how to configure such loggers.

var logConfig = new LoggingConfiguration();

// File target
var fileTarget = new FileTarget
{
    FileName = typeof(Program).FullName + ".log"
};

fileTarget.Layout = @"${date:format=HH\:mm\:ss} ${logger}:${message};${exception}";

var fileRule = new LoggingRule("*", LogLevel.Debug, fileTarget);
logConfig.LoggingRules.Add(fileRule);
logConfig.AddTarget("logfile", fileTarget);

// Database target
var dbTarget = new DatabaseTarget();

// Decrypt the connection string from wherever it is securely stored.
dbTarget.ConnectionString = YourSecureMethodForDecryptingAndObtainingConnectionString();

dbTarget.CommandText = @"INSERT INTO [Log] (Date, Thread, Level, Logger, Message, Exception) VALUES (GETDATE(), @thread, @level, @logger, @message, @exception)";

dbTarget.Parameters.Add(new DatabaseParameterInfo("@thread", new NLog.Layouts.SimpleLayout("${threadid}")));

// ... remaining parameters elided ...

logConfig.AddTarget("database", dbTarget);
var dbRule = new LoggingRule("*", LogLevel.Debug, dbTarget);
logConfig.LoggingRules.Add(dbRule);

LogManager.Configuration = logConfig;

In the above sample code, we looked at how to add multiple types of targets – File and Database – how to set the layout for the FileTarget, how to configure logging rules, and finally how to assign the programmatic configuration.

An interesting target is the Memory target, which allows writing log messages to an in-memory list for programmatic retrieval – great for unit testing.

There are some code samples for the Memory target in the above-mentioned link.
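For completeness, here is a minimal sketch of my own (not from the linked samples) of configuring the Memory target programmatically and reading back the buffered messages in a unit test:

// Configure a MemoryTarget and assert on the captured log messages.
var memoryTarget = new NLog.Targets.MemoryTarget("testMemory")
{
    Layout = "${level}|${message}"
};

var config = new NLog.Config.LoggingConfiguration();
config.AddRule(NLog.LogLevel.Trace, NLog.LogLevel.Fatal, memoryTarget);
NLog.LogManager.Configuration = config;

var logger = NLog.LogManager.GetCurrentClassLogger();
logger.Info("Hello from the test");

// The buffered messages are available for assertions.
Console.WriteLine(memoryTarget.Logs.Count); // 1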