Canceling abandoned requests in ASP.NET Core – a blog article on how to stop processing abandoned requests in ASP.NET Core MVC, i.e. when a browser starts loading a page but the user clicks Stop or presses the Escape key. This way, some server-side resources can be saved.
Two of my favorite, absolutely free, desktop-based (downloadable) applications for diagrams – cloud architecture, UML, database ER diagrams, etc. Very useful for small startups and developers.
As a knowledge worker and IT developer, I need to stay up to date with the latest technologies, as do millions of other IT developers. I cannot cater to the needs of every knowledge worker or IT developer, but for .NET developers I am planning to post recommended reads almost every day. The links will point to third-party blog posts, and the topics will mostly cover IT, cloud computing, .NET, web development, etc.
I have explained sitemaps in an earlier blog post – Sitemaps an intro! In this blog post we will look at some sample code snippets for generating XML sitemaps, using two different approaches. Read the comments in the code.
I might consider creating a small library for generating XML sitemaps and open sourcing the library.
A valid XML document starts with an XML declaration, i.e. the <?xml … ?> line. Then there is exactly one root node, and the root node contains the other child nodes.
In an XML sitemap, we have the XML declaration and the <urlset> root node. The <urlset> root node has one or more <url> child nodes, up to a maximum of 50,000. Each <url> node has exactly one mandatory <loc> node. The <lastmod>, <changefreq> and <priority> nodes are optional; if present, each may appear at most once per <url> node.
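For example, a minimal valid sitemap with a single URL looks like this (the URL and timestamp are just placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2023-01-01T00:00:00Z</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>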
Business logic validations would be:
There must be between 1 and 50,000 <url> nodes.
The URLs must be unique.
<loc> is mandatory and the value must be a valid http or https URL.
If present, <lastmod> must be in valid W3C datetime format; there can be 0 or 1 <lastmod> nodes per <url> node.
<changefreq> can appear 0 or 1 times per <url> node and must be one of the following:
always
hourly
daily
weekly
monthly
yearly
never
<priority> can appear 0 or 1 times per <url> node and must be a valid number between 0.0 and 1.0, such as 0.1 or 0.2.
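A rough sketch of validating a single entry against these rules might look like the following (it uses the Url model from Approach 2 further below and skips the <lastmod> format check; this is illustrative, not a complete implementation):
private static readonly HashSet<string> AllowedChangeFreqs = new HashSet<string>
    { "always", "hourly", "daily", "weekly", "monthly", "yearly", "never" };

public static bool IsValidUrlEntry(Url url)
{
    // <loc> is mandatory and must be an absolute http or https URL.
    if (!Uri.TryCreate(url.Loc, UriKind.Absolute, out var uri) ||
        (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps))
        return false;

    // <changefreq>, if present, must be one of the allowed values.
    if (url.Changefreq != null && !AllowedChangeFreqs.Contains(url.Changefreq))
        return false;

    // <priority> must be between 0.0 and 1.0.
    if (url.Priority < 0 || url.Priority > 1)
        return false;

    return true;
}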
Approach – 1:
In the following code snippets we will not be dealing with the business logic validations; this is purely about creating the XML.
// The sitemap namespace needs to be applied to every element,
// otherwise the child nodes end up with an empty xmlns="" attribute.
const string ns = "http://www.sitemaps.org/schemas/sitemap/0.9";

XmlDocument xmlDoc = new XmlDocument();

// XML declaration: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
var declaration = xmlDoc.CreateXmlDeclaration("1.0", "UTF-8", "yes");
xmlDoc.AppendChild(declaration);

// Root node <urlset>
XmlNode urlSet = xmlDoc.CreateElement("urlset", ns);

// One <url> node with its mandatory <loc> child
var url = xmlDoc.CreateElement("url", ns);
var loc = xmlDoc.CreateElement("loc", ns);
loc.InnerText = "https://www.google.com";
url.AppendChild(loc);

// Optional <lastmod> in W3C datetime format (UTC)
var lastMod = xmlDoc.CreateElement("lastmod", ns);
lastMod.InnerText = $"{DateTime.UtcNow.ToString("s")}Z";
url.AppendChild(lastMod);

urlSet.AppendChild(url);
xmlDoc.AppendChild(urlSet);

// Append more <url> nodes.
// Move the logic of generating url nodes into a separate method, call the method repetitively, apply business logic etc...
xmlDoc.Save(@"C:\temp\sitemap.xml");
Approach – 2:
In this approach also we won’t be applying business logic; this is a sample code snippet.
UrlSet.cs
[XmlRoot(ElementName = "urlset", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
public class Urlset
{
[XmlElement(ElementName = "url", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
public List<Url> Url { get; set; }
[XmlAttribute(AttributeName = "xmlns")]
public string Xmlns { get; set; }
public Urlset()
{
Xmlns = "http://www.sitemaps.org/schemas/sitemap/0.9";
Url = new List<Url>();
}
}
Url.cs
public class Url
{
[XmlElement(ElementName = "loc", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
public string Loc { get; set; }
[XmlElement(ElementName = "lastmod", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
public string Lastmod { get; set; }
[XmlElement(ElementName = "changefreq", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
public string Changefreq { get; set; }
[XmlElement(ElementName = "priority", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
public double Priority { get; set; }
public Url()
{
    // Defaults: <lastmod> in W3C datetime format (UTC); <changefreq> values must be lowercase.
    Lastmod = $"{DateTime.UtcNow.ToString("s")}Z";
    Priority = 0.5;
    Changefreq = "never";
}
}
// Generation
XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
// Adding an empty namespace prefix suppresses the default xsi/xsd namespace declarations in the output.
ns.Add("", "");
var urlSet = new Urlset();
urlSet.Url.Add(new Url { Loc = "https://www.sample.com" });
urlSet.Url.Add(new Url { Loc = "https://www.sample.com/page1" });
XmlSerializer serializer = new XmlSerializer(typeof(Urlset));
string utf8;
using (StringWriter writer = new Utf8StringWriter())
{
serializer.Serialize(writer, urlSet, ns);
utf8 = writer.ToString();
Console.WriteLine(utf8);
}
// Deserialization - useful for modifications and deletions
XmlSerializer serializer = new XmlSerializer(typeof(Urlset));
var deserializedUrlSet = (Urlset)serializer.Deserialize(new StringReader(utf8));
// Here utf8 is a string with the xml, refer to the overloads of Deserialize for other ways of passing XML
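The Utf8StringWriter used in the generation snippet is not a built-in type; a common minimal implementation (shown here as a sketch) is a StringWriter that reports UTF-8 as its encoding, so the XML declaration says encoding="utf-8" instead of "utf-16":
using System.IO;
using System.Text;

public class Utf8StringWriter : StringWriter
{
    // StringWriter reports UTF-16 by default; overriding Encoding changes the declaration to UTF-8.
    public override Encoding Encoding => Encoding.UTF8;
}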
This blog post is about some monitoring and alerting tips for AWS workloads.
1. AWS Console logins – root or IAM users
2. SSH into an EC2 instance
The two items above are considered primary. In addition, the following monitoring is necessary:
3. What actions were performed by users and/or AWS, such as launching EC2 instances (manually or via autoscaling) or configuring Route53, Security Groups, etc.
4. Web logs, Load Balancer logs and CloudFront logs, in the rare case of DDoS attacks by the baddies.
5. Application logs
6. Database logs
7. System logs
In the next few weeks, I will be writing blog posts, or even recording live videos / tutorials, on how to monitor and alert for items 1, 2 and 3. Some of these are based on using existing systems, and in some cases I will show both manual and programmatic approaches (C# being my preferred language of choice).
I will also share some blog posts on how to ingest logs into AWS CloudWatch (5 GB of ingestion free, plus some other costs) and Grafana (50 GB of ingestion free), and discuss the advantages and disadvantages of both.
I am implementing these as part of implementing the NIST cybersecurity framework at ALight Technology And Services Limited. I like sharing my knowledge with others as I come across and learn new things – and sometimes existing knowledge, or a blend of both, when appropriate.
While brainstorming about something, a small idea came to my mind. People who read this blog post will either call me stooooopid or say it is a nice idea.
Anyway, the point is, we use logging for various purposes – mostly for troubleshooting. Very verbose logs are a nightmare in terms of performance, storage, retrieval and digging through for the right information. On the other hand, troubleshooting issues becomes a pain when there is inadequate information in the logs.
What if we log Info and above under normal circumstances, and Trace and/or Debug only in certain conditions, such as unexpected exceptions or errors?
Here is a brief overview of how this might be implemented – the trade-off is slight memory pressure.
Collect Trace and/or Debug entries into an in-memory log; for example, if using NLog, use the Memory target.
Have a static method that writes the logs from the Memory target into a different log target such as File / Database, etc.
Under the specific conditions, such as an exception, call the static method; in ASP.NET you could even implement an exception filter to do the same.
This might be a win-win scenario, i.e. collecting detailed information in case of unexpected exceptions and errors, and normal logging in every other scenario. Because a memory target is being used, the drawbacks are a very small performance hit and slightly higher memory usage.
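Here is a minimal sketch of the idea using NLog, assuming a MemoryTarget named "memory" has been configured to capture Trace/Debug entries (the target name and file path below are just illustrative):
using NLog;
using NLog.Targets;

public static class DetailedLogFlusher
{
    // Call this from a catch block, or from an ASP.NET exception filter.
    public static void FlushDetailedLogs()
    {
        // Find the in-memory target that has been buffering Trace/Debug entries.
        var memoryTarget = LogManager.Configuration?.FindTargetByName<MemoryTarget>("memory");
        if (memoryTarget == null) return;

        // Persist the buffered entries only because something went wrong, then clear the buffer.
        System.IO.File.AppendAllLines(@"C:\temp\detailed.log", memoryTarget.Logs);
        memoryTarget.Logs.Clear();
    }
}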
I would love to know how other developers are implementing or handling such use cases.
REST – Representational State Transfer – is the most common way of data communication and is based on HTTP 1.1. The data formats are generally XML or JSON. HTTP status codes are usually used for statuses, and sometimes the payload carries the statuses as well.
gRPC – Google’s Remote Procedure Call – is a more modern method and is based on HTTP/2. Because of HTTP/2, some older software or browsers might not support it, but gRPC is the way forward. The advantages are several; I am mentioning some of them here:
High-performance serialization and deserialization based on the protobuf binary format.
The data size is much smaller compared with JSON / XML.
Server-to-client communication (over the existing connection), client-to-server communication, streaming from server to client, streaming from client to server, and even duplex (bi-directional) communication.
Efficient use of the network, i.e. because gRPC is based on HTTP/2, a new network connection need not be opened for every call.
.NET has fully embraced gRPC and supports modern code generation for it. In further blog posts, I will explain and provide some code samples for using gRPC in .NET.
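In the meantime, here is a minimal sketch of what a gRPC call looks like from a .NET client, assuming a Greeter service whose Greeter / HelloRequest types are generated by the gRPC tooling from a .proto file (they are not defined in this post):
using Grpc.Net.Client;

// Open an HTTP/2 channel once and reuse it for many calls.
using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);
var reply = await client.SayHelloAsync(new HelloRequest { Name = "World" });
Console.WriteLine(reply.Message);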
I personally have not performed any speed tests comparing REST vs gRPC, but I did use gRPC in a micro-service architecture application and found the performance of gRPC significantly higher than REST.
Dependency Injection is a software development pattern where instead of directly instantiating objects, the objects required by a class are passed in. This helps with maintaining code flexibility, writing unit test cases etc…
The first and foremost thing is to define interfaces and then write implementations. This way, the consuming code only needs to know about the methods to be invoked, without worrying about the implementation. Software known as a Dependency Injection container takes care of instantiating the actual objects, as long as the bindings are defined.
This blog post is not about Dependency Injection or Unit Tests but more about how to use Dependency Injection in ASP.Net MVC Core. ASP.Net MVC Core comes with an in-built DI container and supports constructor-based injection i.e instances are passed into the constructor of the consuming class.
There are 3 scopes for objects:
Transient: Every time a class needs an object, a new instance of the requested type is instantiated and passed in. For example, if there are 3 classes that need an instance of IService, each class receives its own instance every time, even if the three classes are used as part of the same request/response.
Scoped: One object of a particular type is created per request/response, and the same object is passed into every class that requests it within that request/response cycle.
Singleton: One instance of the class is instantiated for the entire lifetime of the application, and the same instance is passed to every class in every request/response cycle.
The use cases for each would vary. Scoped is the most commonly used lifetime, i.e. one object of a given type shared by every class within the same request/response cycle.
Singletons are useful in cases such as IConfiguration, where the same instance can be passed around for getting configuration information rather than having multiple instances.
Interfaces and implementation classes can be registered by calling methods such as AddTransient, AddScoped and AddSingleton on IServiceCollection, for example:
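A minimal sketch, assuming hypothetical IEmailService / IUnitOfWork / ICacheProvider abstractions and their implementations, registered in Program.cs (.NET 6+ style):
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddTransient<IEmailService, EmailService>();   // a new instance every time one is requested
builder.Services.AddScoped<IUnitOfWork, UnitOfWork>();          // one instance per request/response cycle
builder.Services.AddSingleton<ICacheProvider, CacheProvider>(); // one instance for the application lifetime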
ViewComponents are pretty much like partial views but slightly more useful. ViewComponents help in rendering parts of a web page that can be re-used across the website, and a ViewComponent's output can be cached. This blog article discusses creating ViewComponents, with a caching example.
A ViewComponent is a public class with a ViewComponent suffix, such as HeaderViewComponent or MenuViewComponent. The class can be decorated with the [ViewComponent] attribute, or it can inherit from the ViewComponent class or from any other class that is itself a ViewComponent – for example, some kind of BaseViewComponent.
A ViewComponent must have one method that the runtime invokes – either:
async Task<IViewComponentResult> InvokeAsync()
or
IViewComponentResult Invoke()
By default, the runtime searches for the view in the following paths:
/Views/{Controller Name}/Components/{View Component Name}/{View Name}
/Views/Shared/Components/{View Component Name}/{View Name}
/Pages/Shared/Components/{View Component Name}/{View Name}
The default view name is Default, but an explicit view name can be passed to the View() method, as in the sample below.
A ViewComponent gets invoked from a .cshtml file by using @await Component.InvokeAsync().
The call to Component.InvokeAsync() can be wrapped inside the <cache> tag helper for caching.
With the concepts discussed above, let’s look at a code sample. Assuming you have an ASP.NET MVC Core test project open, add a new class named TestViewComponent in TestViewComponent.cs.
using Microsoft.AspNetCore.Mvc;
namespace TestProject.ViewComponents
{
public class TestViewComponent : ViewComponent
{
public async Task<IViewComponentResult> InvokeAsync()
{
return await Task.FromResult((IViewComponentResult)View("Test"));
}
}
}
Now, under Views/Shared, create a folder named Components. Under Views/Shared/Components, create another folder named Test. The Views/Shared/Components/Test folder can now contain the views for TestViewComponent. Create a new Test.cshtml under Views/Shared/Components/Test and put some random HTML content in it:
<p>Hello from TestViewComponent.</p>
Now, somewhere in Views/Home/Index.cshtml, place the following invocation:
@(await Component.InvokeAsync("Test"))
If you need to cache the output, wrap the invocation inside the <cache> tag helper, for example:
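(The 10-minute expiry below is just an illustrative value.)
<cache expires-after="@TimeSpan.FromMinutes(10)">
    @(await Component.InvokeAsync("Test"))
</cache>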
As mentioned in several earlier blog posts, I have been building PodDB on the Microsoft .NET platform and Solr. Solr is built on top of Apache Lucene.
Lucene.Net is a very high-performance .NET port of Lucene for working with Lucene indexes directly, while SolrNet is a library for working with Solr. Solr is very customizable, fault-tolerant, has several additional features available out of the box and is built on top of Lucene. Working through SolrNet can be a bit slower because all the API calls are routed via a REST API, with the usual overhead of establishing network connections and serializing/deserializing JSON or XML.
Over the past few days, I have been working on a small subset of documents (approximately 275 – 300; the same documents will be part of the Alpha release) and trying to tweak the settings for optimal search relevance. This required trying various Solr configurations, re-indexing data, etc. The very first version of the data ingestion component (which does much more pre-processing than just ingesting into Solr) used to take approximately 10 minutes. Now the performance has been optimized and the ingestion happens within 15 seconds – roughly a 40x (about 4,000%) gain, achieved entirely in code.
The trick used was one of the oldest tricks in the book – batch processing. Instead of writing one document at a time into a MySQL database and into Solr, I rewrote the application to ingest in batches, and the application became much faster.
Batching with multi-threading might be even faster.
In other words, instead of calling solr.Add() for each document, create the documents, hold them in a list and call solr.AddRange().
Similarly, batch the solr.Commit() and solr.Optimize() calls, i.e. call those methods once for every 1,000 or so documents rather than for every document (see the sketch after the snippet below).
Assuming doc1, doc2 and doc3 are Solr documents that need to be written. For example:
//NO
solr.Add(doc1);
solr.Add(doc2);
solr.Add(doc3);
//YES
var lst = new List<ENTITY>(); // ENTITY is your Solr document type
lst.Add(doc1);
lst.Add(doc2);
lst.Add(doc3);
solr.AddRange(lst);
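Here is a rough sketch of batching the Commit() calls as well, assuming solr is an ISolrOperations<ENTITY> instance and allDocs holds the full list of documents to ingest (the names are illustrative):
const int batchSize = 1000;
for (int i = 0; i < allDocs.Count; i += batchSize)
{
    var batch = allDocs.Skip(i).Take(batchSize).ToList();
    solr.AddRange(batch);
    solr.Commit();   // one commit per batch instead of per document
}
solr.Optimize();     // optimize once at the end of the ingestion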
I like to share knowledge; I hope this blog post helps someone.
Dapper is a micro ORM tool with very high performance and very decent features. Dapper is one of my favourite tools. Dapper has excellent documentation here and here.
Dapper supports SQL statements and Stored Procedures. My preference is usually Stored Procs over SQL statements.
Dapper extends the IDbConnection interface with extension methods, so several methods become available on the connection object.
If you have a class Person with Id and Name, Dapper can handle mapping:
using (var connection = new MySqlConnection(connectionString))
{
    await connection.OpenAsync();
    var persons = await connection.QueryAsync<Person>("SELECT Id, Name FROM Person WHERE ....");
}
The above code snippet shows how to query the database and get an IEnumerable of Person objects.
There are several other methods, each with both synchronous and asynchronous versions, and some with generic versions for mapping to objects – such as Query / QueryAsync, QueryFirstOrDefault, QuerySingle, Execute, ExecuteScalar and QueryMultiple.
Passing in parameters is also very straightforward; parameters can be passed in as an existing object or as an anonymous object such as new { Id = 1 }. For example:
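(A rough sketch reusing the connection from the snippet above; the GetPersonsByName stored procedure is hypothetical.)
using System.Data; // for CommandType

// Parameterized SQL with an anonymous object.
var person = await connection.QueryFirstOrDefaultAsync<Person>(
    "SELECT Id, Name FROM Person WHERE Id = @Id",
    new { Id = 1 });

// Calling a stored procedure.
var persons = await connection.QueryAsync<Person>(
    "GetPersonsByName",
    new { Name = "John" },
    commandType: CommandType.StoredProcedure);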
Even transactions and lists of objects are supported, and multiple result sets are supported as well. If you are familiar with ADO.NET, using Dapper is easy and straightforward, with excellent performance and minimal overhead.
Entity Framework is a close second choice when dealing with a database-first approach. When using a code-first approach, Entity Framework would be the preferred choice.