As a knowledge worker and I.T developer, I need to stay up to date with the latest technologies, and so do millions of other I.T developers. I cannot cater to the needs of every knowledge worker or I.T developer, but for .Net developers I am planning to post recommended reads almost every day. The links would be to 3rd party blog posts, and the topics would mostly cover I.T computing, cloud computing, .Net, web development etc…
I have explained sitemaps in an earlier blog post – Sitemaps an intro! In this blog post we will look at some code snippets for generating XML sitemaps, using two different approaches. This post is about sample code snippets; read the comments in the code.
I might consider creating a small library for generating XML sitemaps and open sourcing the library.
The general structure of a valid XML document contains an XML declaration i.e the <?xml….>, followed by exactly one root node, which in turn contains the other child nodes.
With an XML sitemap, we would have the XML declaration and the <urlset> root node. The <urlset> root node would have one or more <url> child nodes, up to a maximum of 50,000. Each <url> node must have exactly one mandatory <loc> node. The <lastmod>, <changefreq> and <priority> nodes are optional; if present, each of them can appear at most once per <url> node.
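For reference, a minimal sitemap with a single URL (example.com used as a placeholder) looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2023-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>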
Business logic validations would be as follows (a sample validation sketch in C# follows the list):
The <url> nodes can be between 1 and 50,000.
URLs must be unique.
<loc> is mandatory and the value must be a valid http or https URL.
If present, <lastmod> must be in the valid W3C Datetime format; there can be 0 or 1 <lastmod> per <url> node.
<changefreq> can be 0 or 1 per <url> node and must be one of the following:
always
hourly
daily
weekly
monthly
yearly
never
<priority> can be 0 or 1 per <url> node and must be a valid number between 0 and 1 such as 0.1 or 0.2 etc…
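Here is a minimal sketch of how these validations might look in C#, using the Url class defined later under Approach – 2 (illustrative only, not a complete implementation):
using System;
using System.Collections.Generic;
using System.Linq;

public static class SitemapValidator
{
    private static readonly HashSet<string> ValidChangeFreqs = new HashSet<string>
    {
        "always", "hourly", "daily", "weekly", "monthly", "yearly", "never"
    };

    public static void Validate(IReadOnlyCollection<Url> urls)
    {
        // Between 1 and 50,000 <url> nodes per sitemap.
        if (urls.Count < 1 || urls.Count > 50000)
            throw new ArgumentException("A sitemap must contain between 1 and 50,000 URLs.");

        // URLs must be unique.
        if (urls.Select(u => u.Loc).Distinct().Count() != urls.Count)
            throw new ArgumentException("Sitemap URLs must be unique.");

        foreach (var url in urls)
        {
            // <loc> is mandatory and must be a valid http or https URL.
            if (!Uri.TryCreate(url.Loc, UriKind.Absolute, out var uri)
                || (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps))
                throw new ArgumentException($"Invalid <loc>: {url.Loc}");

            // <changefreq>, if present, must be one of the allowed values.
            if (url.Changefreq != null && !ValidChangeFreqs.Contains(url.Changefreq))
                throw new ArgumentException($"Invalid <changefreq>: {url.Changefreq}");

            // <priority> must be a number between 0.0 and 1.0.
            if (url.Priority < 0.0 || url.Priority > 1.0)
                throw new ArgumentException($"Invalid <priority>: {url.Priority}");
        }
    }
}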
Approach – 1:
In the following code snippet we would not be dealing with the business logic; this is purely about creating the XML.
using System;
using System.Xml;

XmlDocument xmlDoc = new XmlDocument();
var declaration = xmlDoc.CreateXmlDeclaration("1.0", "UTF-8", "yes");
xmlDoc.AppendChild(declaration);

// Create every element in the sitemap namespace; otherwise the child
// nodes would be serialized with an empty xmlns="" and the sitemap
// would not validate.
const string sitemapNs = "http://www.sitemaps.org/schemas/sitemap/0.9";
XmlNode urlSet = xmlDoc.CreateElement("urlset", sitemapNs);
var url = xmlDoc.CreateElement("url", sitemapNs);
var loc = xmlDoc.CreateElement("loc", sitemapNs);
loc.InnerText = "https://www.google.com";
url.AppendChild(loc);
var lastMod = xmlDoc.CreateElement("lastmod", sitemapNs);
// W3C datetime: sortable ("s") format plus the UTC designator.
lastMod.InnerText = $"{DateTime.UtcNow.ToString("s")}Z";
url.AppendChild(lastMod);
urlSet.AppendChild(url);
xmlDoc.AppendChild(urlSet);
// Append more <url> nodes.
// Move the logic of generating url nodes into a separate method, call the method repetitively, apply business logic etc...
xmlDoc.Save(@"C:\temp\sitemap.xml");
Approach – 2:
In this approach too we won’t be applying the business logic; this is a sample code snippet.
UrlSet.cs
using System.Collections.Generic;
using System.Xml.Serialization;

[XmlRoot(ElementName = "urlset", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
public class Urlset
{
    [XmlElement(ElementName = "url", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
    public List<Url> Url { get; set; }

    [XmlAttribute(AttributeName = "xmlns")]
    public string Xmlns { get; set; }

    public Urlset()
    {
        Xmlns = "http://www.sitemaps.org/schemas/sitemap/0.9";
        Url = new List<Url>();
    }
}
Url.cs
using System;
using System.Xml.Serialization;

public class Url
{
    [XmlElement(ElementName = "loc", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
    public string Loc { get; set; }

    [XmlElement(ElementName = "lastmod", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
    public string Lastmod { get; set; }

    [XmlElement(ElementName = "changefreq", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
    public string Changefreq { get; set; }

    [XmlElement(ElementName = "priority", Namespace = "http://www.sitemaps.org/schemas/sitemap/0.9")]
    public double Priority { get; set; }

    public Url()
    {
        // Default to the W3C datetime format the sitemap spec expects
        // (ToLongDateString() would not validate).
        Lastmod = $"{DateTime.UtcNow.ToString("s")}Z";
        Priority = 0.5;
        // <changefreq> values must be lowercase per the sitemap protocol.
        Changefreq = "never";
    }
}
// Generation
using System;
using System.IO;
using System.Xml.Serialization;

XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
// Adding an empty namespace
ns.Add("", "");
var urlSet = new Urlset();
urlSet.Url.Add(new Url { Loc = "https://www.sample.com" });
urlSet.Url.Add(new Url { Loc = "https://www.sample.com/page1" });
XmlSerializer serializer = new XmlSerializer(typeof(Urlset));
string utf8;
using (StringWriter writer = new Utf8StringWriter())
{
    serializer.Serialize(writer, urlSet, ns);
    utf8 = writer.ToString();
    Console.WriteLine(utf8);
}
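Note: Utf8StringWriter is not a framework class; presumably it is a small helper along these lines, which makes StringWriter report UTF-8 so the XML declaration says encoding="utf-8":
using System.IO;
using System.Text;

public class Utf8StringWriter : StringWriter
{
    // StringWriter reports UTF-16 by default; override so the serialized
    // XML declaration reads encoding="utf-8".
    public override Encoding Encoding => Encoding.UTF8;
}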
// Deserialization - useful for modifications and deletions
// Reuses the serializer instance created above (or create a new
// XmlSerializer(typeof(Urlset))). Note the cast back to Urlset.
var deserializedUrlSet = (Urlset)serializer.Deserialize(new StringReader(utf8));
// Here utf8 is a string with the XML; refer to the overloads of Deserialize for other ways of passing XML.
Several search engines support sitemaps. Sitemaps are either XML documents or plain text files that contain a list of URLs that need to be indexed by search engines. The text file version is simple and straightforward. I will discuss both the text and XML versions.
Text:
A simple text file that contains one URL per line.
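For example (example.com used as a placeholder):
https://www.example.com/
https://www.example.com/page1
https://www.example.com/page2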
XML:
lastmod – This value is optional and uses one of the W3C Datetime formats:
Year: YYYY (eg 1997)
Year and month: YYYY-MM (eg 1997-07)
Complete date: YYYY-MM-DD (eg 1997-07-16)
Complete date plus hours and minutes: YYYY-MM-DDThh:mmTZD (eg 1997-07-16T19:20+01:00)
Complete date plus hours, minutes and seconds: YYYY-MM-DDThh:mm:ssTZD (eg 1997-07-16T19:20:30+01:00)
Complete date plus hours, minutes, seconds and a decimal fraction of a second: YYYY-MM-DDThh:mm:ss.sTZD (eg 1997-07-16T19:20:30.45+01:00)
where:
YYYY = four-digit year
MM = two-digit month (01=January, etc.)
DD = two-digit day of month (01 through 31)
hh = two digits of hour (00 through 23) (am/pm NOT allowed)
mm = two digits of minute (00 through 59)
ss = two digits of second (00 through 59)
s = one or more digits representing a decimal fraction of a second
TZD = time zone designator (Z or +hh:mm or -hh:mm)
priority – This value is optional and needs to be between 0 and 1.0. The default value is 0.5.
There are some restrictions on the number of entries and the file size: the maximum number of URLs per file is 50,000 and the maximum size is 50 MB (uncompressed).
I am planning to provide some C# code snippets for generating XML sitemaps in the next few days. I had some code that generated sitemaps for the alpha version of PodDB, but now I am integrating sitemap generation into the application i.e the codebase would handle every update, removal and addition. This is planned for version 0.2.3 (New Year’s release).
As part of implementing the NIST Cyber Security Framework at ALight Technology And Services Limited, one of the important things to audit / log was the database server. I am currently ingesting some logs into CloudWatch. In a future blog post / YouTube video, I would show how to ingest logs into CloudWatch.
As a one-person company I do multiple things, and now I dug into some DBA work 🙂
This blog post is about writing an audit log for MariaDB. In this post the MariaDB Audit Plugin would be enabled and configured.
Update the conf file, usually /etc/mysql/mariadb.cnf on Ubuntu, but the path could be different. Add the following lines (the [mariadb] section header is included below):
[mariadb]
plugin_load_add = server_audit
server_audit=FORCE_PLUS_PERMANENT
server_audit_file_path=/var/log/mysql/mariadb-audit.log # path to the audit log
server_audit_logging=ON
server_audit_events = 'CONNECT,QUERY,TABLE'
server_audit_file_rotate_size=1000000 # in bytes
server_audit_file_rotations=10
That’s all. The variables are pretty much self-explanatory, but there are a few more variables that can be used. Here is a link explaining them: Audit Plugin Options.
plugin_load_add – loads the plugin.
server_audit – we want the plugin to be permanently activated.
server_audit_file_path – Path to the file.
server_audit_logging – ON – we want the logging to happen
server_audit_events – We are logging connection requests, queries including failed queries and the affected tables.
server_audit_file_rotate_size – Maximum log file size before rotating to a new file.
server_audit_file_rotations – Number of older files to keep before deleting.
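After restarting MariaDB, the plugin and its settings can be verified with a query such as:
SHOW GLOBAL VARIABLES LIKE 'server_audit%';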
There is an option for writing into syslog by setting server_audit_output_type = 'syslog', but that’s beyond the scope of the current blog post, and I would prefer having a separate file instead of getting the SQL audit log mixed into syslog – personal preference.
This blog post is about some monitoring and alerting tips for AWS workloads.
1. AWS Console Logins – Root or IAM user
2. SSH into an EC2 instance
The above mentioned are considered primary. In addition, the following monitoring is necessary:
3. What actions were performed by users and/or AWS such as launching EC2 instances (manual or autoscaling) or configuring Route53 or Security Groups etc…
4. Web logs, Load Balancer logs, CloudFront logs in rare cases of DDoS attacks by the baddies.
5. Application logs
6. Database logs
7. System logs
In the next few weeks, I would be writing blog posts or even doing live videos / tutorials on how to monitor and alert for 1, 2 and 3. Some of these are based on using existing systems, and in some cases I would show both manual and programmatic (C# being the preferred language of choice) approaches.
I would also share some blog posts on how to ingest logs into AWS CloudWatch (5 GB ingestion free and some other costs) and Grafana (50 GB ingestion free), and discuss the advantages and disadvantages of both.
As part of implementing the NIST cyber security framework at ALight Technology And Services Limited, I am implementing these. I like sharing my knowledge with others as I come across and learn new things – and even existing knowledge when appropriate, sometimes a blend of the two.
While I have been brainstorming about something, a small idea came to my mind. People who read this blog post would either call me stooooopid or might say nice idea.
Anyway, the point is, we use logging for various purposes – mostly for troubleshooting. Very verbose logs are a nightmare in terms of performance, storage, retrieval and digging through for the right information. At the same time, troubleshooting sometimes becomes a pain because of inadequate information in logs.
What if we log Info and above under normal circumstances, and trace and / or debug only under certain conditions such as unexpected exceptions or errors?
Here is a brief overview of how this might be implemented – at the cost of slight memory pressure.
Collect trace and / or debug logs into a memory log i.e for example if using NLog, use the Memory target.
Have some static method that writes the logs from the Memory target into a different log target such as File / Database etc…
In specific conditions such as an exception, call the static method; in ASP.Net you can even implement an exception filter to do the same, as shown in the sketch below.
This might be a win-win scenario i.e collecting detailed information in case of unexpected exceptions and errors, and normal logging in every other scenario. Because a memory target is being used, the drawbacks are a very small performance hit and slightly higher memory usage.
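Here is a minimal sketch with NLog, assuming a MemoryTarget named "memory" has been configured for Trace / Debug (the target name and file path are illustrative assumptions):
using System.IO;
using NLog;
using NLog.Targets;

public static class DetailedLogDumper
{
    // Call this from an exception handler (or an ASP.Net exception filter)
    // to persist the buffered trace / debug lines.
    public static void Dump(string path)
    {
        // "memory" is the name given to the MemoryTarget in the NLog config;
        // MaxLogsCount on the target can be used to bound memory usage.
        var memory = LogManager.Configuration?.FindTargetByName<MemoryTarget>("memory");
        if (memory == null)
            return;

        File.AppendAllLines(path, memory.Logs);
        memory.Logs.Clear(); // avoid re-writing the same lines on the next dump
    }
}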
I would love to know how other developers are implementing or handling such use cases.
I personally am NOT a DBA. But as a one-person company I do everything from product planning, feature prioritization, architecture, documentation, development, deployment, DBA work and monitoring. Below are some MySQL / MariaDB commands that I found useful when doing DBA activities. This is more of a blog post for my own reference.
Connect to database:
mysql -u root -p
Get a list of databases:
show databases;
Use a particular database:
use <DATABASE_NAME>;
Show tables in the current database:
show tables;
Get a list of all the stored procedures in all the databases:
SHOW PROCEDURE STATUS;
or
SELECT
routine_schema as "Database",
routine_name
FROM
information_schema.routines
WHERE
routine_type = 'PROCEDURE'
ORDER BY
routine_schema ASC,
routine_name ASC;
Get a list of stored procedures in all the databases that match a certain string pattern.
SHOW PROCEDURE STATUS LIKE '%pattern%';
or
SELECT
routine_schema as "Database",
routine_name
FROM
information_schema.routines
WHERE
routine_type = 'PROCEDURE'
and routine_name LIKE '%pattern%'
ORDER BY
routine_schema ASC,
routine_name ASC;
The second select statement can be modified to limit the results to a certain database by including a filter on “routine_schema” in the where clause.
SELECT
routine_schema as "Database",
routine_name
FROM
information_schema.routines
WHERE
routine_type = 'PROCEDURE'
and routine_name LIKE '%pattern%'
and routine_schema = 'database_name'
ORDER BY
routine_schema ASC,
routine_name ASC;
Hoping this helps someone!
REST – Representational State Transfer is the most common way of data communication and is based on HTTP 1.1. The data formats are generally XML or JSON. HTTP status codes are usually used for conveying status, and sometimes the payload carries the status as well.
gRPC – Google’s Remote Procedure Call is a more modern method based on HTTP/2. Because of HTTP/2, some older software or browsers might not have support, but gRPC is the way forward. The advantages are several; I am mentioning some of them here:
A high-performance serializer and deserializer based on the protobuf binary format.
The data size is much smaller compared with JSON / XML.
Server-to-client communication (on an existing connection), client-to-server communication, streaming from server to client (on an existing connection), streaming from client to server, and even duplex communication.
Efficient usage of networking i.e because gRPC is based on HTTP/2, network connections need not be opened for every call.
.Net has fully embraced and supports modern code generation for gRPC. In further blog posts, I will explain and provide some code samples for using gRPC in .Net.
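As a small preview, a client call using the Grpc.Net.Client package might look like the following sketch (assuming the Greeter service and the generated classes from the default ASP.Net Core gRPC project template):
using System;
using System.Threading.Tasks;
using Grpc.Net.Client;

// Greeter and HelloRequest are generated from greet.proto by Grpc.Tools.
using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);
var reply = await client.SayHelloAsync(new HelloRequest { Name = "World" });
Console.WriteLine(reply.Message);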
I personally have not performed any speed tests comparing REST vs gRPC, but I did use gRPC in a micro-service architecture application and found the performance of gRPC significantly better than REST.
Dependency Injection is a software development pattern where, instead of directly instantiating objects, the objects required by a class are passed in. This helps with maintaining code flexibility, writing unit test cases etc…
The first and foremost thing is to define interfaces and then write implementations. This way, the consuming code only needs to know about the methods to be invoked, without worrying about the implementation. Software known as a Dependency Injection container takes care of instantiating the actual objects as long as the bindings are defined.
This blog post is not about Dependency Injection or Unit Tests but more about how to use Dependency Injection in ASP.Net MVC Core. ASP.Net MVC Core comes with an in-built DI container and supports constructor-based injection i.e instances are passed into the constructor of the consuming class.
There are 3 scopes for objects:
Transient: Every time a class needs an object, a new instance of the requested object is instantiated and passed in. For example, if there are 3 classes that need an instance of IService, each class will receive its own instance every time, even if the three classes are used as part of the same request/response.
Scoped: One object of a particular type is created per request/response, and the same object is passed into every class that requests the object while processing that request/response cycle.
Singleton: One instance of the class is instantiated for the entire lifetime of the application, and the same instance is passed to every class in every request/response cycle.
The use cases for each would vary. Scoped is the most common choice i.e one object of a given type shared by every class in the same request/response cycle.
Singletons are useful in cases such as IConfiguration, where the same instance can be passed around for getting config information rather than having multiple instances.
Interfaces and implementation classes can be registered by calling the AddTransient, AddScoped or AddSingleton methods on IServiceCollection, for example:
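A minimal sketch (IService / MyService are hypothetical names used for illustration; in practice you would pick exactly one lifetime per registration):
using Microsoft.Extensions.DependencyInjection;

// Typically in Program.cs / Startup.ConfigureServices, where
// 'services' is the IServiceCollection.
// Pick one lifetime per interface; all three are shown only for comparison.
services.AddTransient<IService, MyService>();  // new instance every resolution
services.AddScoped<IService, MyService>();     // one instance per request/response
services.AddSingleton<IService, MyService>();  // one instance for the application's lifetime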
ViewComponents are pretty much like partial views but slightly more useful. ViewComponents help in rendering parts of a web page that can be re-used across the website. Also, a ViewComponent’s output can be cached. This blog article is going to discuss creating ViewComponents, with a caching example.
A ViewComponent is a public class that has a ViewComponent suffix, such as HeaderViewComponent or MenuViewComponent etc… A ViewComponent class can be decorated with the [ViewComponent] attribute, or can inherit from the ViewComponent class or any other class that’s a ViewComponent – for example, some kind of a BaseViewComponent.
A ViewComponent must have one method that gets invoked, with one of the following signatures:
async Task<IViewComponentResult> InvokeAsync()
or
IViewComponentResult Invoke()
The runtime by default searches for the view in the following paths:
/Views/{Controller Name}/Components/{View Component Name}/{View Name}
/Views/Shared/Components/{View Component Name}/{View Name}
/Pages/Shared/Components/{View Component Name}/{View Name}
A ViewComponent gets invoked from cshtml by using: Component.InvokeAsync()
The call to Component.InvokeAsync() can be wrapped inside <cache> tag helper for caching.
With the concepts discussed above, let’s look at a code sample. Assuming you have an ASP.Net MVC Core test project opened, add a new class named TestViewComponent in TestViewComponent.cs.
using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

namespace TestProject.ViewComponents
{
    public class TestViewComponent : ViewComponent
    {
        public async Task<IViewComponentResult> InvokeAsync()
        {
            // Renders Views/Shared/Components/Test/Test.cshtml.
            return await Task.FromResult((IViewComponentResult)View("Test"));
        }
    }
}
Now under Views/Shared create a folder and name the folder Components. Under Views/Shared/Components, create another folder named Test. The Views/Shared/Components/Test folder can now contain the views for the TestViewComponent. Create a new Test.cshtml under Views/Shared/Components/Test and put some random HTML content in it, for example:
<p>Hello from TestViewComponent.</p>
Now somewhere on Views/Home/Index.cshtml place the following invocation:
@(await Component.InvokeAsync("Test"))
If you need to cache the output, wrap the invocation inside the <cache> tag helper, for example:
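(The 10-minute expiry below is an arbitrary illustrative value.)
<cache expires-after="@TimeSpan.FromMinutes(10)">
    @(await Component.InvokeAsync("Test"))
</cache>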