If there is an existing database and you need to use Entity Framework, use the Database First approach: have Entity Framework generate the DbContext and the entity classes.
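With EF Core this can be done by scaffolding. As a hedged example, assuming the dotnet-ef tool is installed and the Microsoft.EntityFrameworkCore.Design package is referenced (the connection string, provider and output folder below are placeholders), a command along these lines generates the DbContext and entity classes:
dotnet ef dbcontext scaffold "Server=.;Database=MyDb;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -o Models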
Symmetric encryption is an encryption technique where the same key is used for encryption and decryption. Asymmetric encryption, in contrast, uses different keys: a public key for encryption and the associated private key for decryption.
TripleDES is an algorithm for implementing symmetric encryption.
TripleDES uses a Key and an IV (initialization vector).
// requires System.Security.Cryptography and System.IO
public string EncryptTripleDES(string plainText, byte[] Key, byte[] IV)
{
    byte[] encrypted;
    using (TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider())
    {
        // create an encryptor from the supplied Key and IV
        ICryptoTransform encryptor = tdes.CreateEncryptor(Key, IV);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
            {
                // write the plain text through the CryptoStream into the MemoryStream
                using (StreamWriter sw = new StreamWriter(cs))
                    sw.Write(plainText);
                encrypted = ms.ToArray();
            }
        }
    }
    // return the cipher bytes as a Base64 string
    return Convert.ToBase64String(encrypted);
}
The above code snippet is for encryption.
public string DecryptTripleDES(string cipherText, byte[] Key, byte[] IV)
{
    string plaintext = null;
    // convert the Base64 cipher text back into bytes
    var cipherBytes = Convert.FromBase64String(cipherText);
    using (TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider())
    {
        // create a decryptor from the same Key and IV used for encryption
        ICryptoTransform decryptor = tdes.CreateDecryptor(Key, IV);
        using (MemoryStream ms = new MemoryStream(cipherBytes))
        {
            using (CryptoStream cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
            {
                // read the decrypted text out of the CryptoStream
                using (StreamReader reader = new StreamReader(cs))
                    plaintext = reader.ReadToEnd();
            }
        }
    }
    return plaintext;
}
The above code snippet is for decryption.
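A minimal usage sketch of the two methods above (the key/IV generation here is my assumption, not part of the original snippets): the TripleDES provider generates a random Key and IV, which are then used for both encryption and decryption.
using (var tdes = new TripleDESCryptoServiceProvider())
{
    // generate a random Key and IV; in practice these need to be stored / shared securely
    tdes.GenerateKey();
    tdes.GenerateIV();
    var cipherText = EncryptTripleDES("Hello!", tdes.Key, tdes.IV);
    var plainText = DecryptTripleDES(cipherText, tdes.Key, tdes.IV);
    Console.WriteLine(plainText); // prints: Hello!
}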
My open-source tool LightKeysTransfer uses TripleDES and the accompanying source code can be found at:
The RSA classes in C# (such as RSACryptoServiceProvider) provide a way for asymmetric encryption and decryption. The encryption happens using the public key; the encrypted data can be decrypted only with the associated private key.
The implementation supports key sizes from 512 bits to 16,384 bits. The larger the key size, the more secure but the slower the operations. The amount of data that can be encrypted in a single operation also depends on the key size.
The public key can be exported and passed around for encrypting data. The private key needs to be properly secured.
My open source project LightKeysTransfer uses RSA for encryption and decryption. CryptHelper.cs has the code implementation.
// RSA.Create returns the platform's default RSA implementation with the requested key size
var rsa = RSA.Create(2048);
var rsa2 = RSA.Create(2048);
// code for exporting the public key (false = exclude the private parameters)
var publicKey = rsa.ToXmlString(false);
// code for importing the public key on a different instance
rsa2.FromXmlString(publicKey);
// code for getting bytes from a string; there are several other ways of converting text into bytes
var plainBytes = Encoding.UTF8.GetBytes("Hello!");
// code for encrypting with the instance that holds only the public key
var encryptedBytes = rsa2.Encrypt(plainBytes, RSAEncryptionPadding.OaepSHA512);
// code for decrypting with the instance that holds the private key
var decryptedBytes = rsa.Decrypt(encryptedBytes, RSAEncryptionPadding.OaepSHA512);
In a different blog post in the next few days, I will post about TripleDES. I am implementing a combination of TripleDES and RSA for encrypting and decrypting slightly larger data, because larger data cannot be encrypted directly using RSA!
This script has been slightly modified for the reasons mentioned below:
The above script uses a pre-defined dh.pem (my version generates new random 2048-bit DH parameters).
The above script generates the client certificate without a password (my version mandates a password and allows specifying the passwords in a separate file).
The above script generates certificates with 10-year validity (my version generates certificates with 1-day validity, because I plan to regenerate certificates often – more like one-time-use certificates, similar to OTPs).
In the above snippets, infile and outfile contain the same password twice, on two different lines. Replace the password with whatever is necessary, or use some tool or utility for generating the password and writing it into infile and outfile.
Now the C# part:
Using C# code, it's very easy to generate random passwords and write them to infile and outfile; a sketch follows.
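A hedged sketch (the file paths and password format are assumptions; per the note above, the same password is written on two lines):
// generate a cryptographically random password
var bytes = new byte[24];
using (var rng = RandomNumberGenerator.Create())
{
    rng.GetBytes(bytes);
}
var password = Convert.ToBase64String(bytes);
// the script expects the password twice, on two separate lines
var contents = password + Environment.NewLine + password + Environment.NewLine;
File.WriteAllText("/path/infile", contents);
File.WriteAllText("/path/outfile", contents);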
The System.Diagnostics.Process class allows executing shell scripts on Linux. Let's look at a code sample:
Process process = new();
process.StartInfo.WorkingDirectory = "/path";
process.StartInfo.FileName = "/path/openvpn-install.sh";
process.StartInfo.Arguments = "";
process.EnableRaisingEvents = true;
process.Exited += Process_Exited;
process.ErrorDataReceived += Process_ErrorDataReceived;
process.OutputDataReceived += Process_OutputDataReceived;
// stream redirection requires UseShellExecute = false (the default on .NET Core / .NET)
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardInput = true;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.RedirectStandardError = true;
process.Start();
// begin asynchronous reads so OutputDataReceived / ErrorDataReceived actually fire
process.BeginOutputReadLine();
process.BeginErrorReadLine();
.
.
.
process.WaitForExit();
void Process_ErrorDataReceived(object sender, DataReceivedEventArgs e)
{
// Do whatever is necessary with e.Data;
}
void Process_OutputDataReceived(object sender, DataReceivedEventArgs e)
{
// Do whatever is necessary with e.Data;
}
void Process_Exited(object? sender, EventArgs e)
{
// Handle code if necessary
}
In the above code snippet, we are executing a shell script located inside the /path directory.
Because we are redirecting StandardInput by setting RedirectStandardInput = true, we can programmatically enter values for the script's prompts as needed (see the sketch below).
Or in the above shell script, the interactive prompts can be removed and pre-defined values can be used.
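For the first approach – writing answers to the script's interactive prompts through the redirected StandardInput – a minimal, hedged sketch (the actual prompts and answers depend on the script; the values below are placeholders):
// StandardInput is available because RedirectStandardInput = true
process.StandardInput.WriteLine("1");          // e.g., answer to the first prompt
process.StandardInput.WriteLine("clientname"); // e.g., a client certificate name
process.StandardInput.Flush();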
Using the above-mentioned script, the C# code snippets, and by having the passwords inside a file, it becomes very easy to generate new server and client certificates and to regenerate certificates.
BTW, the above-mentioned script generates /etc/openvpn/server/server.conf; the following server-config snippets might be of use – if needed, add them manually or update the script.
max-clients n
log /var/log/openvpn/openvpn.log
status /var/log/openvpn/status.txt
max-clients limits the maximum number of simultaneous connections.
log – writes a log file; the verbosity can be controlled using verb. A verb value of 9 means very verbose.
status – a little text file with information about the current client connections.
Most of yesterday and this morning, I have been playing around with this library (Polly) and found it very useful.
When we develop modern web applications, we consume various services such as internal or third-party REST / SOAP / gRPC services. Sometimes, a third-party service might be intermittently unavailable. Polly allows a fluent way of writing resilient code.
There are several different policies available in Polly, but I liked Retry, Timeout, RateLimit, Cache and PolicyWrap.
The extensive documentation is very well written – thanks to the 70 contributors who have put in the effort to make such a useful library and its documentation.
The retry policy allows retrying multiple times; we can even specify how many times to retry, how long to wait before each retry, etc.
Getting started code:
var retryThricePolicy = Policy.Handle<Exception>().Retry(2);
retryThricePolicy.Execute(() =>
{
    DoSomething();
});
In the above code, we defined a policy that retries two more times if an Exception is thrown. We are not introducing any delay, which is not recommended for production, because we would want to use an exponential backoff strategy.
In the next lines, we use the policy to execute the method DoSomething. In reality, we could have a few more lines of code inside Execute().
For introducing a delay between retries, we can use WaitAndRetry() instead of Retry() – see the sketch after this list.
Timeout() is for handling timeouts.
Cache() is for caching the results of executions.
RateLimit is for handling rate limits, i.e., x number of requests per second or per minute, etc.
PolicyWrap is for combining multiple policies.
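As a hedged sketch of the exponential-backoff idea mentioned above, WaitAndRetry() can compute the delay from the retry attempt number (the waits below – 2, 4 and 8 seconds – are illustrative):
var waitAndRetryPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetry(3, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
waitAndRetryPolicy.Execute(() =>
{
    DoSomething();
});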
By carefully observing and implementing this pattern, a lot of routine boilerplate code can be removed and the code can be made more consistent.
I might make another blog post in the future on the usage of Func and Action, how Polly-style code can be written, and how some routine boilerplate code can be avoided.
In this blog post, I am going to show a very simple example of how to send alerts to a Slack channel using C#.
Add a custom integration in Slack: search for Incoming WebHooks and add the custom integration. Make a note of the URL and, as much as possible, treat the URL as sensitive information and store it in a secret location – the reason being that anyone with the URL can start sending fake messages / spam. But that's beyond the scope of this article.
Now let’s look at some C# code:
// requires Newtonsoft.Json for the JsonProperty attribute
public class Message
{
    [JsonProperty("text")]
    public string Text { get; set; }
}
In the above code, we have defined a class called Message, with a property known as Text for holding the message.
var url = "https://hooks.slack.com/services/xxx/xxxx/xxxx";
var message = new Message { Text = "Hello!" };
using (WebClient client = new WebClient())
{
    NameValueCollection data = new NameValueCollection();
    // the Incoming WebHook expects the JSON in a form field named "payload"
    data["payload"] = JsonConvert.SerializeObject(message);
    var response = client.UploadValues(url, "POST", data);
}
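WebClient works, but it is marked obsolete in newer .NET versions. A hedged alternative sketch using HttpClient (inside an async method), posting the same payload form field:
using (var client = new HttpClient())
{
    var data = new Dictionary<string, string>
    {
        ["payload"] = JsonConvert.SerializeObject(message)
    };
    var response = await client.PostAsync(url, new FormUrlEncodedContent(data));
}
In real code, the HttpClient instance should be reused rather than created per request.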
Very simple and easy to use. As part of the plan to stabilize and prepare for the production launch of WebVeta, I have been reviewing some security measures and alerts which I developed and implemented at my own startup(s) – ALight Technology And Services Limited and ALight Technologies USA Inc. While reviewing, I found some possible security lapses and wanted to close / minimize the security risks and implement some additional internal custom-built intrusion monitoring, alerting and prevention systems. Apart from emails, I thought of adding Slack, and the Slack integration seems very simple and straightforward.
SolrNet is a .NET-based library for interacting with Solr using C#.
Solr is a full-text search server built on top of Apache Lucene, a full-text search engine library.
SolrNet is a C# library that makes it easy to generate the REST calls for interacting with a Solr server.
One of the most important classes is the QueryOptions class. The QueryOptions class allows specifying several options, and some of those options probably need their own blog posts.
For paging the results, the following options can be used:
var pageNumber = 2;
var options = new QueryOptions()
{
    Rows = 10,
    StartOrCursor = new StartOrCursor.Start((pageNumber - 1) * 10)
};
The above code shows getting 10 results starting from the 11th. The pageNumber variable was 2, so (pageNumber - 1) * 10 is 10. The default is 0, i.e., from the beginning.
Another useful option is specifying the Fields to retrieve. Think of this like specifying the columns to retrieve in a SQL statement instead of all columns, i.e., SELECT col1, col2 instead of SELECT *.
var options = new QueryOptions()
{
    Fields = new[] { "col1", "col2" }
};
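For completeness, a hedged sketch of passing QueryOptions to an actual query – MyDoc (a document class mapped with SolrNet attributes) and GetSolrInstance() (however your application obtains its ISolrOperations<MyDoc>, typically via DI) are assumptions:
ISolrOperations<MyDoc> solr = GetSolrInstance(); // hypothetical helper
var results = solr.Query(new SolrQuery("search terms"), options);
foreach (var doc in results)
{
    // work with each matching document
}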
I am hoping this blog post helps someone.
BTW, LMAO! Funny seeing little scumbags of planet earth using some powerful spying equipment and they trying to pass commands. The scumbags/pests/leeches and sl*ts with the equipment have false prestige and false propaganda.
Azure Key Vault is a service for storing sensitive information such as passwords.
The following NuGet packages are needed:
Azure.Security.KeyVault.Secrets
Azure.Identity
The following code snippet is for accessing Azure Key Vault programmatically.
// [URL] is the Key Vault URI, e.g., https://<vault-name>.vault.azure.net/
var kvClient = new Azure.Security.KeyVault.Secrets.SecretClient(new Uri([URL]), new DefaultAzureCredential());
// create or update a secret named "Hello" with the value "Hello1"
var result = await kvClient.SetSecretAsync("Hello", "Hello1");
// read the secret back
var secret = await kvClient.GetSecretAsync("Hello");
Console.Write(secret.Value.Value);
The above code snippet assumes RBAC-based authentication.
gRPC is a very efficient form of communication between different servers.
In the past, each programming language had its own technique for remote invocation using very efficient binary serialization and de-serialization, but those services were not compatible across programming languages. Then web-service standards such as SOAP, based on XML, and REST APIs, based on XML or JSON, evolved. SOAP and REST APIs are standards and can be implemented in different programming languages; even now, REST-based APIs are the most popular choice. Now comes gRPC, alleviating these problems: it offers very high performance, its development tools are shared across different programming languages, and it takes advantage of HTTP/2 and HTTP/3 where possible.
gRPC re-uses server connections and offers significant advantages in terms of performance, i.e., the overhead of establishing and tearing down connections is minimized. Efficient serialization and de-serialization offer high performance in terms of payload and speed, i.e., fewer CPU cycles and fewer network bytes.
The following tips are for the .NET platform; a configuration sketch follows the list.
Increase the connection concurrency limit: by default, the HTTP/2 stream concurrency limit is 100 per connection. On large servers with many connections, if you see a performance hit, i.e., gRPC calls getting queued, consider increasing the concurrency limit.
Consider client-side load balancing where applicable. Server-side load balancing adds a little extra latency, because the request reaches the load balancer and then gets routed to a server. With client-side load balancing, the client knows how to communicate with the different servers and sends requests appropriately, removing the overhead of the extra latency.
If the service's gRPC messages are larger than 96 KB, consider increasing the InitialConnectionWindowSize and InitialStreamWindowSize.
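A hedged configuration sketch for the Kestrel-related tips above (the numbers are illustrative, not recommendations; webBuilder is the IWebHostBuilder from your host setup):
webBuilder.ConfigureKestrel(options =>
{
    // raise the HTTP/2 stream concurrency limit (default 100)
    options.Limits.Http2.MaxStreamsPerConnection = 200;
    // raise the connection/stream flow-control window sizes for larger messages
    options.Limits.Http2.InitialConnectionWindowSize = 2 * 1024 * 1024; // 2 MB
    options.Limits.Http2.InitialStreamWindowSize = 1024 * 1024;         // 1 MB
});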
The latest versions of .NET support creating REST-based APIs and gRPC-based services from the same code – a topic for a different blog post, but definitely a reason to start using gRPC.
CSRF – Cross-Site Request Forgery – is a certain type of cyber attack, relevant specifically when using cookies!
A different website can post content to another domain while a user of that domain is logged in, or in certain other scenarios. CSRF is considered one of the major vulnerabilities and has been in the OWASP Top 10 – Cross-Site Request Forgery (CSRF).
If you are using token-based authentication and the token is stored in the browser's local storage, CSRF isn't an issue; it is specifically a concern when using cookies.
Basic Usage:
In the .cshtml of web pages, inside forms, add the following tag:
@Html.AntiForgeryToken()
The above code fragment renders a hidden input element containing a long random string.
In the controller class, decorate the action method with the following attribute:
[ValidateAntiForgeryToken]
When the action method is invoked, the validation happens first. If the validation succeeds, the action method gets invoked; if the validation fails, the action method does not get invoked.
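A minimal sketch of the basic usage (the action and model names are hypothetical):
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult Save(ContactFormModel model)
{
    // the anti-forgery token has already been validated before this point
    return RedirectToAction("Index");
}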
Recommended Usages:
If we forget to decorate a POST method with [ValidateAntiForgeryToken], we would be susceptible to CSRF attacks. Instead, we can use a middleware and register the middleware in Startup.cs.
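A hedged sketch of such a middleware, registered in Startup.cs Configure() before MVC/endpoints, using the built-in IAntiforgery service (which HTTP methods to validate is my assumption):
app.Use(async (context, next) =>
{
    // only unsafe methods need validation; GET/HEAD/OPTIONS/TRACE are skipped
    if (HttpMethods.IsPost(context.Request.Method) ||
        HttpMethods.IsPut(context.Request.Method) ||
        HttpMethods.IsDelete(context.Request.Method))
    {
        var antiforgery = context.RequestServices.GetRequiredService<IAntiforgery>();
        // throws AntiforgeryValidationException when the token is missing or invalid
        await antiforgery.ValidateRequestAsync(context);
    }
    await next();
});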
There are other ways of customizing the middleware. For example, if there is a use case where JSON data is being sent to a Web API and cookies are used for authentication, we can add a custom header, i.e., in the calling code we would add the hidden element's value as a header and then make the call.