Categories
.Net C# Lucene MultiFieldQueryParser

Lucene with C#

As part of the development effort for PodDB – a search engine for podcasts and a product of ALight Technology And Services Limited – I had to decide between Lucene.Net and Solr. I strongly suggest Solr over Lucene.Net if you want to scale: Solr provides sharding and replication features out of the box. For smaller datasets, and if you know you won't be scaling into bigger datasets, Lucene.Net shouldn't be a problem and is in fact very efficient. With that said, I do have plans of scaling PodDB if it gains traction, so I chose Solr.

But for the sake of knowledge sharing, in this article I am going to show how to use Lucene.Net for full-text indexing. I will not go over complex scenarios, but at the same time this article is NOT a Hello World for Lucene.Net.

Moreover, Lucene.Net does not seem to be under active development. As of this blog post's date – September 13th, 2022 – there have been no GitHub commits over the past 2 – 3 months. As software developers, technical leads, and architects, we have a responsibility to make proper choices for the underlying technology stack. Although ALight Technology And Services Limited is not an enterprise yet, I would still like to make decisions that are suitable over the long term.

Now let’s dig into some code.

Lucene.Net version 4 is in pre-release, so use the pre-release versions of the Lucene.Net packages.
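For reference, the packages can be added from the .Net CLI roughly like this – the --prerelease flag pulls the latest beta, and the exact beta version available may differ by the time you read this:

> dotnet add package Lucene.Net --prerelease
> dotnet add package Lucene.Net.Analysis.Common --prerelease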

Because Lucene.Net is in beta and there could be lots of breaking changes, the compatibility version needs to be declared in code.

private const LuceneVersion AppLuceneVersion = LuceneVersion.LUCENE_48;

Now we specify the directory where we want the indexes to be written and some initialization code.

using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Lucene.Net.Util;

var dir = FSDirectory.Open(indexDirectory);
var analyzer = new StandardAnalyzer(AppLuceneVersion);
var indexConfig = new IndexWriterConfig(AppLuceneVersion, analyzer);
var writer = new IndexWriter(dir, indexConfig);

Now we use the IndexWriter for writing documents. There are primarily two types of string fields that are important:

  1. TextField – The string data is indexed for full-text search
  2. StringField – The string data is not indexed for full-text search but can be matched exactly, which suits fields such as Id

Based on the above-mentioned types, determine which data needs full-text search capabilities, which data does not, and whether certain data needs to be stored in Lucene at all.

var doc = new Document
{
    new TextField("Title", "Some Data", Field.Store.YES),
    new TextField("Description", "Description", Field.Store.YES),
    new StringField("Id", id, Field.Store.YES)
};

You can add as many TextField and StringField instances as needed. You can even create separate instances of TextField and StringField and call doc.Add().

If you want to optimize the search results provided by Lucene, you can also specify the Boost of a TextField. By default, the Boost – i.e. the weight given to any field – is 1.0, but you can specify a higher weighting for a certain field. For example, if a keyword appears in the title, you might want to boost that document.
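A minimal sketch of boosting, reusing the Title field from the example above; the 2.0f value is just an illustration:

var titleField = new TextField("Title", "Some Data", Field.Store.YES)
{
    Boost = 2.0f // matches on Title now carry twice the default weight
};
doc.Add(titleField);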

Add the doc instance to the writer and call Flush():

writer.AddDocument(doc);
writer.Flush(triggerMerge: false, applyAllDeletes: false);

For speed and efficiency, batch the documents before calling Flush instead of calling Flush for every document.
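A sketch of such batching, assuming a hypothetical items collection providing the data to be indexed:

foreach (var item in items)
{
    writer.AddDocument(new Document
    {
        new TextField("Title", item.Title, Field.Store.YES),
        new StringField("Id", item.Id, Field.Store.YES)
    });
}

// Flush once for the whole batch
writer.Flush(triggerMerge: false, applyAllDeletes: false);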

Assuming you have built your indexes, let's now retrieve documents.

using var dir = FSDirectory.Open(indexPath);
var analyzer = new StandardAnalyzer(AppLuceneVersion);

var indexConfig = new IndexWriterConfig(AppLuceneVersion, analyzer);
using var writer = new IndexWriter(dir, indexConfig);
using var lreader = writer.GetReader(applyAllDeletes: true);
var searcher = new IndexSearcher(lreader);

var exactQuery = new PhraseQuery();
exactQuery.Add(new Term("Id", id));
var search = searcher.Search(exactQuery, null, 1);
var docs = search.ScoreDocs;

if (docs?.Length == 1)
{
    Document d = searcher.Doc(docs[0].Doc);
    var title = d.Get("Title");
}

The above source code retrieves a document based on Id; it is not a full-text search. The first few lines of code are standard initializers. Then we instantiated a PhraseQuery and specified that the search should happen on the "Id" field. Then, if there is a match, we retrieved the Title of the matching document.
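As a side note, for a single-term exact match like this, a TermQuery should work just as well:

var exactQuery = new TermQuery(new Term("Id", id));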

Now let’s see how we can search based on Title and Description as mentioned above:

using var dir = FSDirectory.Open(indexPath);
var analyzer = new StandardAnalyzer(AppLuceneVersion);

var indexConfig = new IndexWriterConfig(AppLuceneVersion, analyzer);
using var writer = new IndexWriter(dir, indexConfig);
using var lreader = writer.GetReader(applyAllDeletes: true);
var searcher = new IndexSearcher(lreader);

string[] fnames = { "Title", "Description" };
var multiFieldQP = new MultiFieldQueryParser(AppLuceneVersion, fnames, analyzer);

Query query = multiFieldQP.Parse("My Search Term");
var search = searcher.Search(query, null, 10);

Console.WriteLine(search.TotalHits);

ScoreDoc[] docs = search.ScoreDocs;
foreach (var scoreDoc in docs)
{
    Document d = searcher.Doc(scoreDoc.Doc);

    var id = d.Get("Id");
    var title = d.Get("Title");
    var description = d.Get("Description");
}

In the above source code, we have the standard initializers in the first few lines. Then we specify the fields on which the search should happen via the fnames variable, and instantiate a MultiFieldQueryParser to enable searching on multiple fields. Then we build the query by specifying the search term; advanced boolean queries can also be created in this step. Then the search is performed – we can specify how many documents the result should contain, in this case 10 results. The rest of the code fetches the field values.
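As an illustration of the advanced boolean queries mentioned above – a sketch with two hypothetical search terms:

var booleanQuery = new BooleanQuery();
booleanQuery.Add(multiFieldQP.Parse("podcasts"), Occur.MUST);     // must match
booleanQuery.Add(multiFieldQP.Parse("technology"), Occur.SHOULD); // optional, improves the score if matched
var boolSearch = searcher.Search(booleanQuery, null, 10);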

I am hoping this blog article helps someone.

Categories
AWS

AWS CLI – Copying objects from and to S3

This is a very small blog post about how to copy objects to and from S3. The blog post assumes the AWS CLI is installed and configured, and the user has appropriate permissions.

The command is simple and straightforward:

> aws s3 cp <src> <dest>

If copying from the current computer to S3, an example would be:

> aws s3 cp "/path/file.zip" "s3://<BucketName>/Path/destinationFile.zip"

If copying from S3 to the local machine, the source and destination are reversed:

> aws s3 cp "s3://<BucketName>/Path/destinationFile.zip" "/path/file.zip"
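As a side note, whole directories can be copied by adding the --recursive flag, for example:

> aws s3 cp "/path/myfolder" "s3://<BucketName>/Path/myfolder" --recursive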
Categories
.Net C# Lucene Solr

Lucene vs Solr

I played around with Lucene.Net and Solr. Solr is built on top of Lucene.

Lucene.Net is a port of the Lucene library, written in C# for working with Lucene on the Microsoft .Net stack.

Lucene is a library built by the Apache Software Foundation that provides full-text search capabilities. There are a few alternatives, such as Sphinx and the full-text search capabilities built into RDBMSs such as Microsoft SQL Server, MySQL, MariaDB, PostgreSQL etc… However, full-text search in RDBMSs is generally not as efficient as Lucene.

Solr and ElasticSearch are built on top of Lucene. ElasticSearch is more suitable and efficient for time-series data.

Now let’s see more about Solr vs Lucene.

Solr provides some additional features such as replication, a web app GUI, collecting and publishing metrics, fault tolerance etc… Solr provides HTTP REST-based APIs for management, adding documents, searching documents and so on.
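For example, a search can be issued with a simple HTTP GET – a sketch, assuming a locally running Solr instance with a core named podcasts:

> curl "http://localhost:8983/solr/podcasts/select?q=title:technology"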

Directly working with Lucene would provide access to more fine-grained control.

Because Solr provides REST-based APIs, there is the overhead of establishing an HTTP connection, formatting the requests, and JSON serialization and deserialization at both ends, i.e. the client making the call and the Solr server. When working directly with Lucene, this overhead does not exist.

If searching through the documents happens on the same server, working directly with Lucene might be more efficient, especially for smaller datasets. But if huge datasets and scaling are a concern, Solr might be the proper approach.

If the infrastructure requires separate search servers, with a number of application servers querying the search servers for data, Solr might be more useful and easier because of the existing support for replication and the HTTP APIs.

If performance is of the highest importance and fine-grained control is still needed, custom-built applications could expose the data from the search servers over a more efficient protocol such as gRPC – but obviously, replication mechanisms would then need to be custom-built.

Categories
Github

How to access GitHub with SSH keys

Sometimes we need to access private repositories of a GitHub account from Linux servers for various purposes. For example, I have MFA enabled on my GitHub account and need to clone a private repository onto a Linux server hosted in AWS. This article explains how to deal with such situations.

This is pretty much a re-hash of the steps provided in the following GitHub documentation pages, but in a slightly friendlier way:

Generating a new SSH key and adding it to the ssh-agent

Adding a new SSH key to your GitHub account

Login into your server and generate SSH keys by issuing the following command:

ssh-keygen -t ed25519 -C "<YOUR_EMAIL_ADDRESS>"

You will be prompted for a filename, a password, and to re-type the password. You can accept the defaults by pressing Enter, or enter a separate filename and password.

Now start the SSH agent by issuing the following command:

eval "$(ssh-agent -s)"

Now add the previously generated key for usage by issuing the following command:

ssh-add ~/.ssh/id_ed25519

Now copy the content of the public key. The clip command works on Windows; on a Linux terminal, view the content with cat and copy it manually:

clip < ~/.ssh/id_ed25519.pub
or
cat ~/.ssh/id_ed25519.pub

Now go to your GitHub account, sign in, click on your profile at the top right, and click Settings.

GitHub Profile Menu

From the left menu click SSH and GPG Keys

GitHub profile settings page’s left-side menu

Click New SSH Key

SSH Keys

Provide a title (a friendly name to remember), enter the public key, and click Add SSH key.

Add SSH key screen

Depending on your settings you might be prompted to re-authenticate or for MFA.
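Optionally, you can verify that the key works before cloning by issuing the following command; GitHub should respond with a greeting containing your username:

ssh -T git@github.com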

Now try cloning a repository from your Linux terminal. First copy the SSH URL of the repo:

Git clone URL screen

Then issue one of the following commands on the Linux terminal to clone the default branch or a specific branch:

git clone <SSH_URL>
or
git clone --branch <BRANCH_NAME> <SSH_URL>

Hoping this helps someone :).

Categories
.Net C# UnitTests

Moq Non-Invocable Test Setups

Most of you might know the Moq library used for unit tests in .Net. The general usage of Moq is stubbing interfaces and verifying that methods are called with the expected parameters, but Moq can also be used for validating that a certain method has NOT been called.

Assume you have an interface ISpecialInterface and you expect ISpecialInterface.Method() to be called only when certain logic happens, such as an if condition. If you are writing a unit test for the case where the condition is false, you want to verify that ISpecialInterface.Method() is not invoked.

var mockSpecialInterface = new Mock<ISpecialInterface>(MockBehavior.Strict);
mockSpecialInterface.Setup(ss => ss.Method(It.IsAny<string>()));

var sut = new SpecialObject(mockSpecialInterface.Object);
sut.LogicMethod();

mockSpecialInterface.Verify(ss => ss.Method(It.IsAny<string>()), Times.Never());

In the above code, mockSpecialInterface.Verify(ss => ss.Method(It.IsAny<string>()), Times.Never()); is the line that verifies the method is not called.
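As a side note – assuming you are on a reasonably recent version of Moq – VerifyNoOtherCalls() can complement this pattern by asserting that no calls beyond the explicitly verified ones were made:

mockSpecialInterface.VerifyNoOtherCalls();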


Categories
.Net C#

Programmatically configuring NLog in C#

Some people might prefer programmatically configuring NLog for various reasons. One use case: you may not want to store sensitive information in the nlog.config file. Targets that require sensitive information in the config file include, for example, the Database target (connection strings) and the Mail target (SMTP credentials) – not an exhaustive list.

You might want to store the sensitive information somewhere else in an encrypted format, decrypt the password at runtime, and configure the logger programmatically. Here is a code sample showing how to configure such loggers.

var logConfig = new LoggingConfiguration();

// File
var fileTarget = new FileTarget
{
    FileName = typeof(Program).FullName + ".log"
};

fileTarget.Layout = @"${date:format=HH\:mm\:ss} ${logger}:${message};${exception}";

var fileRule = new LoggingRule("*", LogLevel.Debug, fileTarget);
logConfig.LoggingRules.Add(fileRule);
logConfig.AddTarget("logfile", fileTarget);

// DB
var dbTarget = new DatabaseTarget();

// Hypothetical helper that decrypts and returns the connection string
dbTarget.ConnectionString = YourSecureMethodForDecryptingAndObtainingConnectionString();

dbTarget.CommandText = @"INSERT INTO [Log] (Date, Thread, Level, Logger, Message, Exception) VALUES (GETDATE(), @thread, @level, @logger, @message, @exception)";

dbTarget.Parameters.Add(new DatabaseParameterInfo("@thread", new NLog.Layouts.SimpleLayout("${threadid}")));
// ... add DatabaseParameterInfo entries for @level, @logger, @message and @exception similarly ...

logConfig.AddTarget("database", dbTarget);
var dbRule = new LoggingRule("*", LogLevel.Debug, dbTarget);
logConfig.LoggingRules.Add(dbRule);

LogManager.Configuration = logConfig;

In the above sample code, we looked at how to add multiple types of targets (File and DB), how to set the layout for the FileTarget, how to configure logging rules, and finally how to assign the programmatic configuration.

An interesting target is the Memory target, which keeps log messages in an in-memory list for programmatic retrieval – great for unit testing. The NLog documentation includes code samples for the Memory target.
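A minimal sketch of the Memory target; the target name and layout here are just illustrations:

var memoryTarget = new NLog.Targets.MemoryTarget("memory")
{
    Layout = "${message}"
};

var memoryConfig = new LoggingConfiguration();
memoryConfig.AddRuleForAllLevels(memoryTarget);
LogManager.Configuration = memoryConfig;

LogManager.GetCurrentClassLogger().Info("Hello from the test!");

// memoryTarget.Logs should now contain "Hello from the test!"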

Categories
.Net AWS C# CI/CD Code Build Github

How to setup CI/CD for C# application in AWS Lambda

In this blog post, I am going to write about how to set up a CI (Continuous Integration) / CD (Continuous Deployment) pipeline for a C# application in AWS Lambda.

The source code repository can be either GitHub or AWS CodeStar. We are going to use CodeBuild for the build, configured by a YAML file in the root of the source code repository known as buildspec.yml.

Inside the buildspec.yml file, we use the dotnet lambda tool for the deployment.

Create a new AWS CodeBuild project by choosing the source provider and the other options. Use the Managed Image: Ubuntu, Standard, aws/codebuild/standard:6.0. Make note of the newly created role, or if using an existing role, make note of that role.

Create a new Lambda function with a .Net 6.0 runtime; make note of the IAM role for the Lambda function and the name of the Lambda function. The buildspec.yml below assumes the Lambda function is named LambdaFunctionName and the role is arn:aws:iam::xxxxxx:role/service-role/xxxxrole.

The following is an example buildspec.yml file:

version: 0.2
env:
  variables:
    DOTNET_ROOT: /root/.dotnet
phases:
  install:
    runtime-versions:
      dotnet: 6.0
  pre_build:
    commands:
      - echo Restore started on `date`
      - export PATH="$PATH:/root/.dotnet/tools"
      - pip install --upgrade awscli
      - cd Project1
      - dotnet clean
      - dotnet build
      - dotnet test
  build:
    commands:
      - echo Build started on `date`
      - dotnet new -i Amazon.Lambda.Templates::*
      - dotnet tool install -g Amazon.Lambda.Tools
      - dotnet tool update -g Amazon.Lambda.Tools
      - dotnet lambda deploy-function "LambdaFunctionName" --function-role "arn:aws:iam::xxxxxx:role/service-role/xxxxrole" --region "eu-west-2"

In the above buildspec.yml sample, we use .Net 6, navigate to the folder where the .sln file is located, and perform a clean, build, and test. Once these steps pass, we install the AWS Lambda tools and use lambda deploy-function to deploy.

Remember to change the cd statement to the appropriate folder structure to navigate to the folder which contains your .sln solution file, the Lambda Function Name and the IAM role of the Lambda Function in the above buildspec.yml file.

We still need to grant the role under which the build runs permission to deploy the code to Lambda. Navigate to IAM and either create a custom policy or attach an inline policy directly to the CodeBuild role.

In the below screenshot, I have attached an inline policy:
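For reference, the inline policy could look roughly like the following; the listed actions are a sketch of what a function-code deploy typically needs, and the ARNs are the placeholders used above – scope them to your actual function and role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration"
      ],
      "Resource": "arn:aws:lambda:eu-west-2:xxxxxx:function:LambdaFunctionName"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::xxxxxx:role/service-role/xxxxrole"
    }
  ]
}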

That’s all for now! Happy coding my dear fellow developers!


We need to secure our applications and our users from malicious hackers and prevent espionage.

Categories
Welcome

NLog in .Net Applications


The source code can be obtained from: https://github.com/ALightTechnologyAndServicesLimited/NLogSampleDemo001

NLog is a structured logging framework for .Net. NLog is highly extensible, configurable, and supports a lot of plugins. This post is about basic usage of NLog; we won't be discussing any of the advanced features. NLog can be added via NuGet. There are several ways to configure NLog; the documentation can be found here.

NLog supports several built-in targets such as Console, File, Database etc…; the list of targets can be found here. Some of my favorite targets are: Console, File, Database, BufferingWrapper, AsyncWrapper, AspNetBufferingWrapper, FallbackGroup, RetryingWrapper, ElasticSearch, EventLog, FluentD, MicrosoftTeams, Amazon CloudWatch, Amazon SNS, Amazon SQS, Microsoft Azure ApplicationInsights, Microsoft Azure Blob Storage, Microsoft Azure DataTables Storage, Microsoft Azure FileSystem, Microsoft Azure Queue Storage, Memory, Network, WebService, MessageBox and many more. All the targets are very well documented.

Layouts allow custom layout of log messages and can be viewed here.

Layout renderers are very useful; they can be used to determine exactly what information is captured, and the list can be found here. Capturing the proper information helps in troubleshooting issues.
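For example, a layout combining a few common renderers might look like this:

${longdate}|${level:uppercase=true}|${logger}|${message} ${exception:format=tostring}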

In this post, we are going to discuss how to integrate NLog into a console application and how to inject logger into a class for use.

We start off by adding the required nuget packages:

NLog
NLog.Extensions.Logging
Microsoft.Extensions.Configuration.Json
Microsoft.Extensions.DependencyInjection

// Packages can be added from the .Net CLI using the syntax below:
> dotnet add package <package_name>
// Optionally, the version can be specified via the --version flag.
// As of the date of this blog post, the following commands would add the latest packages:

> dotnet add package NLog --version 5.0.1
> dotnet add package NLog.Extensions.Logging --version 5.0.0
> dotnet add package Microsoft.Extensions.Configuration.Json --version 7.0.0-preview.5.22301.12
> dotnet add package Microsoft.Extensions.DependencyInjection --version 7.0.0-preview.5.22301.12

Next we will create a nlog.config file with the following content:

<?xml version="1.0" encoding="utf-8" ?>
<!-- XSD manual extracted from package NLog.Schema: https://www.nuget.org/packages/NLog.Schema-->
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xsi:schemaLocation="NLog NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogFile="c:\temp\logs\nlog-internal.log"
      internalLogLevel="Info" >

  <!-- the targets to write to -->
  <targets>
    <!-- write logs to file -->
    <target xsi:type="File" name="logfile" fileName="c:\temp\logs\logs.log"
            layout="${longdate}|${level}|${message} |${all-event-properties} ${exception:format=tostring}" />
    <target xsi:type="Console" name="logconsole"
            layout="${longdate}|${level}|${message} |${all-event-properties} ${exception:format=tostring}" />
  </targets>

  <!-- rules to map from logger name to target -->
  <rules>
    <logger name="*" minlevel="Trace" writeTo="logfile,logconsole" />
  </rules>
</nlog>

I personally prefer creating environment-specific nlog.config files, so a file named nlog.Development.config can be created for Development-specific configuration. We will see code snippets for using environment-specific config files further down in this blog. Remember to set the config files' Copy to Output Directory property to "Copy always".
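In the .csproj, that setting looks roughly like the following for SDK-style projects:

<ItemGroup>
  <None Update="nlog.config" CopyToOutputDirectory="Always" />
  <None Update="nlog.Development.config" CopyToOutputDirectory="Always" />
</ItemGroup>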

Now we will add a sample interface called IMyInterface:

public interface IMyInterface
{
   void PrintHello();
}

Now we will add a class that implements IMyInterface, known as MyClass:

public class MyClass : IMyInterface
{
   private readonly ILogger<MyClass> logger;

   public MyClass(ILogger<MyClass> logger)
   {
      this.logger = logger;
      logger.LogDebug("Constructor Invoked.");
   }

   public void PrintHello()
   {
      logger.LogDebug("PrintHello Invoked.");
      Console.WriteLine("Hello!");
   }
}

Now for Main. The following usings are used:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using NLog;
using NLog.Extensions.Logging;

Now the actual code:

var logger = LogManager.GetCurrentClassLogger();

try
{
    var environment = Environment.GetEnvironmentVariable("DOTNET_ENVIRONMENT");
    var configBuilder = new ConfigurationBuilder()
           .SetBasePath(System.IO.Directory.GetCurrentDirectory())
           .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
    var nlogConfigFileName = "nlog.config";

    if (!String.IsNullOrEmpty(environment))
    {
        configBuilder.AddJsonFile($"appsettings.{environment}.json", optional: true, reloadOnChange: true);
        nlogConfigFileName = $"nlog.{environment}.config";
    }

    var config = configBuilder.Build();

    
    var servicesProvider = new ServiceCollection()
        .AddTransient<IMyInterface, MyClass>()
        // Add additional services, i.e. bind interfaces to classes
        .AddLogging(loggingBuilder =>
        {
            // configure Logging with NLog
            loggingBuilder.ClearProviders();
            loggingBuilder.SetMinimumLevel(Microsoft.Extensions.Logging.LogLevel.Trace);
            loggingBuilder.AddNLog(nlogConfigFileName);
        })
        .BuildServiceProvider();

    using (servicesProvider as IDisposable)
    {
        var myInterface = servicesProvider.GetRequiredService<IMyInterface>();
        myInterface.PrintHello();
    }
}
catch(Exception e)
{
    logger.Error(e, "Stopped program because of exception");
    throw;
}
finally
{
    LogManager.Shutdown();
}

In the above code, we assume the DOTNET_ENVIRONMENT environment variable holds information about the environment (the ASPNETCORE_ENVIRONMENT environment variable is used for ASP.Net Core applications). Another assumption is that appsettings.[ENVIRONMENT].json holds environment-specific settings that override the defaults; of course, we marked the appsettings.[ENVIRONMENT].json file as optional.

We specify nlog.[ENVIRONMENT].config as the config file for NLog logging. If needed, additional code can be added for verifying that the file exists. This way we can have separate NLog config files for each environment.

Now, using dependency injection, an ILogger<ClassName> can be injected into the constructor of a class. For example:

using Microsoft.Extensions.Logging;

public class MyClass : IMyInterface {
   private ILogger<MyClass> logger;

   public MyClass(ILogger<MyClass> logger){
      this.logger = logger;

      logger.LogDebug("Message");
   }
}

The source code can be obtained from: https://github.com/ALightTechnologyAndServicesLimited/NLogSampleDemo001


Categories
.Net UnitTests

Code coverage in .Net

This blog post can be considered obsolete – the OpenCover tool has been archived. Refer to this blog post for the latest:

Code coverage in .Net

This blog post covers code coverage in .Net. In particular, we will be discussing code coverage using the NUnit, OpenCover, and ReportGenerator tools.

For the sake of this discussion, assume we have some projects in the following namespaces:

Project1.Facade

Project1.App

Project1.Service

Project1.Facade.Tests – NUnit Test Project

Project1.Service.Tests – NUnit Test Project

First we need to run the tests using the NUnit console test runner. The NUnit Console Test Runner can be downloaded from here, under the releases section. The assumption is that the package has been installed under the C:\ drive and C:\NUnit\nunit-console\nunit3-console.exe exists; if not, adjust the paths as necessary.

Create a batch file and call it “RunTests.bat” with the following content:

C:\NUnit\nunit-console\nunit3-console.exe .\Project1.Facade.Tests\bin\Debug\net6.0\Project1.Facade.Tests.dll
C:\NUnit\nunit-console\nunit3-console.exe .\Project1.Service.Tests\bin\Debug\net6.0\Project1.Service.Tests.dll

Now we will create another batch file, "CodeCoverage.bat", with the commands for OpenCover and ReportGenerator. The assumption is that OpenCover and ReportGenerator have been installed at the following locations: C:\OpenCover\OpenCover.Console.exe and C:\ReportGenerator\net6.0\ReportGenerator.exe.

The content of the batch file would be:

C:\OpenCover\OpenCover.Console.exe -target:.\RunTests.bat -register:user -filter:"+[Project1*]* -[Project1*Tests]*" -oldstyle -mergeOutput

C:\ReportGenerator\net6.0\ReportGenerator.exe -reports:results.xml -targetdir:coverage

start .\coverage\index.htm

We first have OpenCover run RunTests.bat; it collects some data and generates a results.xml file. The important parameters are -filter, -oldstyle, and -mergeOutput. The results.xml file becomes the input of ReportGenerator, which generates a report and saves it inside a directory named coverage.

The last command opens the index.htm file from the generated report.

A bunch of log files and XML files are generated during these steps. Assuming there are no other XML or log files in the directory, these can be removed easily by adding appropriate del commands to the second batch file.
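For example – assuming no other .xml or .log files in the directory need to be kept – something like the following could be appended to CodeCoverage.bat:

del results.xml
del *.log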

The project structure is slightly different, but some sample Add and Subtract example screenshots can be seen below:

Hoping this post helps someone improve their coding practices! I worked a little late into the night, but I work for myself, for my own startup, so no worries!

Code coverage in .Net

Categories
AWS Security

AWS – Discouraging the use of CLI!


After much thought and consideration, I think the use of the AWS CLI should be discouraged unless very strong VPN authentication and proactive monitoring are in place.

The reason is that more and more of the workforce is working remotely, accessing the cloud remotely. If VPN connections are not properly secured and monitored, it becomes much easier for malicious actors to log into the VPN. And even with restrictions in place, stealing an Access Key and Secret Key would be much easier.


Another important consideration: no sensitive information should be displayed on the screen.

Accessing AWS services through properly tested and secured applications that have proper auditing and logging should not be a problem in most scenarios.

Hypothetically, let's say some sensitive data is stored in a DynamoDB table. Even if only read-only access is provided via the CLI using a Secret Key and Access Key, there is a possibility of the data getting stolen. But in a proper, well-tested application, the sensitive data might not be displayed at all, or only masked data would be displayed. In some rare cases, the actual data and the masked data might even be stored separately, with the application reading only the masked data.

For example, irrespective of whether the data is stored in an RDBMS or a NoSQL database, suppose credit card numbers are stored and some page has to show 100 masked card numbers. It would be risky and CPU-consuming to decrypt 100 credit card numbers and then calculate the masked values – and worse still to search millions of encrypted credit card numbers by their last 4 digits. It is safer to store the masked form, such as the last 4 digits, separately.
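A minimal sketch of the masking idea – a hypothetical helper, not a production-grade implementation:

// Returns e.g. "************1234" – store this alongside (not instead of) the encrypted value
static string MaskCardNumber(string cardNumber)
{
    var last4 = cardNumber[^4..];
    return new string('*', cardNumber.Length - 4) + last4;
}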

At ALight Technology And Services Limited at least, I would personally discourage the usage of the CLI in most scenarios.
