
Some useful Javascript DOM manipulation functions

Most of you know I like sharing my knowledge. Here are some simple but very useful DOM manipulation functions in Javascript.

As part of development for WebSearch, I wanted leaner JavaScript, so for some parts of the development I am using plain JavaScript rather than libraries such as jQuery. jQuery has this functionality and allows easier development, but my situation and necessity are a bit different.

var ele = document.getElementById("elementid");
// for getting a reference to an existing element in the DOM

var dv = document.createElement("div");
// for creating an in-memory element

parentEle.appendChild(childEle);
// for adding an element as a child element of another element

ele.id = "elementId";
// Setting id of element

ele.classList.add("cssClass");
ele.classList.remove("cssClass");
// Adding and removing css classes

ele.innerText = "Text";
ele.innerHTML = "<span>...</span>";
// Setting text and innerHTML
// caution with innerHTML - don't inject unsafe/unvalidated markup

ele.addEventListener("event", (ev) => {
   // Anonymous function
});
// Handle events such as click etc...

ele.addEventListener("event", fnEventHandler);
// Handle events by using a function - fnEventHandler

I am hoping this blog post helps some people.


Injecting Metrics into Graphite – Hosted Grafana – Some C# Code samples

There are various solutions for collecting, storing and viewing metrics. This blog post is specifically about the following list of software:

  1. CollectD – For collecting system metrics
  2. Carbon-Relay-ng – A relay service that receives metrics and forwards them into Graphite
  3. Hosted Graphite at Grafana.com – The backend that stores the metrics
  4. Grafana – For viewing metrics
  5. Grafana for alerts

Collectd

Collectd is a very lightweight Linux tool with low memory and CPU usage that runs as a service and can collect various system-related metrics. Collectd is very extensible and has several plugins. Some of the plugins I like and have used are:

  1. Apache web server – Gathers Apache related stats
  2. ConnTrack – Number of connections in Linux connection tracking table
  3. ContextSwitch – Number of context switches
  4. CPU
  5. DNS
  6. IP-Tables
  7. Load
  8. MySQL
  9. Processes
  10. tcpconns
  11. users
  12. vmem

My favorite output plugins, and the ones I am familiar with, are:

  1. CSV
  2. Write Graphite
  3. gRPC

Carbon-relay-ng

This is not necessarily my favorite, because it is a little heavy on system resources 🙁

Now host Carbon-relay-ng on one of the servers and install Collectd on the servers whose metrics need to be ingested. Use Collectd's write_graphite plugin for ingesting metrics into Carbon-relay-ng, and configure Carbon-relay-ng to ingest the metrics into hosted Graphite on Grafana.com.
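A minimal write_graphite sketch for /etc/collectd/collectd.conf (the host name is a placeholder for your Carbon-relay-ng server; 2003 is the default plaintext Carbon port):

# Sketch only - adjust Host to your Carbon-relay-ng server
LoadPlugin write_graphite
<Plugin write_graphite>
  <Node "carbon-relay">
    Host "carbon-relay.example.com"
    Port "2003"
    Protocol "tcp"
    Prefix "collectd."
  </Node>
</Plugin>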

For ingesting any code-based metrics, use the ahd.Graphite NuGet package:

var client = new CarbonClient("example.com");

var datapoints = new[]
    {
        new Datapoint("data.server1.cpuUsage", 10, DateTime.Now),
        new Datapoint("data.server2.cpuUsage", 15, DateTime.Now),
        new Datapoint("data.server3.cpuUsage", 20, DateTime.Now),
    };

await client.SendAsync(datapoints);

//Sample code from - https://github.com/ahdde/graphite.net

Instead of instantiating too many instances, I would suggest using either a singleton or a very small pool of instances, for example:
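A minimal singleton-style sketch (the host name "example.com" is a placeholder, as in the sample above):

using System;
using ahd.Graphite;

public static class MetricsClientHolder
{
    // Lazy<T> gives thread-safe, initialize-on-first-use semantics,
    // so the whole application shares a single CarbonClient.
    private static readonly Lazy<CarbonClient> _client =
        new Lazy<CarbonClient>(() => new CarbonClient("example.com"));

    public static CarbonClient Client => _client.Value;
}

Anywhere metrics are sent, use MetricsClientHolder.Client.SendAsync(...) instead of constructing a new client.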

I have promised in the announcements blog to semi-open-source some code from my upcoming project – Alerts. Anyone with some programming knowledge can implement such a solution by following this blog. Implementation will proceed slowly because I am planning to get a normal 9–5 job.

Moreover, for at least 6–12 months, the project will be offered completely free of charge to companies / individuals who see a need and can provide feedback.

Roadmap for the next few months!


Using OpenTelemetry in ASP.Net MVC

OpenTelemetry is pretty much like logs and metrics, but with distinguishable TraceIds that let you correlate events across components.

Yesterday and this morning, I experimented with OpenTelemetry in a sample ASP.Net MVC application.

The primary components are:

  1. A host for Tempo – I am using Grafana-hosted Tempo – https://www.grafana.com. Grafana has a very generous free tier with 100 GB of traces per month.
  2. Grafana Agent – As of now, I have used Grafana Agent on a Windows laptop; I have not configured it on the Linux production servers yet. Grafana Agent can be downloaded from its GitHub releases page – click on the releases on the right side and choose the build for your operating system. I used v0.31.0.
  3. OpenTelemetry SDK for .Net

The OpenTelemetry SDK packages for .Net are in preview; the APIs might change.

Install the Grafana Agent and update the configuration file. Here is a sample of the config:

server:
  log_level: warn
metrics:
  wal_directory: C:\ProgramData\grafana-agent-wal
  global:
    scrape_interval: 1m
  configs:
    - name: integrations
integrations:
  windows_exporter:
    enabled: true
traces:
  configs:
  - name: default
    remote_write:
      - endpoint: tempo-us-central1.grafana.net:443
        basic_auth:
          username: <YOUR GRAFANA USER_ID>
          password: "<YOUR GRAFANA API KEY>"
    receivers:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
      zipkin:
      otlp:
        protocols:
          http:
          grpc:
      opencensus:

Restart the Grafana Agent service (e.g., from the Windows Services panel).

Add the following pre-release packages to your ASP.Net MVC application:


OpenTelemetry.Api
OpenTelemetry.Exporter.Jaeger
OpenTelemetry.Extensions.Hosting
OpenTelemetry.Instrumentation.AspNetCore
OpenTelemetry.Instrumentation.Http

Now use the following code:

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("Sample-Web"))
        .AddAspNetCoreInstrumentation()
        .AddGrpcCoreInstrumentation()
        .SetErrorStatusOnException()
        .AddJaegerExporter()
        .AddConsoleExporter())
    .StartWithHost();

Run the application.

Now go to your Grafana account, click Browse, and select the traces data source from the drop-down at the top.

(Screenshot: Grafana trace search)

Clicking on one of the trace IDs shows the details:

(Screenshot: Grafana trace details)

There are additional trace instrumentation packages that can be used on a necessity basis for:

  1. MySQL Client
  2. SQL Server Client
  3. HTTP Client
  4. GRPC
  5. ElasticSearch
  6. AWS
  7. AWS Lambda

You can expect to see some more blog articles regarding Logging, Tracing, and Metrics, i.e., Observability.


High level architecture of centralized logging and retention strategy at ALight Technology And Services Limited

This is a general blog post on how centralized logging has been implemented and some of the tools used, while keeping costs low.

Having the ability to maintain logs is very important for software companies, even small startups. Centralized logging, monitoring, metrics and alerts are also important.

Log ingestion is done using FluentD. FluentD is installed on all the servers, and a golden base AMI has even been created with FluentD pre-installed.

Grafana Loki is used as the log ingestion server.

The Grafana front-end is used for viewing the logs from Loki.

FluentD has been configured to set different labels for different log sources; the output is written both into Loki and into files.

The output files are zipped and uploaded into S3 with lifecycle policies. S3 buckets can be configured to be immutable via Object Lock, i.e., once a file is uploaded, it can't be deleted, re-written, or modified until a specified period has elapsed.
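For example (a sketch – the bucket name and retention period are placeholders, and Object Lock must be enabled when the bucket is created), a default retention can be set with the AWS CLI:

# Sketch only - placeholder bucket name and retention period
aws s3api put-object-lock-configuration \
    --bucket my-log-archive-bucket \
    --object-lock-configuration \
    '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}}'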

Loki has been configured with a smaller retention period. I wish Grafana Loki supported something like retaining time slices; more on the concept of time slices later in this blog post.

Loki can be configured for a longer retention period, but that incurs unnecessary EBS storage costs. S3 Standard-Infrequent Access or S3 Glacier Instant Retrieval are much cheaper for archival data. Configure the system based on your needs.

A new component in C# is being developed to ingest logs into Loki on an as-needed basis. I will definitely post some sample code of the new component.
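Until then, here is a minimal sketch (not the actual component) of pushing a log line to Loki's HTTP push API. The URL matches the Loki endpoint from the FluentD config below; the labels and names are illustrative placeholders.

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class LokiPushSample
{
    private static readonly HttpClient _http = new HttpClient();

    public static async Task PushAsync(string line, DateTimeOffset timestamp)
    {
        // Loki's push API expects nanosecond epoch timestamps as strings.
        var ns = (timestamp.ToUnixTimeMilliseconds() * 1_000_000L).ToString();
        var payload = new
        {
            streams = new[]
            {
                new
                {
                    stream = new { env = "Grafana", source = "s3-archive" },
                    values = new[] { new[] { ns, line } }
                }
            }
        };
        var content = new StringContent(JsonSerializer.Serialize(payload),
            Encoding.UTF8, "application/json");
        var response = await _http.PostAsync(
            "http://grafanaloki:3100/loki/api/v1/push", content);
        response.EnsureSuccessStatusCode();
    }
}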

With the above configuration in place, logs become immutable within 6 minutes of being written. Let's say something happened and was noticed within 1 day: I can immediately change Loki's retention period and keep the logs for a longer period. If I notice an abnormality and the logs are no longer available in Loki due to the shorter retention period, the new component being developed would re-ingest the archived logs from S3 into Loki with the original timestamps. Under normal circumstances this wouldn't be required, but there is no point in having archived logs that cannot be ingested and searched when required.

Some sample config elements for FluentD:

Code block for ingesting logs from CloudWatch:

I am ingesting CloudTrail logs; I will write a blog post or record a video about this sometime later.

<source>
  @id cloudwatch_logs
  @type cloudwatch_logs
  tag cloudwatch.cloudtrail
  log_group_name <LOG_GROUP_NAME>
  add_log_group_name false
  use_log_group_name_prefix true
  log_stream_name <LOG_STREAM_PREFIX>
  use_log_stream_name_prefix true
  region <AWS-REGION>
  include_metadata true
  <parse>
   @type json
  </parse>
  <storage>
    @type local
    path /var/log/td-agent/cloudwatch_cloudtrail.json
  </storage>
</source>

Sample for log files:

<source>
  @type tail
  @id grafana
  path /var/log/grafana/*.log
  pos_file /var/log/td-agent/grafana.pos
  tag software.grafana
  refresh_interval 5
  <parse>
    @type none
  </parse>
  read_from_head true
  pos_file_compaction_interval 1h
</source>

Sample filters for adding additional labels:

<filter **>
  @type record_transformer
  <record>
    tag_name ${tag}
    td_host GrafanaLoki
  </record>
</filter>

<filter cloudwatch.**>
  @type record_transformer
  <record>
    group cloud
    subgroup cloudwatch
  </record>
</filter>

Sample for outputting into files, archiving, and ingesting into Loki:

<match **>
  @type copy
  @id copy
  <store>
    @id loki
    @type loki
    url "http://grafanaloki:3100"
    extra_labels {"env":"Grafana", "host":"Grafana"}
    flush_interval 1s
    flush_at_shutdown true
    buffer_chunk_limit 1m
    <label>
      tag_name
      td_host
      group
      subgroup
      level_three
    </label>
  </store>
  <store>
    @id file
    @type file
    path /var/log/fluentd/grafana_1/${tag}/file.GrafanaLoki.%Y-%m-%d_%H:%M:00
    append true
    <buffer tag, time>
      timekey 5m
      timekey_use_utc true
      timekey_wait 1m
    </buffer>
  </store>
</match>

The above configs are pretty much self-explanatory, and using Loki and Grafana is also very easy. Most importantly, configure and use Grafana with a 3rd-party login instead of just a username and password. I can't stress the importance of MFA enough; if possible, use a hardware key such as YubiKey Bio, since most other forms of MFA have vulnerabilities and are more easily phished.

Metrics:

I am using collectd, Carbon, and Grafana Cloud for metrics: all the servers run collectd, collectd ingests metrics into Carbon, and Carbon forwards these metrics into Grafana Cloud. Based on observed patterns, threshold alerts are set. I am planning to ingest additional custom metrics, but that's planned for later; when I get to that phase, I will definitely write some blog posts.

Alerts:

The most damage would happen if an attacker managed to log into the AWS Console or SSH into the servers. I have written some custom code for a Lambda that parses CloudWatch logs looking for the AWS console login pattern and sends an alert; this Lambda runs once every minute. Even if someone were to observe or record my screen, they could not log in, because of biometric MFA authentication.
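Here is a minimal sketch of such a Lambda (assuming CloudTrail is delivered to a CloudWatch log group; the log group name and SNS topic ARN are placeholders – this is not my production code):

using System;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;
using Amazon.SimpleNotificationService;

public class ConsoleLoginAlert
{
    private readonly AmazonCloudWatchLogsClient _logs = new AmazonCloudWatchLogsClient();
    private readonly AmazonSimpleNotificationServiceClient _sns = new AmazonSimpleNotificationServiceClient();

    // Scheduled (e.g., once a minute) via an EventBridge rule.
    public async Task Handler()
    {
        var now = DateTimeOffset.UtcNow;
        var response = await _logs.FilterLogEventsAsync(new FilterLogEventsRequest
        {
            LogGroupName = "<CLOUDTRAIL_LOG_GROUP>",
            // CloudWatch Logs JSON filter pattern for console sign-in events.
            FilterPattern = "{ $.eventName = \"ConsoleLogin\" }",
            StartTime = now.AddMinutes(-1).ToUnixTimeMilliseconds(),
            EndTime = now.ToUnixTimeMilliseconds()
        });

        foreach (var evt in response.Events)
        {
            await _sns.PublishAsync("<SNS_TOPIC_ARN>", evt.Message);
        }
    }
}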

Similarly, I have configured my servers to send an email alert as soon as an SSH login happens. I access my Linux servers from within the AWS website using EC2 Instance Connect rather than direct SSH. In other words, anyone who wants to access my Linux servers has to first log into the AWS console using YubiKey Bio – so no one else can log in as of now.
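One common way to implement the SSH alert (a sketch, not necessarily my exact setup) is a pam_exec hook: add the line "session optional pam_exec.so /usr/local/bin/ssh-alert.sh" to /etc/pam.d/sshd, with a script along these lines (the recipient address is a placeholder and a working mail command is assumed):

#!/bin/bash
# /usr/local/bin/ssh-alert.sh - invoked by pam_exec for every SSH session event.
# pam_exec exposes PAM_TYPE, PAM_USER and PAM_RHOST as environment variables.
if [ "$PAM_TYPE" = "open_session" ]; then
  echo "SSH login: $PAM_USER from $PAM_RHOST on $(hostname) at $(date)" \
    | mail -s "SSH login alert" admin@example.com
fi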

I can provide fuller code samples for the above 2 activities in a later blog post.

TimeSlices:

Earlier, I mentioned a concept – TimeSlices. I don't need all logs forever; if I want a certain log stream during a certain period, only those logs should be retained.

Similarly, another nice-to-have feature would be the ability to configure different retention periods for different types of logs. For example, remove Trace entries after x days, Debug after y days, and Info after z days, while retaining Warn, Error, and Critical for a longer period.

I am hoping this blog post helps someone. If anyone needs any help with architecting, planning, designing, or developing for horizontal and vertical scalability, or wants help with centralized logging or enterprise search using Solr or ElasticSearch, or wants to reduce costs by rightsizing, please do contact me. I offer a free consultation, and we can agree on the work that needs to be performed and the pricing.


How to mount EFS on EC2 Ubuntu instances and automount on reboot

EFS stands for Elastic File System. EFS is a network filesystem where data persists independently of any single instance and can be accessed by several different EC2 instances.

In pursuit of having my own crash-resistant, tamper-proof, immutable logs (and storage for other future sensitive information), I wanted to leverage EFS at my startup, ALight Technology And Services Limited.

This article does not discuss EFS in depth, i.e., throughput types, Standard vs. One Zone, etc. This article is simply about how to mount EFS and automount it on reboot.

  1. Create the EFS in the region where you need. My current datacenter is in London, United Kingdom because my company is registered in London, United Kingdom (Once again my sincere respect and gratitude for the Government of United Kingdom)
  2. In your EC2 security groups, allow port 2049 (NFS), and attach the EC2 instances' security groups in the networking section of the EFS.
  3. Install the required software
sudo apt install nfs-common -y
# (nfs-common provides the NFS client on Ubuntu; nfs-utils is the RHEL/Amazon Linux equivalent)

4. The command for mounting can be found in the EFS console: click the file system's name, click "Attach", and you will get the instructions:

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <FS_ID>.efs.aws-region.amazonaws.com:/ <YOUR_MOUNT_POINT>

5. Copy some files to verify that the mount works.

6. Edit /etc/fstab and add the following line

file_system_id.efs.aws-region.amazonaws.com:/ mount_point nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0

Update file_system_id, aws-region, and mount_point in the line above as per your configuration.

7. Reboot

8. Verify that the filesystem got mounted after the reboot, for example:
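A quick check (the EFS mount from step 4 should appear in the output):

df -h -t nfs4
mount | grep nfs4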

Reference List

Mounting on Amazon EC2 Linux instances using the EFS mount helper – Amazon Elastic File System (no date). Available at: https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-helper-ec2-linux.html (Accessed: February 1, 2023).

Using NFS to automatically mount EFS file systems – Amazon Elastic File System (no date). Available at: https://docs.aws.amazon.com/efs/latest/ug/nfs-automount-efs.html (Accessed: February 1, 2023).