Tuesday 15 June 2021

Prometheus & Grafana dashboard setup

Resource to set up Prometheus:

--------------------

https://itnext.io/using-prometheus-in-azure-kubernetes-service-aks-ae22cada8dd9



To open Grafana dashboard: (userid/password : admin/admin)

---------------------------

kubectl --namespace monitoring port-forward $(kubectl get pod --namespace monitoring -l app=kube-prometheus-grafana -o template --template "{{(index .items 0).metadata.name}}") 3000:3000


(OR)


kubectl get pod --namespace monitoring -l app=kube-prometheus-grafana

kubectl port-forward -n monitoring kube-prometheus-grafana-8669b7f999-xnq7q  3000:3000

http://localhost:3000/


To open Prometheus dashboard: (Prometheus is the data source; Grafana is used to show it in the UI)

------------------------------

kubectl --namespace monitoring port-forward $(kubectl get pod --namespace monitoring -l prometheus=kube-prometheus -l app=prometheus -o template --template "{{(index .items 0).metadata.name}}") 9090:9090


Create keys - Public and Private - OpenSSL

 kubectl create secret tls ambassador-certs --cert=emea_com.pem --key=emea_com.key


Private key:

------------

openssl pkcs12 -in emea_com.pfx -nocerts -nodes -out emea_com1.key


Public key:

------------

openssl pkcs12 -in emea_com.pfx -nokeys -out emea_com1.pem


chmod 664 emea_com.key emea_com.pem

Terms - words

superficial

transient

GIT force commit

 git clone <repo url>

git status

If you see any local changes, stash them: git stash

git fetch --all

git branch -a  (in this u should see qa2.0 branch)

git checkout qa2.0

git reset --hard origin/dev2.0

git status

git gui - do force commit here

Kafka Connect basic - including setup and run

 C:\kafka_2.13-2.8.0\bin\windows>zookeeper-server-start.bat C:\kafka_2.13-2.8.0\config\zookeeper.properties

C:\kafka_2.13-2.8.0\bin\windows>kafka-server-start.bat C:\kafka_2.13-2.8.0\config\server.properties

C:\mysql-8.0.25-winx64\bin>mysqld --console

C:\mysql-8.0.25-winx64\bin>mysql --user=root --password=t3r6e*OjnQRa

C:\kafka_2.13-2.8.0\bin\windows>kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic connect-test


ALTER USER 'root'@'localhost' IDENTIFIED BY 'password123';



To find the process listening on a port: netstat -ano | findstr :8080

To kill that process: taskkill /PID <PROCESSID> /F


Supported Jar files

====================


Create a folder kafka-connect-jdbc in C:\kafka_2.13-2.8.0\confluent-6.1.1\share\java, and copy the two jars below into kafka-connect-jdbc


https://dev.mysql.com/downloads/connector/j/     ===> select 'Platform Independent' option and download the jar

https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc ==> Download this jar


To find the number of messages in a topic:

==================================================

C:\kafka_2.13-2.8.0\bin\windows>kafka-run-class kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic myTopic
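GetOffsetShell prints one `topic:partition:offset` line per partition; the topic's message count is the sum of those end offsets (assuming retention or compaction has not removed records). A small sketch of summing that output - the sample lines used below are hypothetical:

```java
import java.util.List;

class OffsetSum {
    // Parse "topic:partition:offset" lines and sum the end offsets.
    static long totalMessages(List<String> lines) {
        long total = 0;
        for (String line : lines) {
            String[] parts = line.trim().split(":");
            total += Long.parseLong(parts[2]); // third field is the partition's end offset
        }
        return total;
    }
}
```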


To view the Kafka cluster details

==================================

Download the Kafdrop project from GitHub and build it.

java -jar kafdrop-2.1.0.jar --zookeeper.connect=localhost:2181 --kafka.brokers=localhost:9092

Open the browser and launch the URL as mentioned on the Kafdrop GitHub page



connect-standalone.properties

================================


bootstrap.servers=localhost:9092

key.converter=org.apache.kafka.connect.json.JsonConverter

value.converter=org.apache.kafka.connect.json.JsonConverter

key.converter.schemas.enable=true

value.converter.schemas.enable=true

offset.storage.file.filename=C:/kafka_2.13-2.8.0/kafkaconnect-Standalone-offset/connect.offsets

offset.flush.interval.ms=10000

plugin.path=C:/kafka_2.13-2.8.0/confluent-6.1.1/share/java


connect-file-source.properties

===============================


name=local-file-source

connector.class=FileStreamSource

tasks.max=3

file=C:/kafka_2.13-2.8.0/testFileStreamData/test.txt

topic=connect-test


connect-jdbc-source.properties

===============================


name=test-source-mysql-jdbc-kafka-connect

connector.class=JdbcSourceConnector

tasks.max=1

connection.url=jdbc:mysql://localhost:3306/kafka_database?user=root&password=password123

mode=incrementing

incrementing.column.name=id

catalog.pattern=kafka_database

table.whitelist=user_entity

topic.prefix=connect-test


C:\kafka_2.13-2.8.0\bin\windows>connect-standalone.bat C:\kafka_2.13-2.8.0\confluent-6.1.1\etc\kafka\connect-standalone.properties C:\kafka_2.13-2.8.0\confluent-6.1.1\etc\kafka\connect-file-source.properties


C:\kafka_2.13-2.8.0\bin\windows>connect-standalone.bat C:\kafka_2.13-2.8.0\confluent-6.1.1\etc\kafka\connect-standalone.properties C:\kafka_2.13-2.8.0\confluent-6.1.1\etc\kafka-connect-jdbc\connect-jdbc-source.properties

Monday 14 June 2021

Basic estimation factors

Efforts includes : 

-------------------

TRQ Evaluate

LLD Preparation

CUT (Code and Unit Testing)

CodeReview

Rework

Run SonarQube & Fix

External Code Review

External Rework

Integration Dev Test Rework

Integration Testing  with external systems

SAST/DAST (Critical/High/Medium)

DevOps (Release)

Friday 11 June 2021

To export a jar file to an artifacts repo (maven command/settings.xml)

 mvn deploy:deploy-file -DgroupId=com.pom.group.id -DartifactId=POMArtifactid -Dversion=2.1.9 -Dpackaging="jar" -Durl=https://artifacts-url/maven/v1 -Dfile="JarfileToBeExported-2.1.9.jar" -DrepositoryId=reposerver-id

To use this jar file:

<dependency>
<groupId>com.pom.group.id</groupId>
<artifactId>POMArtifactid</artifactId>
<version>2.1.9</version>
</dependency>
Settings.xml:
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0                           https://maven.apache.org/xsd/settings-1.0.0.xsd">
    <localRepository/>
   <interactiveMode />
   <usePluginRegistry />
   <offline />
   <pluginGroups />
   <servers>
      <server>
         <id>reposerver-id</id>
         <username>username</username>
         <password>password</password>
      </server>
   </servers>
   <profiles>
     
    <profile>
      <id>distributionManagement</id>
      <repositories>
        <repository>
          <id>reposerver-id</id>
          <url>https://artifacts-url/maven/v1</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
	 
   
   <activeProfiles>
      <activeProfile>distributionManagement</activeProfile>
   </activeProfiles>
   <mirrors />
   <proxies />  
   
</settings>


Thursday 25 March 2021

@Transactional management internals (Proxy pattern)

How exactly @Transactional works:

Earlier we used to start, commit, and roll back transactions programmatically. With the introduction of the @Transactional annotation life became easier: all the underlying transaction management is handled by the framework.

@Service
public class EmployeeService {

    @Transactional
    public void saveEmployee(Employee employee) {
        dao.saveEmployee(employee);
    }
}

Whichever transaction manager is configured (for example HibernateTransactionManager or JpaTransactionManager) handles the underlying transaction.


Internally, Spring creates a proxy for the EmployeeService class. This proxy is what handles the transaction management activities, including exception handling and rollback: by default it rolls back on unchecked exceptions and commits otherwise.
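A minimal sketch of what that framework-generated proxy does, using a JDK dynamic proxy. The TxProxy class and the begin/commit/rollback prints are illustrative stand-ins, not Spring's actual classes:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface EmployeeService {
    String saveEmployee(String employee);
}

class EmployeeServiceImpl implements EmployeeService {
    public String saveEmployee(String employee) {
        return "saved " + employee;
    }
}

// Stand-in for the proxy Spring generates around a @Transactional bean:
// begin a transaction, delegate to the real method, commit on success,
// roll back when the method throws.
class TxProxy implements InvocationHandler {
    private final Object target;

    TxProxy(Object target) { this.target = target; }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("begin transaction");          // tx manager starts the tx
        try {
            Object result = method.invoke(target, args);  // the actual service call
            System.out.println("commit transaction");     // commit on success
            return result;
        } catch (Exception e) {
            System.out.println("rollback transaction");   // roll back on failure
            throw e.getCause();
        }
    }

    static EmployeeService wrap(EmployeeService target) {
        return (EmployeeService) Proxy.newProxyInstance(
                EmployeeService.class.getClassLoader(),
                new Class<?>[] { EmployeeService.class },
                new TxProxy(target));
    }
}
```

The caller holds the proxy, not the real bean - which is also why a @Transactional method invoked from within the same class bypasses the transaction: the self-call never goes through the proxy.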








Java interface - Diamond Problem

public interface InterfaceEx {
    default void name() {}   // default method
    static void name6() {}
}

interface InterfaceEx1 {
    default void name() {}   // default method
    static void name6() {}
}

// If a class implements 2 or more interfaces that declare a default method
// with the same signature, the compiler forces the class to override that method.
// This is how the diamond problem is resolved.
class ClassEx implements InterfaceEx, InterfaceEx1 {

    @Override
    public void name() {
    }
}
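The overriding class can also pick one of the inherited defaults explicitly with InterfaceName.super.method(). A small illustrative example (interfaces A and B and class C are made up here, returning Strings so the resolution is visible):

```java
interface A {
    default String greet() { return "from A"; }
}

interface B {
    default String greet() { return "from B"; }
}

// C must override greet(); here it delegates to A's default explicitly.
class C implements A, B {
    @Override
    public String greet() {
        return A.super.greet(); // pick A's default instead of writing a new body
    }
}
```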

Tuesday 23 March 2021

Spring AOP example

 import com.fasterxml.jackson.databind.ObjectMapper;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

@Component
@Aspect
public class ControllerAOP {

// execution expression: <access modifier> <return type> <package.ClassName.methodName(arguments)>
// example: execution(* com.some.packagename.ClassName.methodName(..))
@Pointcut(value = "execution(public * com.api.*.*.*(..))")
public void controllerPointCut() {}

// A pointcut on an annotation, for example on the 'Service' annotation
@Pointcut("@within(org.springframework.stereotype.Service)")
public void serviceAnnotationPointCut() {}

@Around(value = "controllerPointCut()")
public Object myAround(ProceedingJoinPoint jp) throws Throwable {

ObjectMapper objectMapper = new ObjectMapper();

System.out.println("Method name :" + jp.getSignature().getName()); // to get method name
System.out.println("Class name : " + jp.getTarget().getClass().getName()); // to get class name
System.out.println("Arguments : " + objectMapper.writeValueAsString(jp.getArgs())); //to get arguments

Object obj = jp.proceed();

System.out.println("Response : " + objectMapper.writeValueAsString(obj)); //to get response body

return obj;
}

@Around(value = "serviceAnnotationPointCut()")
public Object serviceAnnotationAround(ProceedingJoinPoint jp) throws Throwable {

ObjectMapper objectMapper = new ObjectMapper();

System.out.println("Method name :" + jp.getSignature().getName()); // to get method name
System.out.println("Class name : " + jp.getTarget().getClass().getName()); // to get class name
System.out.println("Arguments : " + objectMapper.writeValueAsString(jp.getArgs())); //to get arguments

Object obj = jp.proceed();

System.out.println("Response : " + objectMapper.writeValueAsString(obj)); //to get response body

return obj;
}

}

Friday 19 March 2021

resilience4j circuitBreaker lifecycle

CircuitBreaker has three normal states: CLOSED, OPEN and HALF_OPEN

CLOSED : Everything is normal; all requests and responses pass through

OPEN : The CB opens once the configured number of failures is reached, and calls are rejected

HALF_OPEN : After waiting for the configured duration in the OPEN state, the CB moves to HALF_OPEN and allows a limited number of trial calls. If those calls succeed, the CB moves back to CLOSED; if any of them fails, it returns to OPEN. The CB does not return to CLOSED until the HALF_OPEN attempts pass.

 

Below are the sample CB configurations:


ringBufferSizeInClosedState: 5  // CB stays CLOSED until 5 consecutive calls fail, then moves to OPEN

waitDurationInOpenState: 10s // CB waits 10 seconds in the OPEN state before moving to HALF_OPEN

ringBufferSizeInHalfOpenState: 2 // CB allows 2 trial calls in the HALF_OPEN state; if they succeed, CB moves to CLOSED, otherwise back to OPEN
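The transitions these settings describe can be sketched as a small state machine. This is an illustration only, not resilience4j's implementation; the constructor parameters loosely mirror the three settings above:

```java
import java.time.Duration;
import java.time.Instant;

class SimpleCircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private int halfOpenSuccesses = 0;
    private Instant openedAt;
    private final int failureThreshold;  // cf. ringBufferSizeInClosedState
    private final Duration waitInOpen;   // cf. waitDurationInOpenState
    private final int halfOpenAttempts;  // cf. ringBufferSizeInHalfOpenState

    SimpleCircuitBreaker(int failureThreshold, Duration waitInOpen, int halfOpenAttempts) {
        this.failureThreshold = failureThreshold;
        this.waitInOpen = waitInOpen;
        this.halfOpenAttempts = halfOpenAttempts;
    }

    State state() {
        // Wait duration in OPEN has elapsed: allow trial calls.
        if (state == State.OPEN && Instant.now().isAfter(openedAt.plus(waitInOpen))) {
            state = State.HALF_OPEN;
            halfOpenSuccesses = 0;
        }
        return state;
    }

    boolean allowsRequest() { return state() != State.OPEN; }

    void recordSuccess() {
        // Enough trial calls passed in HALF_OPEN: close the breaker again.
        if (state() == State.HALF_OPEN && ++halfOpenSuccesses >= halfOpenAttempts) {
            state = State.CLOSED;
        }
        failures = 0;
    }

    void recordFailure() {
        if (state() == State.HALF_OPEN) {
            state = State.OPEN;              // a trial call failed: back to OPEN
            openedAt = Instant.now();
        } else if (++failures >= failureThreshold) {
            state = State.OPEN;              // too many consecutive failures
            openedAt = Instant.now();
        }
    }
}
```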