5/27/2020

Duration vs Period in Java 8

Both classes can be used to represent an amount of time or to measure the difference between two temporal values. How do we use each in different situations?

  • date-based value (Period): check whether the number of days in a period exceeds a configured threshold (see the sketch after this list for a common gotcha).

    LocalDate localDate = java.time.LocalDate.parse(matcher.group(1), DATE_TIME_FORMATTER);
    // Note: Period.getDays() returns only the day component (e.g. 1 month 2 days -> 2),
    // so use ChronoUnit.DAYS.between() to get the total number of days.
    long days = java.time.temporal.ChronoUnit.DAYS.between(localDate, java.time.LocalDate.now());
    return (days >= 5);
  • time-based value (Duration): check an interval of time in seconds or nanoseconds.

    Duration.between(startTime, LocalTime.now()).getSeconds();
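
A minimal, self-contained sketch (the 40-day offset is a made-up example) showing the Period.getDays() gotcha next to a Duration measurement:

import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.Period;
import java.time.temporal.ChronoUnit;

public class PeriodVsDuration {
    public static void main(String[] args) {
        // A date 40 days ago crosses a month boundary, exposing the gotcha.
        LocalDate start = LocalDate.now().minusDays(40);

        // getDays() returns only the day component of "1 month + N days".
        System.out.println("Day component: " + Period.between(start, LocalDate.now()).getDays());

        // ChronoUnit.DAYS.between() returns the total day count (40).
        System.out.println("Total days: " + ChronoUnit.DAYS.between(start, LocalDate.now()));

        // Duration is time-based: seconds elapsed since midnight.
        System.out.println("Seconds since midnight: "
                + Duration.between(LocalTime.MIDNIGHT, LocalTime.now()).getSeconds());
    }
}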

Ref

Java Period and Duration

5/25/2020

Redpill

Redpill is a side project of mine that visualizes financial statement analysis: the process of analyzing a company's financial statements for decision-making purposes. I use it to UNDERSTAND the overall health of an organization as well as to evaluate its financial performance and business value.

Here are some technologies that can help build it:

  1. JDK 11 - Modules, JavaFX, HttpClient...
  2. Apache Spark - a unified analytics engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing.
  3. Integration of Spring Boot and JavaFX.
  4. Spring-based development.
  5. Hexagonal architecture (ports and adapters), an architectural pattern that aims at creating loosely coupled application components which can be easily connected to different software environments (see the sketch after this list).
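
To make the ports-and-adapters idea concrete, here is a minimal Java sketch; all the names (FinancialStatementPort, HealthCheckService, etc.) are hypothetical, not actual Redpill code:

import java.util.List;

// Driven (outbound) port: the domain's view of where statements come from.
interface FinancialStatementPort {
    List<Double> quarterlyRevenues(String ticker);
}

// Driving (inbound) port: the use case the outside world calls.
interface HealthCheckUseCase {
    boolean isRevenueGrowing(String ticker);
}

// The core service depends only on ports, never on Spark, JavaFX, or Spring.
class HealthCheckService implements HealthCheckUseCase {
    private final FinancialStatementPort statements;

    HealthCheckService(FinancialStatementPort statements) {
        this.statements = statements;
    }

    @Override
    public boolean isRevenueGrowing(String ticker) {
        List<Double> revenues = statements.quarterlyRevenues(ticker);
        // Naive check: the latest quarter beat the previous one.
        return revenues.size() >= 2
                && revenues.get(revenues.size() - 1) > revenues.get(revenues.size() - 2);
    }
}

// An adapter plugs into the port; a Spark-backed adapter could replace this
// one without touching the core.
class InMemoryStatementAdapter implements FinancialStatementPort {
    @Override
    public List<Double> quarterlyRevenues(String ticker) {
        return List.of(100.0, 110.0, 125.0); // stub data for the sketch
    }
}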

It's still a draft version and will be updated as new ideas come along.

1/05/2020

Kafka Topic Partitions

In Kafka, a topic is a category/feed name to which messages are stored and published. Further, Kafka breaks topic logs up into partitions, which is the interesting part here.

From the Kafka documentation:

Each partition is an ordered, immutable sequence of records that is continually appended to—a structured commit log. The records in the partitions are each assigned a sequential id number called the offset that uniquely identifies each record within the partition.

The partitions in the log serve several purposes. First, they allow the log to scale beyond a size that will fit on a single server. Each individual partition must fit on the servers that host it, but a topic may have many partitions so it can handle an arbitrary amount of data. Second they act as the unit of parallelism—more on that in a bit.

To significantly increase the throughput and performance of message handling, consider having multiple partitions consumed by multiple pods (instances). Here are some simple Kafka commands to simulate this case.

Alter the topic to use two partitions

kafka-topics --alter --zookeeper localhost:2181 --topic alarm --partitions 2

Describe the alarm topic to verify its partition count

kafka-topics --describe --zookeeper localhost:2181 --topic alarm

Show the current message count per partition

kafka-run-class kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic alarm

Send messages with different keys

kafka-console-producer --broker-list localhost:9092 --topic alarm --property "parse.key=true" --property "key.separator=:"
>k1:message1
>k2:message2
>k3:message3
>...

Consumers can each be assigned a specific partition number (0, 1, ...) to handle messages

kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic alarm --property print.key=true --partition 0

kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic alarm --property print.key=true --partition 1
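
For the same pattern in application code, here is a minimal Java consumer sketch (not actual project code) pinned to one partition, mirroring the --partition flag above; a second pod would assign partition 1:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AlarmPartitionConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() pins this instance to partition 0 of "alarm";
            // with manual assignment no consumer group is needed.
            consumer.assign(Collections.singletonList(new TopicPartition("alarm", 0)));
            consumer.seekToBeginning(consumer.assignment()); // like --from-beginning
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("key=%s value=%s partition=%d%n",
                            r.key(), r.value(), r.partition());
                }
            }
        }
    }
}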

12/20/2019

Configuring Authentication with Kerberos in Kafka

One day the R&D team was asked to configure authentication with Kerberos to simulate our customer's environment, so that QA could rely on it to verify the incoming messages. As we know, Kerberos security is an optional feature of Kafka; normally we don't need it inside an intranet network or zone.

What's Kerberos

Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography. A free implementation of this protocol is available from the Massachusetts Institute of Technology. Kerberos is available in many commercial products as well. [ref]

Prerequisite

Kerberos, Kafka and ZooKeeper are installed on the same host using the same domain. For a first attempt, don't try to set them up on different hosts with different domains.
A Kerberos realm is the domain over which a Kerberos authentication server has the authority to authenticate a user, host or service.
All hosts must be reachable using hostnames; it is a Kerberos requirement that all your hosts can be resolved with their FQDNs.

Configure

Step 1: Prepare keytab for kafka/client/zookeeper

Use these two commands to create a principal and export its keytab to a file.

sudo /usr/sbin/kadmin.local -q 'addprinc -randkey {principal}/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /tmp/keytabs/{keytabname}.keytab {principal}/{hostname}@{REALM}"

Kafka

sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/mydomain.com@SUPER_HERO'
sudo /usr/sbin/kadmin.local -q "ktadd -k /tmp/keytabs/kafka.keytab kafka/mydomain.com@SUPER_HERO"

ZooKeeper

sudo /usr/sbin/kadmin.local -q 'addprinc -randkey zookeeper/mydomain.com@SUPER_HERO'
sudo /usr/sbin/kadmin.local -q "ktadd -k /tmp/keytabs/zookeeper.keytab zookeeper/mydomain.com@SUPER_HERO"

Kafka client

sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka_client/mydomain.com@SUPER_HERO'
sudo /usr/sbin/kadmin.local -q "ktadd -k /tmp/keytabs/kafka_client.keytab kafka_client/mydomain.com@SUPER_HERO"

Step 2: Configure Kafka and ZooKeeper

Configure server.properties for Kafka:
vim server.properties

...
# Bind the SASL_PLAINTEXT protocol on port 9094
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://mydomain.com:9094 
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://mydomain.com:9094
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

## inter-broker communication with SASL_PLAINTEXT
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.kerberos.service.name=kafka
...
zookeeper.connect=mydomain.com:2181
...

Prepare JDK's Kerberos Requirements (/usr/local/etc/kafka/security/krb5.conf)

[libdefaults]
default_realm = SUPER_HERO
forwardable = true
kdc_timeout = 3000
dns_lookup_kdc = false
dns_lookup_realm = false
[realms]
SUPER_HERO = {
  kdc = mydomain.com
  admin_server = mydomain.com
}
[domain_realm]
.mydomain.com = SUPER_HERO
mydomain.com = SUPER_HERO

Step 2.1: Startup Zookeeper

Prepare ZooKeeper's JAAS file (/usr/local/etc/kafka/security/zookeeper_jaas.conf)

Server {
  com.sun.security.auth.module.Krb5LoginModule required 
  debug=true
  useKeyTab=true
  keyTab="/usr/local/etc/kafka/security/zookeeper.keytab" <--your zookeper keytab path
  storeKey=true
  useTicketCache=false
  principal="zookeeper/mydomain.com@SUPER_HERO";
};

Export KAFKA_HEAP_OPTS for the ZooKeeper process. MUST ENABLE sun.security.krb5.debug mode; otherwise it's super hard to find the cause of failures.

export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/usr/local/etc/kafka/security/krb5.conf -Djava.security.auth.login.config=/usr/local/etc/kafka/security/zookeeper_jaas.conf  -Dsun.security.krb5.debug=true"

Start the ZooKeeper process:

zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties

Step 2.2: Startup Kafka

Prepare Kafka's JAAS file (/usr/local/etc/kafka/security/kafka_server_jaas.conf)

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    debug=true
    serviceName="kafka"
    keyTab="/usr/local/etc/kafka/security/kafka.keytab"
    principal="kafka/mydomain.com@SUPER_HERO";
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  debug=true
  useKeyTab=true
  storeKey=true
  keyTab="/usr/local/etc/kafka/security/kafka.keytab"
  principal="kafka/mydomain.com@SUPER_HERO";
};

Export KAFKA_HEAP_OPTS for the Kafka process. MUST ENABLE sun.security.krb5.debug mode; otherwise it's super hard to find the cause of failures.

export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/usr/local/etc/kafka/security/krb5.conf -Djava.security.auth.login.config=/usr/local/etc/kafka/security/kafka_server_jaas.conf -Dsun.security.krb5.debug=true"

Start the Kafka process:

kafka-server-start /usr/local/etc/kafka/server.properties

If EVERYTHING IS FINE, the Kafka log looks like this:

Added key: 16version: 2
Added key: 23version: 2
Added key: 18version: 2
Using builtin default etypes for default_tkt_enctypes
default etypes for default_tkt_enctypes: 18 17 16 23.
>>> EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType
>>> KrbAsReq creating message
>>> KrbKdcReq send: kdc=mydomain.com UDP:88, timeout=3000, number of retries =3, #bytes=281
>>> KDCCommunication: kdc=mydomain.com UDP:88, timeout=3000,Attempt =1, #bytes=281
>>> KrbKdcReq send: #bytes read=815
>>> KdcAccessibility: remove mydomain.com
Looking for keys for: kafka/mydomain.com@SUPER_HERO
Found unsupported keytype (1) for kafka/mydomain.com@SUPER_HERO
Added key: 16version: 2
Added key: 23version: 2
Added key: 18version: 2
>>> EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType
>>> KrbAsRep cons in KrbAsReq.getReply kafka/mydomain.com
principal is kafka/mydomain.com@SUPER_HERO
Will use keytab
Commit Succeeded

[2019-12-21 12:13:43,633] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2019-12-21 12:13:43,634] INFO [Principal=kafka/mydomain.com@SUPER_HERO]: TGT refresh thread started. (org.apache.kafka.common.security.kerberos.KerberosLogin)

Step 2.3: Kafka Client

Prepare client.properties, which tells the client to use SASL_PLAINTEXT.

security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
sasl.mechanism=GSSAPI

Prepare the client's JAAS file (/Users/chliu/temp/qa_kafka/jaas.conf)

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    debug=true
    keyTab="/usr/local/etc/kafka/security/kafka-client.keytab"
    principal="kafka-client/mydomain.com@SUPER_HERO";
};

Export KAFKA_OPTS for the client:

export KAFKA_OPTS="-Djava.security.auth.login.config=/Users/chliu/temp/qa_kafka/jaas.conf -Djava.security.krb5.conf=/usr/local/etc/kafka/security/krb5.conf -Dsun.security.krb5.debug=true"

Send a message to the test topic.

kafka-console-producer  --broker-list mydomain.com:9094 --topic test --producer.config client.properties
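
The Java equivalent of that producer is a short sketch (not actual project code; it reuses the paths and principal assumed above):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KerberizedProducer {
    public static void main(String[] args) {
        // Same JVM flags as the KAFKA_OPTS export above.
        System.setProperty("java.security.krb5.conf", "/usr/local/etc/kafka/security/krb5.conf");
        System.setProperty("java.security.auth.login.config", "/Users/chliu/temp/qa_kafka/jaas.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", "mydomain.com:9094");
        // Same settings as client.properties.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "hello with kerberos"));
        }
    }
}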

10/29/2019

Change log level at runtime for logback

Ops cares a lot about the logging we store in Elasticsearch, since it impacts performance. In the past we reduced logging by default, but sometimes devs still need debug-level output to troubleshoot incidents in production. Here is a way to change the log level, based on logback's JMX feature, that doesn't require rebuilding the project.

Enable JMX in logback configuration

The jmxConfigurator element needs to be added to the logback configuration.

<configuration>

  <jmxConfigurator />
  
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.classic.PatternLayout">
      <Pattern>%date [%thread] %-5level %logger{25} - %msg%n</Pattern>
    </layout>
  </appender>

  <root level="debug">
    <appender-ref ref="console" />
  </root>  
</configuration>

The logback JMX setup can be verified with jconsole.

Use jmxterm, an interactive command-line JMX client, to access it

Welcome to JMX terminal. Type "help" for available commands.
$>open 2767  <--process id
#Connection to 2767 is opened
$>domains
#following domains are available
JMImplementation
ch.qos.logback.classic
com.sun.management
java.lang
java.nio
java.util.logging
kafka
kafka.consumer
kafka.producer
$>domain ch.qos.logback.classic
#domain is set to ch.qos.logback.classic
$>bean ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator
#bean is set to ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator
$>run setLoggerLevel com.abc.cde.consumer.MessageRequestConsumer DEBUG  
#calling operation setLoggerLevel of mbean ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator with params [com.abc.cde.consumer.MessageRequestConsumer, DEBUG]
#operation returns:
null
$>close
#disconnected

jmxterm supports a silent (non-interactive) mode, so we can easily implement a shell script for special handling, as in the example below.

#!/bin/bash
file=jmxterm-1.0.0-uber.jar
if [ ! -f "$file" ]; then
    wget https://github.com/jiaqi/jmxterm/releases/download/v1.0.0/jmxterm-1.0.0-uber.jar
fi
processId=`jps -lvm | grep Bootstrap | awk '{print $1}'`
echo "Java App processId; $processId ; fm logging level to ;$1"
cat <<EOF > jmxcommands
open $processId
bean ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator
run setLoggerLevel com.abc.cdef.Parser $1
close
EOF
java -jar "$file" -n < jmxcommands
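
The same operation can also be invoked from Java via the standard JMX remote API; a minimal sketch, assuming the target app exposes remote JMX on port 9999 (the URL, port, and logger name here are hypothetical):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LogLevelChanger {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName configurator = new ObjectName(
                    "ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator");
            // The same setLoggerLevel operation jmxterm invoked above.
            mbs.invoke(configurator, "setLoggerLevel",
                    new Object[] {"com.abc.cde.consumer.MessageRequestConsumer", "DEBUG"},
                    new String[] {"java.lang.String", "java.lang.String"});
        }
    }
}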


8/21/2019

[Ref] Data Classes Considered Harmful

In reality, there ain't no such thing as a free lunch when you use data classes via Project Lombok.

Data Classes Considered Harmful

6/03/2019

Linux ate ram as disk cache

One day QA asked why memory was really low on the QA Linux server. I really wasn't sure; the question was interesting to me because, as far as I knew, our Java applications are always configured with memory limits, so a low-memory issue shouldn't happen.

Checking the output of the "free -m" command: if you just naively look at "used" and "free", you'll think your RAM is 98/99% full.

Googling that situation leads to linuxatemyram. What's going on?
Linux is borrowing unused memory for disk caching. This makes it look like you are low on memory, but everything is fine. Using otherwise unused memory for disk caching makes the system much faster and more responsive.

If you read "linux ate my ram" carefully: don't panic, your RAM is fine. ^^!
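
To check this programmatically, here is a small sketch (Linux only) that reads /proc/meminfo; MemAvailable accounts for reclaimable cache, so it is the number to watch rather than MemFree:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class MemCheck {
    public static void main(String[] args) throws IOException {
        try (Stream<String> lines = Files.lines(Paths.get("/proc/meminfo"))) {
            // MemFree looks scarily low; MemAvailable includes cache the
            // kernel will reclaim for applications on demand.
            lines.filter(l -> l.startsWith("MemTotal") || l.startsWith("MemFree")
                           || l.startsWith("MemAvailable") || l.startsWith("Cached"))
                 .forEach(System.out::println);
        }
    }
}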