Love of Python: My first Python script

My love of Python and Scala grows stronger every day, so I thought I would post a Python script here: a sample script that performs some common configurations we need when deploying applications to production, test, or automation servers. I am going to use an updated version of this script in my project to save approximately 15-20 minutes of manual configuration work per service deployment.

This Python script performs the following tasks:
(1) Add a new configuration
(2) Update existing configurations
(3) Run native OS (Unix) commands on the server, plus a few other tasks

You can customize this script to suit your project's requirements. The script is available in my GitLab repository. I would love to incorporate any reader's recommendation for a specific change or requirement in this script.

PS: This script is an initial draft; I will update it with more useful commands and comments to make it more user-friendly.

How to achieve parallel code execution without user threads

What comes to mind when you have to run a piece of code in parallel? You would create some threads to do it, correct? But that is not the only way to achieve it. Recently I came across an interesting requirement where I had to run a piece of API code across three parallel calls (without threads). My approach is nothing new; many of you might be doing it already, and others might not have noticed it while maintaining legacy code 🙂.

Well, this approach is nothing but a combination of bean loading in the Spring container, execution of the mapped Java class, and hooking an API in to support parallelism. When the Spring container loads an annotated/configured bean, it executes the Java method associated with it. Using this concept, we can define and load the same bean under three different ids to execute the same method (the same runnable implementation) rather than doing it through three different threads.


<beans profile="cluster">
    <bean id="mongoQueryServList" class="com.skilledminds.thirdPartyService.configure.db.connectionProvider.cacheQueryProcessor"/>
</beans>
<beans profile="cluster">
    <bean id="cassandraQueryServList" class="com.skilledminds.thirdPartyService.configure.db.connectionProvider.cacheQueryProcessor"/>
</beans>
<beans profile="no_cluster">
    <bean id="graphQueryServList" class="com.skilledminds.thirdPartyService.configure.db.connectionProvider.cacheQueryProcessor"/>
</beans>

The XML above simply replaces three threads for the "cacheQueryProcessor" task; to actually achieve parallelism or conditional task execution, the underlying method must be designed to meet the requirement. For more details on parallel/conditional task execution, please refer to Spring Batch Scaling and Parallel Processing or the Java executor framework (which is the base of every parallelism implementation).
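For comparison, the same three-way fan-out can be sketched with the executor framework mentioned above. The bean ids below are taken from the XML, but CacheQueryProcessor is a hypothetical stand-in for the real cacheQueryProcessor class:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelBeanDemo {

	// Hypothetical stand-in for the cacheQueryProcessor task class.
	static class CacheQueryProcessor implements Runnable {
		private final String beanId;
		private final AtomicInteger completed;

		CacheQueryProcessor(String beanId, AtomicInteger completed) {
			this.beanId = beanId;
			this.completed = completed;
		}

		@Override
		public void run() {
			// The same code runs once per "bean id", just as the three
			// bean definitions execute the same method three times.
			completed.incrementAndGet();
			System.out.println("Executed task for bean id: " + beanId);
		}
	}

	public static void main(String[] args) throws InterruptedException {
		AtomicInteger completed = new AtomicInteger();
		ExecutorService pool = Executors.newFixedThreadPool(3);
		// One submission per bean id from the XML configuration.
		for (String id : List.of("mongoQueryServList", "cassandraQueryServList", "graphQueryServList")) {
			pool.submit(new CacheQueryProcessor(id, completed));
		}
		pool.shutdown();
		pool.awaitTermination(5, TimeUnit.SECONDS);
		System.out.println("Completed: " + completed.get());
	}
}
```

The difference is only in who owns the threads: here the executor's pool runs the task, while in the bean-loading approach the container's startup sequence does.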

If you still want to take a deep dive into a similar implementation, take a look at the ongoing Spring Jira issue https://jira.spring.io/browse/SPR-8767

PS: For an experienced Java professional this may look very simple to do, but considering it in a bigger picture gives insight into an option for designing a utility tool/jar without heavy Spring Batch or similar APIs.

Handling a dynamic/unknown datasource in an application

Let’s see how we can deal with a requirement where the server-side input configuration or data source is not constant in your application, and furthermore the application’s memory footprint must be very low (e.g., a mobile product).

Once, for a big-data (web-crawler-based) product, we had a requirement where the input datasource was not fixed: the datasource was configurable based on customer choice. Since the product was a mobile application, we wanted to keep the application’s memory footprint as low as possible, so we did not use Spring, Hibernate, Guava, or similar frameworks for creating and maintaining a configuration container on the fly.
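To make the idea concrete, here is a minimal, framework-free sketch of a runtime configuration container. All names here (DataSourceRegistry, ConnectionProvider) are hypothetical illustrations, not the actual product design:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class DataSourceRegistry {

	// Hypothetical provider abstraction; a real product would wrap driver/JDBC specifics.
	interface ConnectionProvider {
		String describe();
	}

	// Factories registered per datasource type the customer may choose.
	private final Map<String, Supplier<ConnectionProvider>> factories = new ConcurrentHashMap<>();
	// Providers created lazily and cached, so only selected datasources cost memory.
	private final Map<String, ConnectionProvider> cache = new ConcurrentHashMap<>();

	public void register(String type, Supplier<ConnectionProvider> factory) {
		factories.put(type, factory);
	}

	public ConnectionProvider get(String type) {
		return cache.computeIfAbsent(type, t -> {
			Supplier<ConnectionProvider> f = factories.get(t);
			if (f == null) {
				throw new IllegalArgumentException("Unknown datasource: " + t);
			}
			return f.get();
		});
	}

	public static void main(String[] args) {
		DataSourceRegistry registry = new DataSourceRegistry();
		registry.register("mongo", () -> () -> "mongo connection");
		registry.register("cassandra", () -> () -> "cassandra connection");
		// Only the datasources actually requested are ever instantiated.
		System.out.println(registry.get("mongo").describe());
		System.out.println(registry.get("cassandra").describe());
	}
}
```

Lazy creation via computeIfAbsent keeps the footprint low: a datasource the customer never selects is never instantiated.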

Here is one basic design approach that I prepared and later improved to meet the exact requirement. Contact me if you need further details on such implementations. I will upload this design to SkilledMinds’ (http://skilledminds.in/) GitLab.

PS: Due to an agreement, I have not added the exact design.

Data Access layer Design


Who will dance with Nancy? Find out in the code below!

My next post is going to take a little more time, so I thought I would post the solution to this long-pending problem I posted on November 1, 2014.

public class DecideDancePair {

	DecideDancePair ddp1;
	StringBuffer ddp2 = new StringBuffer();

	public static void main(String[] args) {

		new Thread(new Runnable() {
			@Override
			public void run() {
				synchronized (DecideDancePair.class) {
					System.out.println("Its weekend .... Hey Guys !! this is Nancy , who will dance with me? ");
				}
			}
		}).start();

		new Thread(new Runnable() {
			@Override
			public void run() {
				synchronized (new DecideDancePair().ddp1) {
					System.out.println("Hey Nancy !! this is Peter , I will dance with you.");
				}
			}
		}).start();

		new Thread(new Runnable() {
			@Override
			public void run() {
				synchronized (new DecideDancePair().ddp2) {
					System.out.println("Hey Nancy !! this is Rozer , I will dance with you.");
				}
			}
		}).start();
	}
}

==============================

Solution:

Nancy will always get to dance because her thread locks an object that is not null (the DecideDancePair.class object). The same goes for Rozer: he also locks a non-null object (the ddp2 StringBuffer). Poor Peter, however, tries to lock the ddp1 field, which is never initialized and is therefore null, so that piece of code throws the exception below. Hence Nancy and Rozer will be dancing 🙂 !!!

The code execution will look like:


Its weekend .... Hey Guys !! this is Nancy , who will dance with me?
Exception in thread "Thread-1" Hey Nancy !! this is Rozer , I will dance with you.
java.lang.NullPointerException
	at BasicConcepts.DecideDancePair$2.run(DecideDancePair.java:26)
	at java.lang.Thread.run(Thread.java:745)

An Introduction to REST and a High Level Design of a REST API

What is REST and Why is it called Representational State Transfer?

REpresentational State Transfer (REST) is a style of software architecture for distributed systems such as the World Wide Web, and it has emerged as the predominant Web service design model.
The Web is composed of resources. A resource is any item of interest. For example, the Jet Aircraft Corp may define a 747 resource. Clients may access that resource with this URL:
http://www.jet.com/aircraft/747
A representation of the resource is returned (e.g., Jet747.html). The representation places the client application in a state. When the client traverses a hyperlink in Jet747.html, another resource is accessed, and the new representation places the client application into yet another state. Thus, the client application changes (transfers) state with each resource representation –> Representational State Transfer!
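The state-transfer idea can be sketched in a few lines of Java. The Representation type and its link map below are purely illustrative (the engine URL is a made-up example resource):

```java
import java.util.Map;

public class RestStateTransferDemo {

	// A representation is a document plus hyperlinks to further resources.
	record Representation(String resource, Map<String, String> links) {}

	public static void main(String[] args) {
		// GET http://www.jet.com/aircraft/747 returns Jet747.html,
		// which carries a link to a (hypothetical) engine resource.
		Representation jet747 = new Representation("Jet747.html",
				Map.of("engine", "http://www.jet.com/aircraft/747/engine"));

		// The client's state is the representation it currently holds.
		String clientState = jet747.resource();

		// Traversing a hyperlink transfers the client to a new state.
		String nextResource = jet747.links().get("engine");
		System.out.println("State: " + clientState + " -> following link: " + nextResource);
	}
}
```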

Below is one sample high-level REST API design that I created some years back. New learners of REST can use it as a reference.

Sample search REST API High level design

SparkJava multipart/form-data fileUpload

Spark Framework is a simple and lightweight Java web framework built for rapid development. With Spark it is possible to start a REST web server with a few lines of code. It is the Java port of Sinatra, the famous micro-framework written in Ruby. The purpose of this post is to explain how to handle a multipart/form-data file-upload requirement with Spark. In various dev communities I have seen people searching for a fix for file-upload failures in the SparkJava framework, so I thought I would share the fix that I found recently.


Spark.post("/files/upload/:userName", "multipart/form-data", new Route() {
	@Override
	public Object handle(Request request, Response response) {
		// process request
		String userID = request.params("userName");
		if (isValidUser(userID)) {
			// These two lines are the fix: register a MultipartConfigElement
			// so Jetty knows how to parse the multipart request.
			MultipartConfigElement multipartConfigElement = new MultipartConfigElement("data/tmp");
			request.raw().setAttribute("org.eclipse.jetty.multipartConfig", multipartConfigElement);

			Collection<Part> parts = null;
			try {
				parts = request.raw().getParts();
			} catch (IOException | ServletException e2) {
				e2.printStackTrace();
			}
			for (Part part : parts) {
				System.out.println("Name:" + part.getName());
				System.out.println("Size: " + part.getSize());
				System.out.println("Filename:" + part.getSubmittedFileName());
			}
			String fName = null;
			Part file = null;
			try {
				file = request.raw().getPart("fileToBeUploaded");
				fName = file.getSubmittedFileName();
			} catch (IOException | ServletException e1) {
				e1.printStackTrace();
			}
			// Copy the uploaded part to disk under the submitted file name.
			try (InputStream in = file.getInputStream()) {
				Files.copy(in, Paths.get("data/tmp", fName), StandardCopyOption.REPLACE_EXISTING);
			} catch (IOException e) {
				e.printStackTrace();
			}
			return "File " + fName + " uploaded for user " + userID;
		}
		response.status(401);
		return "Invalid user";
	}
});


Spark Exceptions

A list of exceptions and troubleshooting steps that I have encountered so far using Spark in my project.

Issue 1 # Exception in thread "main" org.apache.spark.SparkException: A master URL must be set in your configuration

Fix: It means you simply forgot to specify the master URL:
SparkConf configuration = new SparkConf().setAppName("Your Application Name").setMaster("local");

Issue 2 # <Next_Exception_will_be_listed_soon>

Kafka Exceptions

Kafka is a super buzzword nowadays in the big-data space, so I thought I would share some of the exceptions and troubleshooting steps that I have encountered so far using Kafka in my project.

Issue 1 # Error while fetching metadata with correlation id 0 : {Visitor=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Fix: There could be many reasons for this failure, but I resolved it by updating
                           host.name=localhost
                           advertised.host.name=localhost
in $Kafka_home/config/server.properties, where localhost is the Kafka server’s hostname. Basically, it was happening due to an incorrect network binding of my laptop’s wireless interface.

Issue 2 # <Next_Exception_will_be_listed_soon>

Effective use of IdentityHashMap and Flyweight Pattern

Imagine an online finance dashboard where a finance consultant and his client discuss, do some calculations, fill in presentations, templates, and query forms, and finally end the discussion session. Such a dashboard can have various tools and repetitive actions and forms used throughout the discussion. Let’s take a case of 1000 customers on the dashboard at any time, 100 tasks and forms, and 40 conditions for each task and form. So we would have to support at least
1000 * 100 * 40 = 4,000,000 objects (or close to it, allowing for some static design) and the corresponding GC cycles to meet our requirement.

By using a proper design and data structure we can reduce this problem and speed up dashboard loading, tool performance, and so on. The combination of the Flyweight pattern and IdentityHashMap is a perfect fit for a use-case like the one above.

The Flyweight pattern is suitable for context-free, frequently used, large-scale object creation.
IdentityHashMap implements the Map interface with a hash table, but in an IdentityHashMap two keys k1 and k2 are considered equal if and only if (k1 == k2). (In normal Map implementations such as HashMap, two keys k1 and k2 are considered equal if and only if (k1==null ? k2==null : k1.equals(k2)).)
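A quick demonstration of that difference: keys that are equal by equals() collide in a HashMap but stay separate in an IdentityHashMap.

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityVsHashMap {

	public static void main(String[] args) {
		String k1 = new String("task");
		String k2 = new String("task"); // equal by equals(), distinct by ==

		Map<String, String> hashMap = new HashMap<>();
		hashMap.put(k1, "first");
		hashMap.put(k2, "second"); // overwrites: keys are equals()
		System.out.println("HashMap size: " + hashMap.size());

		Map<String, String> identityMap = new IdentityHashMap<>();
		identityMap.put(k1, "first");
		identityMap.put(k2, "second"); // kept separate: keys differ by ==
		System.out.println("IdentityHashMap size: " + identityMap.size());
	}
}
```

This reference-equality semantic is exactly what makes IdentityHashMap a good index for flyweight instances: each shared object is its own key, with no equals()/hashCode() cost or accidental collisions.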

Here is one sample code snippet that you can refactor for your own similar use-case.


import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map.Entry;

public class HashedIdentityDashBoard {

	public static void main(String[] args) {
		HashMap<String, DashBoard> hm = new HashMap<String, DashBoard>();
		IdentityHashMap<DashBoard, String> idenHM = new IdentityHashMap<DashBoard, String>();

		hm.put("PendingTasks", new PendingTasks());
		hm.put("DailyTasks", new DailyTasks());
		hm.put("EscalatedTasks", new EscalatedTasks());
		// and 100 more repetitive tasks

		// identity map for repetitive tasks and their Templates and Query Forms
		idenHM.put(hm.get("PendingTasks"), "40 Templates and Query form of PendingTasks");
		idenHM.put(hm.get("DailyTasks"), "40 Templates and Query form of DailyTasks");
		idenHM.put(hm.get("EscalatedTasks"), "40 Templates and Query form of EscalatedTasks");

		// without sharing, 1000 users would need 1000*100*40 = 4000000 objects;
		// here the task objects are shared flyweights instead

		for (Entry<DashBoard, String> entry : idenHM.entrySet()) {
			DashBoard dashBoard = entry.getKey();
			System.out.println("Key : " + dashBoard + " : Value : " + idenHM.get(dashBoard));
		}

	}

}

interface DashBoard {

	public void setTitle(String widgetTitle);

}

abstract class AbstractUserDashBoard implements DashBoard {
	public String dashBoardName;

	abstract public void setTitle(String widgetTitle);

	@Override
	public String toString() {
		return "Title of DashBoard is :" + this.dashBoardName + " and hashCode is " + this.hashCode();
	}
}

class DailyTasks extends AbstractUserDashBoard {

	public DailyTasks() {
		setTitle("DailyTasks");
		System.out.println(this);
	}

	@Override
	public void setTitle(String widgetTitle) {
		this.dashBoardName = widgetTitle;
	}
}

class PendingTasks extends AbstractUserDashBoard {
	public PendingTasks() {
		setTitle("PendingTasks");
		System.out.println(this);
	}

	@Override
	public void setTitle(String widgetTitle) {
		this.dashBoardName = widgetTitle;
	}

}

class EscalatedTasks extends AbstractUserDashBoard {
	public EscalatedTasks() {
		setTitle("EscalatedTasks");
		System.out.println(this);
	}

	@Override
	public void setTitle(String widgetTitle) {
		this.dashBoardName = widgetTitle;
	}

}

Shared queue processing with Pub and Sub Threads

import java.util.Random;

public class PubSubDemo {

	public static void main(String[] args) {
		SharedQ sharedQ = new SharedQ();

		new Thread(new Consumer(sharedQ), "Consumer").start();
		// Delay publishing
		try {
			Thread.sleep(2000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
		new Thread(new Producer(sharedQ), "Producer").start();
	}

}

class Producer implements Runnable {
	SharedQ sharedQPro = null;

	public Producer(SharedQ sharedQ) {
		this.sharedQPro = sharedQ;
	}

	@Override
	public void run() {
		for (int i = 0; i < 3; i++) {
			produce();
		}
	}

	public void produce() {
		synchronized (sharedQPro.sharedQueue) {
			// Guard the wait in a loop: block until the consumer has drained the queue.
			while (!sharedQPro.sharedQueueIsEmpty) {
				try {
					sharedQPro.sharedQueue.wait();
				} catch (InterruptedException e) {
					Thread.currentThread().interrupt();
					return;
				}
			}
			for (int i = 0; i < 3; i++) {
				sharedQPro.sharedQueue[i] = (new Random().nextInt(20));
				System.out.println(" ### Produced ### " + sharedQPro.sharedQueue[i]);
			}
			sharedQPro.sharedQueueIsEmpty = false;
			sharedQPro.sharedQueue.notify();
		}
	}

}

class Consumer implements Runnable {
	SharedQ sharedQCon = null;

	public Consumer(SharedQ sharedQ) {
		this.sharedQCon = sharedQ;
	}

	@Override
	public void run() {
		for (int i = 0; i < 3; i++) {
			consume();
		}
	}

	public void consume() {
		synchronized (sharedQCon.sharedQueue) {
			// Guard the wait in a loop: block until the producer has filled the queue.
			while (sharedQCon.sharedQueueIsEmpty) {
				try {
					sharedQCon.sharedQueue.wait();
				} catch (InterruptedException e) {
					Thread.currentThread().interrupt();
					return;
				}
			}

			System.out.println();

			for (int i = 0; i < 3; i++) {
				System.out.println(" *** Consumed *** " + sharedQCon.sharedQueue[i]);
				sharedQCon.sharedQueue[i] = 0;
			}
			sharedQCon.sharedQueueIsEmpty = true;
			sharedQCon.sharedQueue.notify();
			System.out.println(" --------------------------");
		}
	}

}

class SharedQ {

	int[] sharedQueue = new int[3];
	boolean sharedQueueIsEmpty = true;

}

/*
Output of pub sub :

 ### Produced ### 1
 ### Produced ### 0
 ### Produced ### 3

 *** Consumed *** 1
 *** Consumed *** 0
 *** Consumed *** 3
 --------------------------
 ### Produced ### 12
 ### Produced ### 0
 ### Produced ### 14

 *** Consumed *** 12
 *** Consumed *** 0
 *** Consumed *** 14
 --------------------------
 ### Produced ### 13
 ### Produced ### 8
 ### Produced ### 17

 *** Consumed *** 13
 *** Consumed *** 8
 *** Consumed *** 17
 --------------------------
*/
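For comparison, the JDK's java.util.concurrent package offers BlockingQueue, which handles all of the wait/notify coordination internally. A minimal sketch of the same producer/consumer exchange (the batch size and item count here are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueuePubSub {

	public static void main(String[] args) throws InterruptedException {
		// A bounded queue of capacity 3 replaces the hand-rolled SharedQ.
		BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(3);

		Thread producer = new Thread(() -> {
			try {
				for (int i = 1; i <= 9; i++) {
					queue.put(i); // blocks while the queue is full
					System.out.println(" ### Produced ### " + i);
				}
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}
		}, "Producer");

		Thread consumer = new Thread(() -> {
			try {
				for (int i = 0; i < 9; i++) {
					int v = queue.take(); // blocks while the queue is empty
					System.out.println(" *** Consumed *** " + v);
				}
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}
		}, "Consumer");

		consumer.start();
		producer.start();
		producer.join();
		consumer.join();
		System.out.println("Done");
	}
}
```

The queue's put/take methods carry the blocking semantics themselves, so there is no shared flag, no synchronized block, and no risk of a missed notify.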