
Dancing with Docker… !!!


Traditional Virtualisation

In traditional virtualisation, there is a host OS, and on top of that runs a hypervisor (Hyper-V, VMware, KVM etc.). On top of the hypervisor you install the virtual machines. The virtualisation layer provides a virtualised motherboard, CPU, memory and so on. There you install the guest OS (Linux / Windows etc.), the binaries/libraries and the applications.

But here the hypervisor emulates whole virtual machines, so there is an overhead. And the density (the number of guest OSs you can run) is also limited.

Docker virtualisation

In Docker virtualisation, you don't have to install guest OSs, and you don't have to provide virtualised CPUs, motherboards and so on. This is container virtualisation. Containers also boot faster, since you don't need to emulate the hardware.

 

Once we have a Docker image, we can create multiple containers from it with whatever configurations we want (eg : using one image we can create two containers with different configurations).

You can get Docker images from Docker Hub [2]. There are official images, and there are also third-party (community) images. So select them with care, using the ratings, reviews, number of downloads etc.

Sample scenario 1 : Steps to download an image and run it:

1. docker pull nginx:1.10.2-alpine
– Pull the image from the hub

2. docker images
– Check the pulled images

3. docker run --name my-nginx -p 80:80 nginx:1.10.2-alpine
– Start a container from the image.
– --name : the name of the container (not the image).
– -p :
Ports open inside the Docker container stay within the container; they are not exposed on the host machine, so you need to map them to the host. 80:80 means the container's port 80 is mapped to the host machine's port 80.
– then the downloaded image name (nginx:1.10.2-alpine)

4. Now you have nginx running on your machine.

 

Actually, if you don't have the image locally, the run command will download it for you and run on top of it!

Adding another runtime parameter, -d, means the container should run in detached mode.

docker run --name my-nginx -d -p 80:80 nginx:1.10.2-alpine

Here I get back a long container ID. The container is running in the background in detached mode.

Adding another parameter or modifying a parameter (port mapping etc.) on an existing container can't be done. Even if you stop it, you cannot do it, because the container was created with those parameters.
You can launch a new container with another name, or remove the existing container and create it again with the new parameters (stop, remove and run).
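The stop, remove and run cycle would look like this (the new host port 8080 is just an example):

docker stop my-nginx
docker rm my-nginx
docker run --name my-nginx -d -p 8080:80 nginx:1.10.2-alpine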

But if you create another container, all the previous data disappears (eg : if you have MariaDB in one container and start another instance on another port, all the previous content is gone).

So, because of this, data is normally kept outside the container and mapped into the container using volumes.

 

Mapping volumes :

For this, we use -v, as in -v host-file-with-path:container-file-with-path.
eg :
eg :

docker run --name my-custom-nginx-container -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx

This maps the host's /host/path/nginx.conf file to the container's /etc/nginx/nginx.conf file. 'ro' means read-only (so the container runtime will not change it).

Instead of a single file, you can also mount an entire directory.
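For example, a sketch mounting a whole (hypothetical) host directory of nginx config snippets read-only:

docker run --name my-conf-nginx -v /host/path/conf.d:/etc/nginx/conf.d:ro -d nginx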

 

How to build our own Docker images, or how to extend an image :

The file you need to consider is the “Dockerfile”.

A Dockerfile contains commands like the ones below.
FROM <image name>
– This means our image starts from this base image.

COPY / ADD
– Copy a local file into the image. This way we don't need to map it at runtime; when the container starts, these files are already part of it. You can use ADD for this as well.

RUN, CMD and ENTRYPOINT
These can be used in the Dockerfiles to extend an image.
All three instructions (RUN, CMD and ENTRYPOINT) can be specified in shell form or exec form.

Shell form
<instruction> <command>

Examples:

RUN apt-get install python3
CMD echo "Hello world"
ENTRYPOINT echo "Hello world"

When an instruction is executed in shell form, it calls /bin/sh -c <command> under the hood and normal shell processing happens.

For example, the following snippet in a Dockerfile


ENV name John Dow
ENTRYPOINT echo "Hello, $name"

when the container runs as docker run -it <image>, it will produce the output

Hello, John Dow

Exec form
This is the preferred form for CMD and ENTRYPOINT instructions.

<instruction> ["executable", "param1", "param2", …]

Examples:

RUN ["apt-get", "install", "python3"]
CMD ["/bin/echo", "Hello world"]
ENTRYPOINT ["/bin/echo", "Hello world"]

When an instruction is executed in exec form, it calls the executable directly, and shell processing does not happen.
So, if you need to run bash (or any interpreter other than sh), use the exec form with /bin/bash as the executable.

eg :

ENV name John Dow
ENTRYPOINT ["/bin/bash", "-c", "echo Hello, $name"]

RUN
It executes any commands on top of the current image and creates a new layer by committing the results.

RUN means it creates an intermediate container, runs the script, and freezes the new state of that container in a new intermediate image. The script won't run again after that: your final image is supposed to reflect the result of that script.

And RUN runs at the time the image is built, not when the container runs.

RUN has two forms:

RUN <command> (shell form)
RUN ["executable", "param1", "param2"] (exec form)

eg :

RUN apt-get update && apt-get install -y \
bzr \
cvs \
git \
mercurial \
subversion

CMD

The CMD instruction allows you to set a default command, which will be executed only when you run the container without specifying a command. If the container runs with a command, the default command is ignored. If a Dockerfile has more than one CMD instruction, all but the last are ignored.

CMD has three forms:

CMD ["executable","param1","param2"] (exec form, preferred)
CMD ["param1","param2"] (sets additional default parameters for ENTRYPOINT in exec form)
CMD command param1 param2 (shell form)

eg : when the container runs with a command, e.g. docker run -it <image> /bin/bash, CMD is ignored and the bash interpreter runs instead.
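A minimal sketch of that default/override behaviour (the my-echo image name and this Dockerfile are hypothetical):

FROM alpine
CMD ["echo", "Hello from CMD"]

docker build -t my-echo .
docker run my-echo
– prints "Hello from CMD"

docker run my-echo echo Hi
– the given command replaces CMD and prints "Hi"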

ENTRYPOINT

The ENTRYPOINT instruction allows you to configure a container that will run as an executable. It looks similar to CMD, because it also allows you to specify a command with parameters. The difference is that the ENTRYPOINT command and parameters are not ignored when the container runs with command-line parameters. (There is a way to override ENTRYPOINT, but it is unlikely that you will need it.)

ENTRYPOINT means your image (which has not executed the script yet) will create a container and run that script.

ENTRYPOINT has two forms:

ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
ENTRYPOINT command param1 param2 (shell form)

Be very careful when choosing the ENTRYPOINT form, because the two forms behave significantly differently.
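A common pattern combines the two: ENTRYPOINT fixes the executable, while CMD supplies default parameters that arguments to docker run can replace. A minimal sketch (hypothetical image):

FROM alpine
ENTRYPOINT ["/bin/echo"]
CMD ["Hello world"]

docker run <image>
– prints "Hello world"

docker run <image> Bye
– prints "Bye" (the argument replaces CMD; ENTRYPOINT stays)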

Once you have the Dockerfile, you can build a Docker image as shown below.

docker build -t=zip-nginx:1.0 . or docker build -t zip-nginx:1.0 .

-t – means the tag. A tag is a name plus a version; the default version is 'latest'.
. – (dot) means use the current directory as the build context, so Docker will look for the Dockerfile in the current directory.

 

Sample scenario 2 : Start a new container from the new image.

1. cd /[PATH_TO_MAGNIFY]/Magnify/DockerImage/magnify-docker

Files/Folders contained in above path:

/bin
/fusepatch
/modules
/Dockerfile
/hawtio-wildfly-1.5.3.war
/standalone.xml

Dockerfile content :

FROM camunda/camunda-bpm-platform:wildfly-latest
ADD standalone.xml standalone/configuration/
ADD bin/ bin/
ADD fusepatch/ fusepatch/
ADD modules/ modules/
ADD hawtio-wildfly-1.5.3.war standalone/deployments/

This builds an image, starting from the camunda/camunda-bpm-platform:wildfly-latest Docker Hub image and adding the above artifacts.

2. docker build --tag=magnify .
This builds the Docker image as mentioned above.

3. docker images
Check the docker images

4. sudo docker run -d --name magnify --net="host" -p 7070:7070 -v [PATH_TO_MAGNIFY]/Magnify/DockerImage/docker-volumes/applicationConfigs:/camunda/applicationConfigs -v [PATH_TO_MAGNIFY]/Magnify/DockerImage/docker-volumes/modules/magnify:/camunda/modules/magnify -v [PATH_TO_MAGNIFY]/Magnify/DockerImage/docker-volumes/deployments:/camunda/standalone/deployments magnify

-d : run in detached mode.
--name magnify : use magnify as the container name.
--net="host" : use the host's network stack directly rather than an isolated container network (in this mode the -p mapping is effectively redundant).
-p 7070:7070 : map the container's port 7070 to the host machine's port 7070.
-v [PATH_TO_MAGNIFY]/Magnify/DockerImage/docker-volumes/applicationConfigs:/camunda/applicationConfigs : map the host path (left) to the container path (right).
-v [PATH_TO_MAGNIFY]/Magnify/DockerImage/docker-volumes/modules/magnify:/camunda/modules/magnify : map the host path (left) to the container path (right).
-v [PATH_TO_MAGNIFY]/Magnify/DockerImage/docker-volumes/deployments:/camunda/standalone/deployments : map the host path (left) to the container path (right). The final argument, magnify, is the image name.

Sharing an entire image/container as it is.

For this you can use docker save for an image, or docker export for a container. Both write a tar file to standard output, so you will typically do something like docker save 'dockerizeit/agent' > dk.agent.latest.tar. Then you can use docker load or docker import on a different host.

eg :

docker save busybox > busybox.tar
docker load < busybox.tar

docker export red_panda > latest.tar
docker import /path/to/exampleimage.tgz

Instead of this, we can share the artifacts together with the Dockerfile. In that case, the consumer must build the image using docker build and run it as needed.

 

Running docker in the interactive mode

We can use the format below to run Docker in interactive mode.

docker run -ti ubuntu:14.04 /bin/bash

In this case, 't' means allocate a pseudo-terminal (so we can use the host terminal), and 'i' means interactive (keep STDIN open). /bin/bash is added at the end since we need access to a shell inside the container.

Press CTRL+P followed by CTRL+Q to get back to the host terminal. Don't hit CTRL+C, because that will leave the terminal and stop the Docker container as well.

If you want to go back to the docker terminal again, type

docker attach <container id/name>

You can get the container id by using docker ps.

You can also enter a container's shell as shown below.

docker exec -ti my-nginx /bin/sh

exec – a Docker command that executes a command inside a running container.
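eg : a one-off command in the running my-nginx container (the listed path is just an illustration):

docker exec my-nginx ls /etc/nginx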

 

Docker cheat sheet :

docker pull nginx:1.10.2-alpine
– pull the nginx:1.10.2-alpine from the docker hub

docker build -t=friendlyname . or docker build -t friendlyname .
– Create an image using this directory's Dockerfile. The final dot is important: it says the Dockerfile is in the current directory.

docker run -p 4000:80 friendlyname
– Run “friendlyname” mapping port 4000 to 80

docker run -d -p 4000:80 friendlyname
– Same thing, but in detached mode

docker run -it -p 4000:80 friendlyname /bin/bash
– Same thing, but in interactive mode.

docker run username/repository:tag
– Run image from a registry

docker ps
– this will list all currently running containers

docker ps -a
– this will list all containers, both running and stopped

docker ps -a --no-trunc
– this will list all containers without truncating the output

docker container ls
– List all running containers

docker container ls -a
– List all containers, even those not running

docker start <container id/name>
– start a stopped container

docker stop <container id/name>
– gracefully stop the container

docker container stop <hash>
– Gracefully stop the specified container

docker container kill <hash>
– Force shutdown of the specified container

docker rm <container id/name>
This will remove the container.

docker container rm <hash>
– Remove specified container from this machine

docker container rm $(docker container ls -a -q)
– Remove all containers

docker images
– List images on this machine. Used to check whether an image is installed, etc.

docker images -a
– List all images on this machine

docker image ls -a
– List all images on this machine

docker rmi <image name/id>
– remove specific image

docker image rm <image id>
– Remove specified image from this machine

docker image rm $(docker image ls -a -q)
– Remove all images from this machine

docker login
– Log in this CLI session using your Docker credentials

docker tag <image> username/repository:tag
– Tag <image> for upload to registry

docker push username/repository:tag
– Upload tagged image to registry

docker --version
– will show the docker version

docker save
Save docker image

docker load < busybox.tar.gz
Load the saved image

docker export
Export the container

docker import /path/to/exampleimage.tgz
Import the container

docker top <container id/name>
– Show the processes running in a Docker container


References :

[1] : https://docs.docker.com/
[2] : https://hub.docker.com/explore/
[3] : https://hub.docker.com/_/nginx/
[4] : http://takacsmark.com/getting-started-with-docker-in-your-project-step-by-step-tutorial/
[5] : https://www.youtube.com/watch?v=Vyp5_F42NGs
[6] : https://www.youtube.com/watch?v=UV3cw4QLJLs – further
[7] : https://deis.com/blog/2015/creating-sharing-first-docker-image/
[8] : https://stackoverflow.com/questions/24482822/how-to-share-my-docker-image-without-using-the-docker-hub
[9] : https://docs.docker.com/engine/reference/commandline/save/#extended-description
[10] : http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
[11] : https://stackoverflow.com/questions/34549859/run-a-script-in-dockerfile


Written by Namal Fernando

December 13, 2017 at 1:57 pm

Posted in Uncategorized


Git Cheat Sheet


How to merge two git branches safely.

1. Create a branch and checkout to it

git checkout -b new-branch

This creates a new branch new-branch, based on the branch you are currently on (typically master).

git checkout -b new-branch existing-branch

This creates a new branch new-branch, based on existing-branch.

2. — work-commit-work-commit-work-commit—-

3. Update both branches, then test the merge before committing; avoid a fast-forward commit with --no-ff
git pull

git checkout master
git pull

git merge --no-ff --no-commit new-branch

4. Resolve the conflicts if any.

git status

5. Commit and push….

git commit -m 'merge test branch'
git push

Create a new branch containing your uncommitted changes.

Sometimes you may start work on a new feature and, after coding for a bit, decide that the feature should live on its own branch.
In that case you can follow the steps below.

git checkout -b <new-branch>

This leaves your current branch as it is, creates and checks out a new branch, and keeps all your changes.

Next you can make a commit as shown below:

 
git add <files>
git commit -m "<Brief description of this commit>" 


Written by Namal Fernando

December 5, 2017 at 12:35 pm

Posted in GIT


Camunda BPMN2.0 basics with a sample project


Sample project – loanApproval [5]

This is a good example of using Camunda to manage forms.
In this example, a user makes a new loan request, the request is sent for approval, and the request can be adjusted if needed.
To handle the approval, the requests are assigned to different user groups, who see their requests through their own logins.

Step 0 : You need to set up Camunda in your environment; follow [0]. I preferred WildFly 10.

Step 1 : Create the Maven project and import all the Camunda-related dependencies.
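As a minimal sketch, the core dependency looks like the snippet below (the 7.7.0 version is an assumption from around this time; with a shared engine on WildFly, provided scope is typical):

<dependency>
  <groupId>org.camunda.bpm</groupId>
  <artifactId>camunda-engine</artifactId>
  <version>7.7.0</version>
  <scope>provided</scope>
</dependency>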

Step 2 : Then create processes.xml in the [PROJECT_HOME]/src/main/resources/META-INF/ path and paste the content below there. We'll explore these settings in more depth another time.


<?xml version="1.0" encoding="UTF-8" ?>

<process-application xmlns="http://www.camunda.org/schema/1.0/ProcessApplication" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <process-archive name="loan-approval">
    <process-engine>default</process-engine>
    <properties>
      <property name="isDeleteUponUndeploy">false</property>
      <property name="isScanForProcessDefinitions">true</property>
    </properties>
  </process-archive>

</process-application>

 

Step 3 : Create a Process Application class. Here we extend ServletProcessApplication; there are other types of ProcessApplications (eg : EjbProcessApplication etc.). We'll explore these in more depth another time.


package org.camunda.bpm.demo;

import org.camunda.bpm.application.ProcessApplication;
import org.camunda.bpm.application.impl.ServletProcessApplication;

@ProcessApplication
public class DemoProcessApplication extends ServletProcessApplication{

}

Step 4 : Now it's time to create the BPMN diagram. This is where the magic happens.

Note : First you need to add the BPMN2 plugin to Eclipse. Alternatively, you can use the Camunda Modeler tool and generate the BPMN diagram with that. [1]

Name : camunda modeler
Location : http://camunda.org/release/camunda-eclipse-plugin/update-sites/kepler/latest/site/

Create the bpmn diagram like below.
[RIGHT_CLICK on resource] > New > Other > BPMN > BPMN2 Diagram
Give it a name and here we go!!!

This has a design view and a source view. In the source view you can see the XML, and in the design view you can see the nice diagram representation.

Here are the basics of the diagram's content. (This article will be updated as I learn more.)

bpmn2:definitions is the root element.
It has two main child elements – bpmn2:process and bpmndi:BPMNDiagram.

BPMNDiagram holds the diagram representation; bpmn2:process holds the process definition.

There may be some other elements like bpmn2:collaboration, bpmn2:message and bpmn2:error, depending on the context (eg : when we integrate Camunda with Camel [2]). In scenarios like this there can also be more than one process.

Let's go a bit deeper into bpmn2:process, since all the operations happen there.

In this case, every child element of process has a name and an id. The name is for presentation; the id is used internally to link one element to another to create the flow.

sequenceFlow :
This is simply an arrow in the diagram. Other than the id and name, it has sourceRef and targetRef: sourceRef is the id of the source component and targetRef is the id of the target component.


<bpmn2:sequenceFlow id="SequenceFlow_5" name="" sourceRef="StartEvent_3" targetRef="UserTask_3"/>

And sometimes, when the sequence starts from an exclusiveGateway (explained below), a conditionExpression is added, as shown below, to decide whether the flow continues along this sequenceFlow or not.


<bpmn2:sequenceFlow id="SequenceFlow_7" name="Yes" sourceRef="ExclusiveGateway_1" targetRef="EndEvent_2">
<bpmn2:conditionExpression xsi:type="bpmn2:tFormalExpression">${approved}</bpmn2:conditionExpression>
</bpmn2:sequenceFlow>

 

userTask :
UserTasks are tasks that expect input from the user. The incoming and outgoing sequences are listed (the value is the sequence id – the id of the arrow). In addition, extensionElements such as form data can be added, as shown below.


<bpmn2:userTask id="UserTask_3" name="Approve Request">
<bpmn2:extensionElements>
<camunda:formData>
<camunda:formField id="firstname" label="Firstname" type="string">
<camunda:validation>
<camunda:constraint name="readonly"/>
</camunda:validation>
</camunda:formField>
<camunda:formField id="lastname" label="Lastname" type="string">
<camunda:validation>
<camunda:constraint name="readonly"/>
</camunda:validation>
</camunda:formField>
<camunda:formField id="amount" label="Amount" type="long">
<camunda:validation>
<camunda:constraint name="readonly"/>
</camunda:validation>
</camunda:formField>
<camunda:formField id="approved" label="Do you approve this request?" type="boolean"/>
</camunda:formData>
</bpmn2:extensionElements>
<bpmn2:incoming>SequenceFlow_5</bpmn2:incoming>
<bpmn2:incoming>SequenceFlow_9</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_6</bpmn2:outgoing>
</bpmn2:userTask>

 

serviceTask :
This is a task that expects input from, or processing by, a service. This can be an EJB call, a call to a Camel route, etc. Just like user tasks, it also has the incoming and outgoing elements.


<bpmn2:serviceTask id="ServiceTask_1" camunda:expression="#{camel.sendTo('direct:syncService')}" name="call some service">
<bpmn2:incoming>SequenceFlow_5</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_7</bpmn2:outgoing>
</bpmn2:serviceTask>

<bpmn:serviceTask id="Task_0129g8f" name="Get Server Dsl Trail" camunda:delegateExpression="${getServerDslTrail}">
<bpmn:incoming>SequenceFlow_047nv9z</bpmn:incoming>
<bpmn:outgoing>SequenceFlow_10bvmra</bpmn:outgoing>
</bpmn:serviceTask>

 

exclusiveGateway :
ExclusiveGateways can be added to connect one or more sequences. They are most often used to divert the flow depending on the value of a condition. The conditions are given in the sequenceFlows linked to the exclusiveGateway.


<bpmn2:exclusiveGateway id="ExclusiveGateway_1" name="Approved?">
<bpmn2:incoming>SequenceFlow_6</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_7</bpmn2:outgoing>
<bpmn2:outgoing>SequenceFlow_8</bpmn2:outgoing>
</bpmn2:exclusiveGateway>

And sometimes it is used to join multiple incoming flows into one outgoing flow, as shown below.


<bpmn2:exclusiveGateway id="ExclusiveGateway_1">
<bpmn2:incoming>SequenceFlow_3</bpmn2:incoming>
<bpmn2:incoming>SequenceFlow_4</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_5</bpmn2:outgoing>
</bpmn2:exclusiveGateway>

sendTask :
When there are async tasks, you can use this element to send the task to an external party and continue without waiting.


<bpmn2:sendTask id="SendTask_1" camunda:expression="#{camel.sendTo('direct:asyncService')}" name="call some async service">
<bpmn2:incoming>SequenceFlow_7</bpmn2:incoming>
<bpmn2:incoming>SequenceFlow_16</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_8</bpmn2:outgoing>
</bpmn2:sendTask>

 

startEvent :
This is the start event of the workflow. It has an outgoing element giving the sequence id (arrow id) that the flow continues to.


<bpmn2:startEvent id="StartEvent_3" camunda:initiator="requestor" name="New LoanRequest received">
<bpmn2:extensionElements>
<camunda:formData>
<camunda:formField id="firstname" label="Firstname" type="string">
<camunda:validation>
<camunda:constraint name="required"/>
<camunda:constraint name="minlength" config="2"/>
<camunda:constraint name="maxlength" config="25"/>
</camunda:validation>
</camunda:formField>
<camunda:formField id="lastname" label="Lastname" type="string">
<camunda:validation>
<camunda:constraint name="required"/>
<camunda:constraint name="minlength" config="2"/>
<camunda:constraint name="maxlength" config="25"/>
</camunda:validation>
</camunda:formField>
<camunda:formField id="amount" label="Amount" type="long">
<camunda:validation>
<camunda:constraint name="required"/>
<camunda:constraint name="min" config="1000"/>
<camunda:constraint name="max" config="100000"/>
</camunda:validation>
</camunda:formField>
</camunda:formData>
</bpmn2:extensionElements>
<bpmn2:outgoing>SequenceFlow_5</bpmn2:outgoing>
</bpmn2:startEvent>

 

endEvent :

This is the termination of the workflow. It has an incoming element giving the sequence id (arrow id) that the flow comes from.


<bpmn2:endEvent id="EndEvent_2">
<bpmn2:incoming>SequenceFlow_7</bpmn2:incoming>
</bpmn2:endEvent>

Other important elements like timerEvents, boundaryEvent, eventBasedGateway and intermediateCatchEvent can be seen at [5].

 

Reference :

Written by Namal Fernando

December 5, 2017 at 12:12 pm

Posted in camunda


How to merge a branch safely to master in git


1. Create a branch and checkout

Creates a new branch new-branch, based on the branch you are currently on (typically master).


git checkout -b new-branch

 

Creates a new branch new-branch, based on existing-branch.


git checkout -b new-branch existing-branch

 

2. — work-commit-work-commit-work-commit—-


git pull

git checkout master
git pull

 

3. Test merge before commit, avoid a fast-forward commit by --no-ff,


git merge --no-ff --no-commit new-branch

 

4. Resolve the conflicts if any.


git status

 

5. Commit and Push


git commit -m 'merge test branch'
git push


Written by Namal Fernando

November 29, 2017 at 8:32 pm

Posted in GIT


Some important tips to fix the broken packages problem in Ubuntu 16.04


1. Try aptitude instead of apt-get.

This is a really good tool that fixes broken packages by itself and installs the tools as intended.


sudo aptitude install <packagename>
sudo aptitude -f install <packagename>

If you don't have aptitude, install it first using apt-get:


sudo apt-get install aptitude

This has a UI console also.


sudo aptitude

 

2. Check for held packages and remove/unhold them.


sudo apt-mark showhold
sudo apt-mark unhold <package name>

or


dpkg --get-selections | grep hold
apt-get remove <packagename>

 

3. Remove unused packages with autoremove. (Use with care; I once lost the Ubuntu desktop with this.)


sudo apt-get autoremove

 

4. Sometimes installing synaptic resolves the problem automatically.


sudo apt-get install synaptic

 

5. Remove the last-installed packages with dpkg -r and reinstall them correctly, as sketched below.
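A minimal sketch (the package name is a placeholder):

sudo dpkg -r <packagename>
sudo apt-get install <packagename>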

Some other try-outs.

apt-get update
apt-get upgrade
apt-get dist-upgrade
apt-get install -f
apt-get clean
apt-get autoclean
dpkg --configure -a
sudo apt-get update --fix-missing
sudo apt-get autoclean && sudo apt-get clean   # clears the package cache

 


Written by Namal Fernando

November 28, 2017 at 9:32 am

Posted in Linux, TroubleShooting


Dealing with missing jar files in Maven


You have various options for this. Some of them are mentioned below.

1. Add them as System scope. [1]


<dependency>
<groupId>resolve-mgmt</groupId>
<artifactId>resolve-mgmt</artifactId>
<version>1.0</version>
<scope>system</scope>
<systemPath>/home/namal/.m2/repository/is-deps/resolve-mgmt.jar</systemPath>
</dependency>

<dependency>
<groupId>ganymed-ssh2</groupId>
<artifactId>ganymed-ssh2</artifactId>
<version>1.0</version>
<scope>system</scope>
<systemPath>/home/namal/.m2/repository/is-deps/ganymed-ssh2.jar</systemPath>
</dependency>

2. Install them as local jars. Provide the groupId, artifactId and packaging according to the dependency in the pom, as shown below.

mvn install:install-file -Dfile=/home/namal/.m2/repository/is-deps/ganymed-ssh2.jar -DgroupId=ganymed-ssh2 -DartifactId=ganymed-ssh2 -Dversion=1.0 -Dpackaging=jar

mvn install:install-file -Dfile=/home/namal/.m2/repository/is-deps/resolve-mgmt.jar -DgroupId=resolve-mgmt -DartifactId=resolve-mgmt -Dversion=1.0 -Dpackaging=jar

3. Put the jar in place manually, renaming it according to the dependency structure.

eg : take the jar from /home/namal/.m2/repository/is-deps/resolve-mgmt.jar and put it at /home/namal/.m2/repository/resolve-mgmt/resolve-mgmt/1.0.0/resolve-mgmt-1.0.0.jar, creating the folders/files according to the dependency structure mentioned in the pom file.
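As shell commands, option 3 might look like this (paths follow the example above; the 1.0.0 version is assumed from the pom):

mkdir -p ~/.m2/repository/resolve-mgmt/resolve-mgmt/1.0.0
cp ~/.m2/repository/is-deps/resolve-mgmt.jar ~/.m2/repository/resolve-mgmt/resolve-mgmt/1.0.0/resolve-mgmt-1.0.0.jar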

References :

Written by Namal Fernando

November 26, 2017 at 8:49 am

Posted in maven


ActiveMQ implementation with web based listener


Installing and configuring

1. Download ActiveMQ from the ActiveMQ page and extract it. [link]

2. Start ActiveMQ

$ cd /[path-to-activemq-base-directory]/bin
$ ./activemq start

3. ActiveMQ's default port is 61616, and the default port for the ActiveMQ web UI is 8161.

4. So, you can view the ActiveMQ UI via localhost:8161.

5. Queue details can be viewed using the link http://localhost:8161/admin/queues.jsp

Creating the producer :

You can create an ActiveMQ connection and send simple messages to the queue as shown below.

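// Assumed imports, not shown in the original snippet:
// import javax.jms.*;
// import org.apache.activemq.ActiveMQConnectionFactory;
// import java.util.UUID;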
private void initializeNSendMessage(){

	Session session = null;
	Connection connection = null;
	try {

		ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
		connection = connectionFactory.createConnection();
		connection.start();

		session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
		Destination destination = session.createQueue("mf_engine.demo-queue");

		MessageProducer producer = session.createProducer(destination);
		producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

		for (int i = 0; i < 225; i++) {

			String strMessage = "Test message #"+UUID.randomUUID()+"-"+i;
			TextMessage message = session.createTextMessage(strMessage);
			System.out.println(" !!! sendMessage(). : Sent message : " + strMessage);
			producer.send(message);	

		}

	} catch (Exception e) {
		
		System.err.println(e);
		
	}finally {

		try {session.close();	} catch (Exception e) {session = null;}
		try {connection.close();} catch (Exception e) {connection = null;}
		
	}

}

Creating the listener

Once you send messages from the ActiveMQ producer, they are enqueued at ActiveMQ. You need to create a listener to consume those messages.
To keep the listener alive, I created a ServletContextListener that starts n threads, each creating a connection to the ActiveMQ instance. There, we plug in a MessageListener to listen for ActiveMQ messages. In this case, onMessage is called when a message arrives at the ActiveMQ instance.
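For the context listener to start with the web application, it also has to be registered in web.xml. A minimal sketch (the classes below are shown without a package, so the fully qualified name here is an assumption):

<listener>
  <listener-class>com.example.mq.QueueConsumerInitializer</listener-class>
</listener>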

QueueConsumerInitializer.java

public class QueueConsumerInitializer implements ServletContextListener{

	@Override
	public void contextDestroyed(ServletContextEvent arg0) {
		System.out.println("QueueConsumerInitializer.contextDestroyed()");
	}

	@Override
	public void contextInitialized(ServletContextEvent arg0) {
		System.out.println("QueueConsumerInitializer.contextInitialized()");
		
		int noOfConsumers = 3;
		System.out.println("contextInitialized().noOfConsumers : " + noOfConsumers);
		
		for (int i = 0; i < noOfConsumers; i++) {
		
			System.out.println("contextInitialized().QueueConsumer #"+i+" started!!! ");
			
			QueueConsumer queueConsumer = new QueueConsumer();
			Thread thread = new Thread(queueConsumer);
			thread.start();
			
		}
		
	}
	
}

QueueConsumer.java

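// Assumed imports, as in the producer: javax.jms.* and org.apache.activemq.ActiveMQConnectionFactory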
public class QueueConsumer implements Runnable{

	public void run() {
		System.out.println("run(). : Consuming the message..");
		consumeMessage();
		
	}
	
	private void consumeMessage() {
	
		try {

			String 	URL 		= "tcp://localhost:61616";
			String 	queueName 	= "mf_engine.demo-queue";
			
			javax.jms.Connection connection = null;
			Session session = null;
			Destination destination = null;
			MessageConsumer consumer = null;
			

			ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(URL);
			connection = connectionFactory.createConnection();
			connection.start();
			
			
			session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
			destination = session.createQueue(queueName);
			   				
			
			consumer = session.createConsumer(destination);
			consumer.setMessageListener(new QueueListener());
				

		} catch (Exception e) {
			
			System.out.println("QueueConsumer.consumeMessage() Error"+e);
		}

	}

}

QueueListener.java

public class QueueListener  implements MessageListener {
	
	private static	Logger 			logger 						= Logger.getLogger(QueueListener.class);	

	public void onMessage(Message message) {

		if (message instanceof TextMessage) {

			try {
				TextMessage 		textMessage 		= (TextMessage) message;
				String 				messageStr 			= textMessage.getText();
				System.out.println("onMessage()." + (messageStr != null ? messageStr.length() + " lengthed" : "NULL") + " Message received!" + " [body : " + messageStr+"]");
				
			} catch (Exception e) {
				e.printStackTrace();
			}
		}
		
	}
	
}


Written by Namal Fernando

August 25, 2017 at 7:31 am

Posted in ActiveMQ
