MIME Explained

Sometimes we use or refer to a software term or technology many times without being very familiar with it. MIME is one of those terms for me. We use MIME standards to exchange messages between various endpoints, for example in email communication, web services, etc.

MIME is everywhere, and we have probably used it countless times during our software careers. But what exactly is MIME? I posed this question to some of my software developer friends and got ambiguous answers. Some referred to MIME as a MIME type, some tried to quote its full form. It was clear that MIME is not a very well understood concept. In this post we will try to shed some light on what MIME is.

History

As per RFC 822, the original mail protocols were built to support only the standard US-ASCII charset. This left a lot to be desired:

  1. What if the sender wants to send a message in a different charset, say Hindi, Spanish or any other?
  2. What if the sender wants to send a multipart message?
  3. What if the sender wants to add a non-text attachment?
  4. What if the sender wants to set a message header in some other charset?

To address these concerns the Internet Engineering Task Force (IETF) came up with a new format for mail messages, an extension to the famous RFC 822. Messages in this new format are referred to as MIME messages.

What is MIME?

MIME stands for Multipurpose Internet Mail Extensions. MIME is an Internet standard that extends email messages to support non-ASCII text content, non-text attachments, multipart message bodies and non-US-ASCII headers. MIME was so successful that it was adopted as the message format for the web in general and for lots of other technologies. The MIME format is defined by the following RFC docs.

  1. RFC 2045: Describes various headers used to describe the structure of MIME messages.
  2. RFC 2046: Defines an initial set of Media Types
  3. RFC 2047: Describes extensions to RFC 822 to allow non-US-ASCII text data in Internet mail header fields
  4. RFC 2048: Specifies various IANA registration procedures for MIME-related facilities
  5. RFC 2049: Provides MIME conformance criteria as well as some examples of MIME message formats, acknowledgements, and the bibliography.

Structure of a MIME message

    MIME-Version: 1.0
    From: Nathaniel Borenstein <nsb@nsb.fv.com>
    To: Ned Freed <ned@innosoft.com>
    Date: Fri, 07 Oct 1994 16:15:05 -0700 (PDT)
    Subject: A multipart example
    Content-Type: multipart/mixed;
                  boundary=unique-boundary-1

    --unique-boundary-1

      ... Some text appears here ...    

    --unique-boundary-1
    Content-type: text/plain; charset=US-ASCII

      ... Some more text, explicitly typed as
          plain US-ASCII, appears here ...
    --unique-boundary-1
    Content-Type: multipart/parallel; boundary=unique-boundary-2

    --unique-boundary-2
    Content-Type: audio/basic
    Content-Transfer-Encoding: base64

      ... base64-encoded 8000 Hz single-channel
          mu-law-format audio data goes here ...

    --unique-boundary-2
    Content-Type: image/jpeg
    Content-Transfer-Encoding: base64

      ... base64-encoded image data goes here ...

    --unique-boundary-2--

    --unique-boundary-1
    Content-type: text/enriched

    <b>this is a test</b>

    --unique-boundary-1
    Content-Type: message/rfc822

    From: (mailbox in US-ASCII)
    To: (address in US-ASCII)
    Subject: (subject in US-ASCII)
    Content-Type: Text/plain; charset=ISO-8859-1
    Content-Transfer-Encoding: Quoted-printable

      ... Additional text in ISO-8859-1 goes here ...

    --unique-boundary-1--

Above is an example of a MIME message. On close inspection you will find that it has the following parts.

  1. Headers
  2. Multiple body parts, each with a different content type

Multipart

A MIME multipart message can contain one or more body parts, each with a different content type. Body parts can be nested inside another body part, and each is enclosed within the boundary specified by the boundary parameter of the parent body part's Content-Type header.

Dissecting MIME Headers

MIME Version

MIME-Version: 1.0

The presence of this header tells us that we have a MIME message. The original intention of this header was to support future versions of MIME, but the way MIME is implemented makes it impossible to change the version. The version is therefore always fixed at 1.0 and signifies that the message may contain non-US-ASCII content and non-text attachments.

Content Type Header

Content-Type: multipart/mixed;
              boundary=unique-boundary-1

The Content-Type header defines the type of data present in the body and body parts of the message. This helps the client choose an appropriate mechanism to display the message to the user. For multipart types the type/subtype definition is followed by a boundary value. The boundary delimits the body part blocks: each body part starts after an occurrence of the boundary, and the final boundary, with a trailing "--", closes the multipart. For example

--unique-boundary-1

body part goes here

--unique-boundary-1--
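
To see how such a multipart structure is produced in practice, here is a minimal sketch using the JavaMail (javax.mail) API, one common way to build MIME messages in Java; the API generates the boundary parameter and the part delimiters for us. The file name is hypothetical.

import javax.mail.Session;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;
import java.util.Properties;

public class MultipartDemo {
    public static void main(String[] args) throws Exception {
        Session session = Session.getDefaultInstance(new Properties());
        MimeMessage message = new MimeMessage(session);
        message.setSubject("A multipart example");

        // first body part: plain text
        MimeBodyPart textPart = new MimeBodyPart();
        textPart.setText("Some text appears here", "US-ASCII");

        // second body part: a binary attachment (hypothetical local file)
        MimeBodyPart imagePart = new MimeBodyPart();
        imagePart.attachFile("image.jpg");

        MimeMultipart multipart = new MimeMultipart("mixed");
        multipart.addBodyPart(textPart);
        multipart.addBodyPart(imagePart);
        message.setContent(multipart);
        message.saveChanges(); // fills in MIME-Version and Content-Type headers
        message.writeTo(System.out); // prints the raw message, boundaries included
    }
}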

Content Disposition Header

content-disposition = "Content-Disposition" ":"
                              disposition-type *( ";" disposition-parm )
        disposition-type = "attachment" | disp-extension-token
        disposition-parm = filename-parm | disp-extension-parm
        filename-parm = "filename" "=" quoted-string
        disp-extension-token = token
        disp-extension-parm = token "=" ( token | quoted-string )
An example is

        Content-Disposition: attachment; filename="fname.ext"

A body part of a MIME message should be shown as-is unless a Content-Disposition header specifies it as an attachment. When a Content-Disposition: attachment header is present, the body part should not be displayed inline; rather it should be presented as an attachment, and clicking it should download the body part into a file named by the filename parameter of the header.
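
As a hedged sketch of the same idea in code, again assuming the JavaMail API is available: marking a body part as an attachment is what produces the Content-Disposition header shown above (the file name is hypothetical).

import javax.mail.Part;
import javax.mail.internet.MimeBodyPart;

public class DispositionDemo {
    public static void main(String[] args) throws Exception {
        MimeBodyPart part = new MimeBodyPart();
        part.attachFile("fname.ext");          // hypothetical local file; also sets the filename parameter
        part.setDisposition(Part.ATTACHMENT);  // yields: Content-Disposition: attachment; filename="fname.ext"
        System.out.println(part.getDisposition());
    }
}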

Content-Transfer-Encoding

As we know, many protocols, SMTP for example, allow messages only in 7-bit encoding. With MIME it is possible to send 8-bit and binary data as well. This is achieved by encoding the 8-bit or binary data into a 7-bit format, and MIME provides the Content-Transfer-Encoding header to declare which encoding was used. For example, consider a body part consisting of an audio file.

Content-Type: audio/basic
Content-Transfer-Encoding: base64

Since the audio file is in binary format, it must be re-encoded into a 7-bit-safe form, and the Content-Transfer-Encoding header declares that Base64 was used for this (a small sketch of the encoding step follows the list below). Apart from Base64 we also have the following encodings.

  1. 7BIT – default
  2. Base64
  3. QUOTED-PRINTABLE
  4. 8BIT
  5. BINARY
  6. x-EncodingName
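
Here is a small sketch of that re-encoding step using java.util.Base64; this is my own illustration, the file name is hypothetical, and the MIME encoder is just one convenient way to produce the line-wrapped Base64 body.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class TransferEncodingDemo {
    public static void main(String[] args) throws Exception {
        byte[] audioBytes = Files.readAllBytes(Paths.get("clip.au")); // hypothetical audio file
        // getMimeEncoder() wraps lines at 76 characters, as MIME bodies expect
        String base64Body = Base64.getMimeEncoder().encodeToString(audioBytes);
        System.out.println("Content-Type: audio/basic");
        System.out.println("Content-Transfer-Encoding: base64");
        System.out.println();
        System.out.println(base64Body);
    }
}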

I hope that this post sheds some extra light on what MIME is. This post is the result of research and reading I have done over the last few days, and as I am human, it could contain some errors as well. If any of you find some vital basic points missing, please let me know so that I can add them to the post. If you find this post useful, please drop a comment or two.

Warm regards

Niraj


Introduction to Apache Camel

Apache Camel is an open source implementation of the well-known Enterprise Integration Patterns. Camel is a routing and mediation engine that lets developers create routes and mediation rules in a variety of Domain Specific Languages (DSLs) such as Java, Spring/XML, Scala, etc.

Camel is versatile

Camel uses URIs to support a large number of transport and messaging models such as HTTP, JMS, JBI, Mina, SCA and CXF, and it also works well with external components and data formats. To get a feel for the versatility of Camel, browse the list of components and URIs it supports at the link below. http://camel.apache.org/components.html

Camel is easy to use

Camel allows us to use the same set of APIs to create routes and mediate messages between various components. This makes it extremely easy to use.

Unit Testing camel is a breeze

Unit testing is essential to writing quality code, and Camel makes this facet of software development extremely easy. It provides a bunch of ready-made components like CamelContextSupport, camel-guice and camel-test-blueprint for easily testing the code. More on this in a future post.

The Camel Terminologies/Classes/Interfaces

Endpoint

Endpoints are the places where the exchange of messages takes place. An endpoint may refer to an address, a POJO, an email address, a web service URI, a queue URI, a file, etc. In Camel an endpoint is implemented by implementing the Endpoint interface. Endpoints are wired together by routes.

CamelContext

CamelContext is at the heart of every Camel application; it represents the Camel runtime system. A typical application goes through the following lifecycle (sketched in code after the list):

  1. Create the CamelContext.
  2. Add endpoints or components.
  3. Add routes to connect the endpoints.
  4. Invoke camelcontext.start() – this starts all the Camel-internal threads responsible for receiving, sending and processing messages at the endpoints.
  5. Finally, invoke camelcontext.stop() when all the messages have been exchanged and processed. This gracefully stops all the Camel-internal threads and endpoints.
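
Here is a minimal sketch of that lifecycle in the Java DSL; the directory names are placeholders, and DefaultCamelContext is the standard CamelContext implementation.

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class LifecycleDemo {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext(); // step 1
        context.addRoutes(new RouteBuilder() {            // steps 2 and 3
            @Override
            public void configure() {
                from("file:d:/in").to("file:d:/out");
            }
        });
        context.start();                                  // step 4
        Thread.sleep(5000); // let the route run for a while
        context.stop();                                   // step 5
    }
}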

CamelTemplate

This is a thin wrapper around the CamelContext object, responsible for sending exchanges or messages to an endpoint.
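
A hedged usage sketch follows; note that in current Camel releases this template is exposed as ProducerTemplate, created from the context, and the endpoint URI here is just an example.

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.impl.DefaultCamelContext;

public class TemplateDemo {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();
        ProducerTemplate template = context.createProducerTemplate();
        template.sendBody("file:d:/temp", "Hello Camel"); // writes a file into d:/temp
        context.stop();
    }
}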

Component

A Component is really an endpoint factory. Camel supports lots of different kinds of resources, and each of these resources has its own kind of endpoint. In practice, applications don't create endpoints directly using components. Instead the CamelContext decides which component to instantiate and then uses that component instance to create endpoints. So in an app we will have: CamelContext.getEndpoint("pop3://john.smith@mailserv.example.com?password=myPassword"). Here pop3 is the name of the component. The CamelContext maps component names to component classes and instantiates the component by name; once it has a handle to the component, it creates the endpoint by calling the component's createEndpoint() method.
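
To make this concrete, here is a small sketch of looking up an endpoint by URI; the context resolves the file component by name and asks it to create the endpoint (the URI is illustrative).

import org.apache.camel.CamelContext;
import org.apache.camel.Endpoint;
import org.apache.camel.impl.DefaultCamelContext;

public class EndpointLookupDemo {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();
        Endpoint endpoint = context.getEndpoint("file:d:/vids"); // "file" names the component
        System.out.println(endpoint.getEndpointUri());
        context.stop();
    }
}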

Message

Message represents a single concrete message, i.e., a request, reply or exception. All concrete message classes implement the Message interface, for example the JmsMessage class.

Exchange

An Exchange is a container for messages. It is created when a message is received by a consumer during the routing process.

Processor

The Processor interface represents a class that processes a message. It contains a single method, public void process(Exchange exchange) throws Exception. Application developers can implement this interface to perform business logic on the message when it is received by a consumer, as in the sketch below.
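
A minimal sketch of a custom Processor (the upper-casing logic is just an illustration):

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class UpperCaseProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        String body = exchange.getIn().getBody(String.class);
        if (body != null) {
            exchange.getIn().setBody(body.toUpperCase()); // replace the body with its upper-cased form
        }
    }
}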

Routes and RouteBuilder

A route is the step-by-step movement of a message from a source, through arbitrary decisions made by filters or routers, to a destination. Routes are configured with the help of a DSL (Domain Specific Language). The Java DSL is used by extending the RouteBuilder class, which has a single method called configure() that defines the entire route of the message. Routes can also be configured via an XML file using Spring.

A Small Example of Camel code.

Let's follow this with a small example to get a taste of what Camel can do. In this example we will move a group of files present in one folder to a different folder. In the process we will do the following:

  1. Check out the dependencies for Camel.
  2. Create a simple RouteBuilder.
  3. Register a CamelContext in a Spring file.
  4. Inject the RouteBuilder into the CamelContext bean.
  5. Execute the class by starting the CamelContext and finally stopping it once the execution is done.

1. Dependencies – Add the following dependencies to your pom.xml.

        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-core</artifactId>
            <version>${camel-version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-spring</artifactId>
            <version>${camel-version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-aws</artifactId>
            <version>${camel-version}</version>
        </dependency>

2. Create RouteBuilder – A RouteBuilder can be created by extending the org.apache.camel.builder.RouteBuilder class and overriding the configure() method. Here is an example:

import org.apache.camel.builder.RouteBuilder;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 7/28/13
 * Time: 10:29 AM
 * To change this template use File | Settings | File Templates.
 */
public class MyFirstRouterBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // poll files from d:/vids and copy them to d:/temp
        from("file:d:/vids").to("file:d:/temp");
    }
}
  1. from() denotes the source endpoint and contains the URI of the file or directory which Camel will be polling.
  2. to() represents the target endpoint and contains the name of the target file or directory.
  3. The file component URI is of the form "file:nameOfFileOrDirectory".

3. Registering the CamelContext in Spring and injecting the RouteBuilder into it.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
          http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
          http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

    <camelContext id="sqsContext" xmlns="http://camel.apache.org/schema/spring">
         <routeBuilder ref="myFirstRouter" />
    </camelContext>

    <bean id="myFirstRouter" class="com.aranin.aws.sqs.MyFirstRouterBuilder"/>

</beans>

4. Starting the CamelContext, executing the code and stopping the CamelContext.

import org.apache.camel.CamelContext;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.FileSystemXmlApplicationContext;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 4/16/13
 * Time: 11:21 AM
 * To change this template use File | Settings | File Templates.
 */
public class CamelHello {
    public static void main(String[] args) throws Exception {
        try {
            ApplicationContext springcontext = new FileSystemXmlApplicationContext("D:/samayik/awsdemo/src/main/resources/hellocamel.xml");
            CamelContext context = springcontext.getBean("firstCamelContext", CamelContext.class);
            context.start();
            Thread.sleep(10000);
            context.stop();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

When you run this class, the CamelContext is first loaded from the Spring config file and the RouteBuilder is injected into it. After the context starts, all the files from the source directory are copied to the target directory. Once all the files are copied, try copying a new file into the source directory; it will be copied to the target as well for as long as the context is running (10000 ms in this case).

I have a few more advanced tutorials on Camel; perhaps you will find them useful. Their links are listed in the references section.

References

  1. http://camel.apache.org/
  2. http://camel.apache.org/enterprise-integration-patterns.html
  3. http://architects.dzone.com/articles/enterprise-integration
  4. http://weblog4j.com/2013/05/14/amazon-sqs-listening-to-sqs-using-apache-camel-the-spring-dsl-way/
  5. http://weblog4j.com/2013/04/17/amazon-sqs-listening-to-amazon-sqs-queue-using-apache-camel/

That is all, folks. Though hardly anyone writes comments, I like to persevere, and I still request you to drop in a line or two if you liked this tutorial. 🙂

Warm Regards

Niraj


Java Tutorial on Neo4j – A Next Generation Graph Database

Neo4j is a graph database. A graph database stores data in graphs. Graphs consist of nodes, each of which can have one or more properties. Nodes are connected by relationships, which themselves can have one or more properties and help organize the graph. Graphs can be traversed by different graph algorithms.

Features

  1. Neo4j is a graph database.
  2. Highly available.
  3. Supports ACID transactions.
  4. Scales to billions of nodes.
  5. Very high speed querying and graph traversal algorithms.

Setting Up Neo4j

You can use Neo4j embedded in your code, but for this tutorial we will download the server and start it from the command line.

  1. Download the latest release from http://neo4j.org/download.
  2. Select the appropriate version for your platform.
  3. Extract the contents of the archive. We will refer to the top-level extracted directory as <neo4j-home>.
  4. Open your command window and navigate to <neo4j-home>/bin.
  5. For Linux/MacOS, run <neo4j-home>/bin/neo4j start.
  6. For Windows, run <neo4j-home>\bin\Neo4j.bat.
  7. Visit http://localhost:7474 to verify that your instance is up and running.

For more options visit this URL:

http://docs.neo4j.org/chunked/milestone/server-installation.html

Once Neo4j is up and running we will see how to add nodes, properties and relationships using its REST API from Java code.

Getting the code for this tutorial

You can get the code for this tutorial from the following SVN URL:

https://www.assembla.com/code/weblog4j/subversion/nodes/29/SpringDemos/trunk

The project contains lots of other examples; for the Neo4j examples check the package com.aranin.spring.neo4j.

Downloading dependencies

Maven users can include the following in their POM.

<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j</artifactId>
    <version>1.9.1</version>
</dependency>

Neo4j exposes its functions via a set of REST APIs, so we will be using the Apache HttpClient library. You can use any REST client API for this purpose. For HttpClient, use the following Maven dependencies.

<dependency>
      <groupId>commons-httpclient</groupId>
      <artifactId>commons-httpclient</artifactId>
      <version>3.1</version>
</dependency>

<dependency>
     <groupId>org.apache.httpcomponents</groupId>
     <artifactId>httpclient</artifactId>
     <version>4.1.3</version>
     <scope>compile</scope>
</dependency>

<dependency>
     <groupId>org.apache.httpcomponents</groupId>
     <artifactId>httpmime</artifactId>
     <version>4.1.3</version>
     <scope>compile</scope>
</dependency>

Getting started with code

Once the Neo4j server is up and running we can start to add nodes, properties and relationships using its REST API. Please note that Neo4j is running at http://localhost:7474; we will be using this as SERVER_ROOT_URI. So let's get the ball rolling.

Checking the server status

We can make a GET request to the server root URI to check whether the server is running.

public int getServerStatus(){
    int status = 500;
    try{
        // SERVER_ROOT_URI = "http://localhost:7474"
        String url = SERVER_ROOT_URI;
        HttpClient client = new HttpClient();
        GetMethod mGet = new GetMethod(url);
        status = client.executeMethod(mGet);
        mGet.releaseConnection();
    }catch(Exception e){
        System.out.println("Exception in connecting to neo4j : " + e);
    }

    return status;
}

If you invoke this method and everything is fine, you will get back a server status of 200.

Creating a Node

We can make a POST request to <server-root-uri>/db/data/node. This will create a node in the Neo4j database. The following things should be noted here.

  1. The REST URL is <server-root-uri>/db/data/node.
  2. It accepts POST requests.
  3. It accepts and sends JSON data.
  4. Upon successful creation of the node the web service returns a 201 Created response.
  5. It also sends back the URI of the new node in the Location header of the response.

So let's check out the createNode() method.

public String createNode(){
        String output = null;
        String location = null;
        try{
            String nodePointUrl = this.SERVER_ROOT_URI + "/db/data/node";
            HttpClient client = new HttpClient();
            PostMethod mPost = new PostMethod(nodePointUrl);

            /**
             * set headers - each header is added separately (reusing a single
             * Header object would overwrite the first header with the second)
             */
            mPost.addRequestHeader("Content-Type", "application/json");
            mPost.addRequestHeader("Accept", "application/json");

            /**
             * set json payload
             */
            StringRequestEntity requestEntity = new StringRequestEntity("{}",
                                                                        "application/json",
                                                                        "UTF-8");
            mPost.setRequestEntity(requestEntity);
            int status = client.executeMethod(mPost);
            output = mPost.getResponseBodyAsString();
            Header locationHeader = mPost.getResponseHeader("location");
            location = locationHeader.getValue();
            mPost.releaseConnection();
            System.out.println("status : " + status);
            System.out.println("location : " + location);
            System.out.println("output : " + output);
        }catch(Exception e){
            System.out.println("Exception in creating node in neo4j : " + e);
        }

        return location;
    }

The method is quite simple. All we have done is create a POST connection to the node service and send an empty JSON object to it. You can also visit the webadmin and verify that the node has been created. Check out the URI of the created node. It should be of the form

http://localhost:7474/db/data/node/1

Adding property to the node

So now we have created an empty node and can add properties to it. For this Neo4j provides another REST API: NodeURI + "/properties/" + propertyName. For example, if you want to add a "name" property to the node created above, the service URI will look like

http://localhost:7474/db/data/node/1/properties/name

The properties service is a PUT service, and upon successful completion it returns a "204 No Content" response. Like its create peer, this service accepts and sends the JSON data format. So let's check out the code for adding a property to a node.

public void addProperty(String nodeURI,
                            String propertyName,
                            String propertyValue){
        String output = null;

        try{
            String nodePointUrl = nodeURI + "/properties/" + propertyName;
            HttpClient client = new HttpClient();
            PutMethod mPut = new PutMethod(nodePointUrl);

            /**
             * set headers
             */
            mPut.addRequestHeader("Content-Type", "application/json");
            mPut.addRequestHeader("Accept", "application/json");

            /**
             * set json payload - the property value is sent as a JSON string
             */
            String jsonString = "\"" + propertyValue + "\"";
            StringRequestEntity requestEntity = new StringRequestEntity(jsonString,
                                                                        "application/json",
                                                                        "UTF-8");
            mPut.setRequestEntity(requestEntity);
            int status = client.executeMethod(mPut);
            output = mPut.getResponseBodyAsString();

            mPut.releaseConnection();
            System.out.println("status : " + status);
            System.out.println("output : " + output);
        }catch(Exception e){
            System.out.println("Exception in adding property in neo4j : " + e);
        }

    }

Please note how the URL is created: we take the node URL returned by the createNode() method and append /properties/<propertyName> to it. The value of the property is sent as a JSON payload, jsonString = "\"" + propertyValue + "\"", which is set on the PUT request. Once you add a property you can go and check it in the webadmin.

Creating a relationship

Now that we have created a couple of nodes, let's create a relationship between them.

  1. For this we have to invoke the following REST web service: <nodeurl>/relationships, where nodeurl is the source node from which the relationship originates.
  2. This web service accepts data through a POST request.
  3. It accepts JSON data and sends JSON data back. The format of the JSON payload is { "to" : "http://localhost:7474/db/data/node/2", "type" : "friend", "data" : { "married" : "yes", "since" : "2005" } }
  4. Upon successful completion of the request it sends back a 201 status code.
  5. It also sends back the URL of the created relationship in the Location header of the response.

Let's check out the code for such a request.

public String addRelationship(String startNodeURI,
                                   String endNodeURI,
                                   String relationshipType,
                                   String jsonAttributes){
        String output = null;
        String location = null;
        try{
            String fromUrl = startNodeURI + "/relationships";
            System.out.println("from url : " + fromUrl);

            String relationshipJson = generateJsonRelationship( endNodeURI,
                                                                relationshipType,
                                                                jsonAttributes );

            System.out.println("relationshipJson : " + relationshipJson);

            HttpClient client = new HttpClient();
            PostMethod mPost = new PostMethod(fromUrl);

            /**
             * set headers
             */
            mPost.addRequestHeader("Content-Type", "application/json");
            mPost.addRequestHeader("Accept", "application/json");

            /**
             * set json payload
             */
            StringRequestEntity requestEntity = new StringRequestEntity(relationshipJson,
                                                                        "application/json",
                                                                        "UTF-8");
            mPost.setRequestEntity(requestEntity);
            int status = client.executeMethod(mPost);
            output = mPost.getResponseBodyAsString();
            Header locationHeader = mPost.getResponseHeader("location");
            location = locationHeader.getValue();
            mPost.releaseConnection();
            System.out.println("status : " + status);
            System.out.println("location : " + location);
            System.out.println("output : " + output);
        }catch(Exception e){
            System.out.println("Exception in creating relationship in neo4j : " + e);
        }

        return location;

    }

    private String generateJsonRelationship(String endNodeURL,
                                            String relationshipType,
                                            String ... jsonAttributes) {
        StringBuilder sb = new StringBuilder();
        sb.append("{ \"to\" : \"");
        sb.append(endNodeURL);
        sb.append("\", ");

        sb.append("\"type\" : \"");
        sb.append(relationshipType);
        if(jsonAttributes == null || jsonAttributes.length < 1) {
            sb.append("\"");
        } else {
            sb.append("\", \"data\" : ");
            for(int i = 0; i < jsonAttributes.length; i++) {
                sb.append(jsonAttributes[i]);
                if(i < jsonAttributes.length - 1) { // skip the final comma
                    sb.append(", ");
                }
            }
        }

        sb.append(" }");
        return sb.toString();
    }

Please check the generateJsonRelationship method; it generates the JSON payload that is sent with the request.

Creating properties for Relationship

Once the relationship is created we can assign properties to it. This can be done in the following manner.

  1. Invoke the following REST web service: <relationshipurl>/properties, e.g. http://localhost:7474/db/data/relationship/1/properties.
  2. This web service is invoked with a PUT request.
  3. It accepts and sends back JSON.
  4. Upon successful completion of the request it sends back a 204 status code in the response.

Let's check out the code for this.

private void addPropertyToRelation( String relationshipUri,
                                        String propertyName,
                                        String propertyValue ){

        String output = null;

        try{
            String relPropUrl = relationshipUri + "/properties";
            HttpClient client = new HttpClient();
            PutMethod mPut = new PutMethod(relPropUrl);

            /**
             * set headers
             */
            mPut.addRequestHeader("Content-Type", "application/json");
            mPut.addRequestHeader("Accept", "application/json");

            /**
             * set json payload
             */
            String jsonString = toJsonNameValuePairCollection(propertyName, propertyValue);
            StringRequestEntity requestEntity = new StringRequestEntity(jsonString,
                                                                        "application/json",
                                                                        "UTF-8");
            mPut.setRequestEntity(requestEntity);
            int status = client.executeMethod(mPut);
            output = mPut.getResponseBodyAsString();

            mPut.releaseConnection();
            System.out.println("status : " + status);
            System.out.println("output : " + output);
        }catch(Exception e){
            System.out.println("Exception in adding property to relationship in neo4j : " + e);
        }

    }

    private String toJsonNameValuePairCollection(String name, String value) {
        return String.format("{ \"%s\" : \"%s\" }", name, value);
    }

Querying the database

Now for the final piece of the puzzle: we have all the data in the database and now we want to query it. Neo4j uses graph traversal algorithms to query the database. Let's check how to do that.

  1. The traversal is done by invoking the following URL: <start-node-url>/traverse/node, for example http://localhost:7474/db/data/node/1/traverse/node.
  2. This is a POST REST web service.
  3. It receives and sends JSON data.
  4. Once the request completes we get back an array of nodes in JSON format.

To get the traversal to work we first need to create two helper classes which will generate the JSON payload to be sent: TraversalDescription and Relationship. Please don't confuse them with the interfaces of the same name defined in the core Neo4j code base. They can be found in the example code here: http://grepcode.com/snapshot/repo1.maven.org/maven2/org.neo4j.examples/neo4j-server-examples/1.9.M04/

For your benefit I will paste these classes right here. 

Relationship.java

package com.aranin.spring.neo4j;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 7/22/13
 * Time: 10:49 AM
 * To change this template use File | Settings | File Templates.
 */
public class Relationship {

    public static final String OUT = "out";
    public static final String IN = "in";
    public static final String BOTH = "both";
    private String type;
    private String direction;

    public String toJsonCollection() {
        StringBuilder sb = new StringBuilder();
        sb.append("{ ");
        sb.append(" \"type\" : \"" + type + "\"");
        if(direction != null) {
            sb.append(", \"direction\" : \"" + direction + "\"");
        }
        sb.append(" }");
        return sb.toString();
    }

    public Relationship(String type, String direction) {
        setType(type);
        setDirection(direction);
    }

    public Relationship(String type) {
        this(type, null);
    }

    public void setType(String type) {
        this.type = type;
    }

    public void setDirection(String direction) {
        this.direction = direction;
    }
}

TraversalDescription.java

package com.aranin.spring.neo4j;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 7/20/13
 * Time: 9:18 PM
 * To change this template use File | Settings | File Templates.
 */

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TraversalDescription {

    public static final String DEPTH_FIRST = "depth first";
    public static final String NODE = "node";
    public static final String ALL = "all";

    private String uniqueness = NODE;
    private int maxDepth = 1;
    private String returnFilter = ALL;
    private String order = DEPTH_FIRST;
    private List<Relationship> relationships = new ArrayList<Relationship>();

    public void setOrder(String order) {
        this.order = order;
    }

    public void setUniqueness(String uniqueness) {
        this.uniqueness = uniqueness;
    }

    public void setMaxDepth(int maxDepth) {
        this.maxDepth = maxDepth;
    }

    public void setReturnFilter(String returnFilter) {
        this.returnFilter = returnFilter;
    }

    public void setRelationships(Relationship... relationships) {
        this.relationships =  Arrays.asList(relationships);
    }

    public String toJson() {
        StringBuilder sb = new StringBuilder();
        sb.append("{ ");
        sb.append(" \"order\" : \"" + order + "\"");
        sb.append(", ");
        sb.append(" \"uniqueness\" : \"" + uniqueness + "\"");
        sb.append(", ");
        if (relationships.size() > 0) {
            sb.append("\"relationships\" : [");
            for (int i = 0; i < relationships.size(); i++) {
                sb.append(relationships.get(i).toJsonCollection());
                if (i < relationships.size() - 1) { // skip the final comma
                    sb.append(", ");
                }
            }
            sb.append("], ");
        }
        sb.append("\"return filter\" : { ");
        sb.append("\"language\" : \"builtin\", ");
        sb.append("\"name\" : \"");
        sb.append(returnFilter);
        sb.append("\" }, ");
        sb.append("\"max depth\" : ");
        sb.append(maxDepth);
        sb.append(" }");
        return sb.toString();
    }
}

Now with these in place, let's check how to query the Neo4j database. We will create a searchDatabase method for that.

public String searchDatabase(String nodeURI, String relationShip){
        String output = null;

        try{

            TraversalDescription t = new TraversalDescription();
            t.setOrder( TraversalDescription.DEPTH_FIRST );
            t.setUniqueness( TraversalDescription.NODE );
            t.setMaxDepth( 10 );
            t.setReturnFilter( TraversalDescription.ALL );
            t.setRelationships( new Relationship( relationShip, Relationship.OUT ) );

            System.out.println(t.toJson());
            HttpClient client = new HttpClient();
            PostMethod mPost = new PostMethod(nodeURI + "/traverse/node");

            /**
             * set headers
             */
            mPost.addRequestHeader("Content-Type", "application/json");
            mPost.addRequestHeader("Accept", "application/json");

            /**
             * set json payload
             */
            StringRequestEntity requestEntity = new StringRequestEntity(t.toJson(),
                                                                        "application/json",
                                                                        "UTF-8");
            mPost.setRequestEntity(requestEntity);
            int status = client.executeMethod(mPost);
            output = mPost.getResponseBodyAsString();
            mPost.releaseConnection();
            System.out.println("status : " + status);
            System.out.println("output : " + output);
        }catch(Exception e){
            System.out.println("Exception in traversing nodes in neo4j : " + e);
        }

        return output;
    }

Experience the power of neo4j

Now we have all the methods in place for creating nodes, adding properties to nodes, adding relationships, adding properties to relationships and querying the database. Let's create a program that performs all these activities. We will create a main method in the following way.

Relationship relationship;
    final String SERVER_ROOT_URI = "http://localhost:7474";

    private static enum RelTypes implements RelationshipType
    {
        KNOWS,friend;
    }

    public static void main(String[] args){
        Neo4jHello neo4jHello = new Neo4jHello();

        /**
         * check if server is running
         */
        int status = neo4jHello.getServerStatus();

        System.out.println("neo4j server status : " + status);

        /**
         * create a node
        */

        String firstNodeLocation = neo4jHello.createNode();

        String secondNodeLocation = neo4jHello.createNode();

        /**
         * add properties to node
         */

        //neo4jHello.addProperty("http://localhost:7474/db/data/node/1", "name" , "Niraj");
        //neo4jHello.addProperty("http://localhost:7474/db/data/node/2", "name" , "Manisha");

        neo4jHello.addProperty(firstNodeLocation, "name" , "Niraj");
        neo4jHello.addProperty(secondNodeLocation, "name" , "Manisha");

        /**
         *  add relationship between nodes
         */
        String relationAttributes = "{ "married" : "yes","since" : "2005" }";
        String relationShipURI = neo4jHello.addRelationship("http://localhost:7474/db/data/node/1",
                                                            "http://localhost:7474/db/data/node/2",
                                                            "friend",
                                                            relationAttributes);

        /**
         * add properties to relationship
         */

         neo4jHello.addPropertyToRelation(relationShipURI, "weight", "5");

        /**
         * finally traverse all the nodes starting from node 1
         */

        neo4jHello.searchDatabase(firstNodeLocation, "friend");

    }

References

  1. http://www.neo4j.org/learn
  2. http://grepcode.com/snapshot/repo1.maven.org/maven2/org.neo4j.examples/neo4j-server-examples/1.9.M04/
  3. http://en.wikipedia.org/wiki/Neo4j
  4. http://www.javaworld.com/javaworld/jw-02-2013/130204-how-neo4j-beat-oracle-db.html
  5. https://github.com/neo4j
  6. http://java.dzone.com/articles/10-caveats-neo4j-users-should

That is all folks. Hope you enjoyed this post and found it useful. Don’t forget to drop a comment or two to keep me encouraged.

Warm Regards

Niraj


Searching made easy with Apache Lucene 4.3

Lucene is a full text search engine written in Java which can lend powerful search capabilities to any application. At the heart of Lucene lies a file-based full text index. Lucene provides APIs to create this index and then add content to and delete content from it. Further, it allows search and retrieval of information from this index using powerful search algorithms. The data stored can be pulled from disparate sources like databases, the filesystem, as well as websites. Before beginning, let us ponder a few terms.

Inverted Index

An inverted index is a data structure which stores a mapping from a piece of content to the location of the objects that contain that content. To make it clearer, here are some examples:

  1. Book index – The index of a book contains important words and the pages that contain those words. So a book index helps us navigate to the pages that contain a particular word.
  2. Listing of wines by price range – The price range is the content and the wine name is the object that has that price range.
  3. Web index – Listing of website addresses by keywords, for example a list of all webpages containing the keywords "Apache Lucene".
  4. Shopping cart – Listing of items in a shopping cart by category.

Faceted Search

Any object can have multiple properties, and each of these properties is a facet of that object. Faceted search, also known as faceted navigation or faceted browsing, allows us to search for a collection of objects based on multiple facets, provided the information is organized according to a faceted classification structure.

Consider the example of an item in a shopping cart. An item can have multiple facets like category, title, price, color and weight. A faceted search would allow us to search for all items that are in the garden category, are red in color and are in the price range of Rs.30 to Rs.40.

Lucene provides an API:

  1. To create an inverted index.
  2. To store information according to a faceted classification.
  3. To retrieve information using faceted search.

All of the above makes Lucene a super-fast search engine which returns highly relevant search results.

Lucene Features

  1. Relevance-ranked search.
  2. Phrase, proximity and wildcard search.
  3. Pluggable analyzers.
  4. Faceted search.
  5. Field-based sorting.
  6. Range queries.
  7. Multiple index searching.
  8. Fast indexing (150GB/hour).
  9. Easy backup and restore.
  10. Small RAM requirements.
  11. Incremental addition and fast searches.

For the full list visit here:

http://lucene.apache.org/core/features.html

Lucene Concepts and Terminologies

  1. Indexing – Indexing involves adding a document to the Lucene index with the help of a class called IndexWriter.
  2. Searching – Searching involves retrieval of a document from the Lucene index with the help of a class called IndexSearcher.
  3. Document – A Lucene document is a single unit of search and index, for example an item in a shopping cart. A Lucene index can contain millions of documents.
  4. Fields – Fields are properties of a document. In other words, fields are the facets of the document, which is an object, for example the category of an item in a shopping cart. Each document can have multiple fields.
  5. Queries – Lucene has its own query language. This allows us to search for documents based on multiple fields. We can assign weight to a field and also use boolean operators like AND and OR in the query. For example: return all items in the cart which belong to the category garden or home, have the color red and have a price less than Rs.1000.
  6. Analyzers – When a field's text is to be indexed, it first needs to be converted into its most basic form: the text is tokenized and the tokens are lowercased, singularized and stripped of punctuation. These tasks are performed by analyzers. Analyzers are complicated, and using them well requires some deep study. Often the built-in analyzers don't suffice for a requirement, in which case we can create a new one. For this tutorial we will use StandardAnalyzer, as it contains most of the basic features we require (a short sketch of its effect follows this list).
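
Here is a small sketch of what StandardAnalyzer produces for a field value (my own illustration, not part of the tutorial project):

import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class AnalyzerDemo {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43);
        TokenStream stream = analyzer.tokenStream("title", new StringReader("The Lord of the Rings"));
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.println(term.toString()); // prints lowercased terms with stop words removed: "lord", "rings"
        }
        stream.end();
        stream.close();
    }
}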

Tutorial objective

  1. Create a Lucene index.
  2. Insert book records into it.
  3. Perform various kinds of searches on this index.

The book item will have the following facets:

  1. Book Title (String)
  2. Book Author (String)
  3. Book Category (String)
  4. #Pages (int)
  5. Price (float)

The code for this tutorial has been committed to SVN. It can be checked out from

https://www.assembla.com/code/weblog4j/subversion/nodes/24/SpringDemos/trunk

This is an extended project with more tutorials. The Lucene classes are in the com.aranin.spring.lucene package:

  1. LuceneUtil – This class contains utility methods to create the index and to create the IndexWriter and IndexSearcher.
  2. MySearcherManager – This class uses LuceneUtil and performs searches on the index.
  3. MyWriterManager – This class uses LuceneUtil and performs writes to the index.

Step by step walk-through

1. Dependencies – The dependencies can be added via Maven:

<dependency>
        <artifactId>lucene-core</artifactId>
        <groupId>org.apache.lucene</groupId>
        <type>jar</type>
        <version>${lucene-version}</version>
      </dependency>

      <dependency>
        <artifactId>lucene-queries</artifactId>
        <groupId>org.apache.lucene</groupId>
        <type>jar</type>
        <version>${lucene-version}</version>
      </dependency>

      <dependency>
        <artifactId>lucene-queryparser</artifactId>
        <groupId>org.apache.lucene</groupId>
        <type>jar</type>
        <version>${lucene-version}</version>
      </dependency>

      <dependency>
        <artifactId>lucene-analyzers-common</artifactId>
        <groupId>org.apache.lucene</groupId>
        <type>jar</type>
        <version>${lucene-version}</version>
      </dependency>

      <dependency>
        <artifactId>lucene-facet</artifactId>
        <groupId>org.apache.lucene</groupId>
        <type>jar</type>
        <version>${lucene-version}</version>
      </dependency>

2. Creating the index – The index can be created by creating an IndexWriter in create mode.

public void createIndex() throws Exception {

    boolean create = true;
    File indexDirFile = new File(this.indexDir);
    if (indexDirFile.exists() && indexDirFile.isDirectory()) {
       create = false;
    }

    Directory dir = FSDirectory.open(indexDirFile);
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43);
    IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_43, analyzer);

    if (create) {
       // Create a new index in the directory, removing any
       // previously indexed documents:
       iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
    }

    IndexWriter writer = new IndexWriter(dir, iwc);
    writer.commit();
    writer.close(true);
 }
  • indexDir is the directory where you want to create your index.
  • Directory is a flat list of files used for storing the index. It can be a RAMDirectory, an FSDirectory or a DB-based directory.
  • FSDirectory implements Directory and saves the index as files in the file system.
  • IndexWriterConfig.OpenMode opens the writer in CREATE, CREATE_OR_APPEND or APPEND mode. CREATE mode creates a new index or overwrites an existing one, so in the method above we only set it when the index directory does not already exist.
  • Calling the above method creates an empty index.

3. Writing to the index – Once the index is created we can write documents to it. That can be done as follows.

public void createIndexWriter() throws Exception {

     File indexDirFile = new File(this.indexDir);

     Directory dir = FSDirectory.open(indexDirFile);
     Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43);
     IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_43, analyzer);
     iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
     this.writer = new IndexWriter(dir, iwc);

}

The above method creates a writer in CREATE_OR_APPEND mode. In this mode, if the index already exists it will not be overwritten. Note that this method does not close the writer; it just creates it and stores it. Creating an IndexWriter is a costly operation, so we should not create a writer every time we have to write a document to the index. Instead we should create a pool of IndexWriters and use a thread system to take a writer from the pool, write to the index, and then return the writer to the pool.

public void addBookToIndex(BookVO bookVO) throws Exception {
     Document document = new Document();
     document.add(new StringField("title", bookVO.getBook_name(), Field.Store.YES));
     document.add(new StringField("author", bookVO.getBook_author(), Field.Store.YES));
     document.add(new StringField("category", bookVO.getCategory(), Field.Store.YES));
     document.add(new IntField("numpage", bookVO.getNumpages(), Field.Store.YES));
     document.add(new FloatField("price", bookVO.getPrice(), Field.Store.YES));
     IndexWriter writer =  this.luceneUtil.getIndexWriter();
     writer.addDocument(document);
     writer.commit();
 }

We don't create a writer in the code while inserting. Instead we use a pre-created writer which was stored as an instance variable. A sketch of the BookVO value object assumed by this code follows.
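
For reference, here is a minimal BookVO with just the accessors used above; the real class lives in the SVN project, so this sketch is only an assumption about its shape:

public class BookVO {
    private String book_name;
    private String book_author;
    private String category;
    private int numpages;
    private float price;

    public String getBook_name() { return book_name; }
    public void setBook_name(String book_name) { this.book_name = book_name; }
    public String getBook_author() { return book_author; }
    public void setBook_author(String book_author) { this.book_author = book_author; }
    public String getCategory() { return category; }
    public void setCategory(String category) { this.category = category; }
    public int getNumpages() { return numpages; }
    public void setNumpages(int numpages) { this.numpages = numpages; }
    public float getPrice() { return price; }
    public void setPrice(float price) { this.price = price; }
}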

4. Searching the index – This is again done in two steps: 1. creating an IndexSearcher, 2. creating a query and doing the search.

public void createIndexSearcher(){
    IndexReader indexReader = null;
    IndexSearcher indexSearcher = null;
    try{
         File indexDirFile = new File(this.indexDir);
         Directory dir = FSDirectory.open(indexDirFile);
         indexReader  = DirectoryReader.open(dir);
         indexSearcher = new IndexSearcher(indexReader);
    }catch(IOException ioe){
        ioe.printStackTrace();
    }

    this.indexSearcher = indexSearcher;
 }

Note – The analyzer used for searching should be the same as the one used to create the writer, as the analyzer determines the way data is stored in the index. Again, creating an IndexSearcher is a costly operation, hence it makes sense to pre-create a pool of IndexSearchers and use it in a similar way to the IndexWriter pool.

public List<BookVO> getBooksByField(String value, String field, IndexSearcher indexSearcher){
     List<BookVO> bookList = new ArrayList<BookVO>();

     try {
         BooleanQuery query = new BooleanQuery();
         query.add(new TermQuery(new Term(field, value)), BooleanClause.Occur.MUST);

         // alternatively, parse a query string with a QueryParser:
         // QueryParser parser = new QueryParser(Version.LUCENE_43, field, new StandardAnalyzer(Version.LUCENE_43));
         // Query query = parser.parse(value);
         int numResults = 100;
         ScoreDoc[] hits = indexSearcher.search(query, numResults).scoreDocs;
         for (int i = 0; i < hits.length; i++) {
              Document doc = indexSearcher.doc(hits[i].doc);
              bookList.add(getBookVO(doc));
         }

     } catch (IOException e) {
         e.printStackTrace();
     }

     return bookList;
}

The IndexSearcher was pre-created and passed to the method. The main part of searching is query formation, and Lucene supports lots of different kinds of queries (a combined example follows the list).

  1. TermQuery
  2. BooleanQuery
  3. WildcardQuery
  4. PhraseQuery
  5. PrefixQuery
  6. MultiPhraseQuery
  7. FuzzyQuery
  8. RegexpQuery
  9. TermRangeQuery
  10. NumericRangeQuery
  11. ConstantScoreQuery
  12. DisjunctionMaxQuery
  13. MatchAllDocsQuery

You can choose the appropriate queries for your searches. The query language syntax can be learned from here:

http://lucene.apache.org/core/old_versioned_docs/versions/2_9_1/queryparsersyntax.pdf

References

  1. http://lucene.apache.org/core/old_versioned_docs/versions/2_9_1/queryparsersyntax.pdf
  2. http://lucene.apache.org/core/old_versioned_docs/versions/3_1_0/api/all/org/apache/lucene/index/IndexWriterConfig.OpenMode.html
  3. http://lucene.apache.org/core/old_versioned_docs/versions/3_5_0/api/all/org/apache/lucene/store/FSDirectory.html
  4. https://today.java.net/pub/a/today/2003/07/30/LuceneIntro.html
  5. http://www.lucenetutorial.com/lucene-query-syntax.html
  6. http://lucene.apache.org/core/4_3_0/core/org/apache/lucene/search/Query.html

Summary

Search remains the backbone of any content-driven application. Traditional DB-driven searches are not very powerful and leave a lot to be desired, so there is a need for a fast, accurate and powerful search solution which can be easily incorporated into application code. Lucene beautifully fills that gap: it makes search a breeze and is backed by a powerful array of search features like relevance ranking and phrase, wildcard, proximity and range search. It is also space and memory efficient. No wonder so many applications have been built on top of Lucene. This article intends to provide a basic tutorial empowering dear readers with the tools to get started with Lucene. There is a lot more to be said, but then don't you want to explore some on your own :-)?

If you find this article useful please drop a comment or two.

Warm Regards

Niraj


Simple Spring Memcached – Spring Caching Abstraction and Memcached

Caching remains one of the most basic performance-enhancing mechanisms in any read-heavy database application. The Spring 3.1 release came with a cool new feature called cache abstraction. The Spring cache abstraction provides application developers an easy, transparent and decoupled way to plug in any caching solution. Memcached is one of the most popular distributed caching systems used across apps. In this post we will focus on how to integrate memcached with a Spring-enabled application. Since Spring directly supports only Ehcache and ConcurrentHashMap, we will fall back on a third-party library, Simple Spring Memcached, to leverage the power of the Spring caching abstraction.

Getting The Code

Code for this tutorial can be downloaded from the following SVN location: https://www.assembla.com/code/weblog4j/subversion/nodes/24/SpringDemos/trunk. For the tutorial to work, please create the following table in your DB, then modify the datasource in springcache.xml.

CREATE  TABLE IF NOT EXISTS `adconnect`.`books` (
  `book_id` INT NOT NULL AUTO_INCREMENT ,
  `book_name` VARCHAR(500) NULL ,
  `book_author` VARCHAR(500) NULL ,
  `category` VARCHAR(500) NULL ,
  `numpages` INT NULL ,
  `price` FLOAT NULL ,
  PRIMARY KEY (`book_id`) )
ENGINE = InnoDB;

Integration Steps

1. Dependencies – I assume that you have your Hibernate, Spring and logging set up. To download the SSM dependencies, add the following to your POM. For the full set of dependencies please download the project from the SVN URL above.

<dependency>
     <groupId>com.google.code.simple-spring-memcached</groupId>
     <artifactId>spring-cache</artifactId>
     <version>3.1.0</version>
</dependency>

<dependency>
     <groupId>com.google.code.simple-spring-memcached</groupId>
     <artifactId>xmemcached-provider</artifactId>
     <version>3.1.0</version>
</dependency>

2. Enable caching – To enable caching in your Spring application, add the following to your Spring context XML.

<cache:annotation-driven/>

3. Configure Spring to enable memcached-based caching – Add the following to your application context XML.

<bean name="cacheManager" class="com.google.code.ssm.spring.SSMCacheManager">
     <property name="caches">
         <set>
             <bean class="com.google.code.ssm.spring.SSMCache">
                 <constructor-arg name="cache" index="0" ref="defaultCache"/>
                 <!-- 5 minutes -->
                 <constructor-arg name="expiration" index="1" value="300"/>
                 <!-- @CacheEvict(..., "allEntries" = true) doesn't work -->
                 <constructor-arg name="allowClear" index="2" value="false"/>
             </bean>
         </set>
     </property>

    </bean>

<bean name="defaultCache" class="com.google.code.ssm.CacheFactory">
     <property name="cacheName" value="defaultCache"/>
     <property name="cacheClientFactory">
        <bean name="cacheClientFactory" class="com.google.code.ssm.providers.xmemcached.MemcacheClientFactoryImpl"/>
     </property>
     <property name="addressProvider">
         <bean class="com.google.code.ssm.config.DefaultAddressProvider">
            <property name="address" value="127.0.0.1:11211"/>
         </bean>
     </property>
     <property name="configuration">
         <bean class="com.google.code.ssm.providers.CacheConfiguration">
             <property name="consistentHashing" value="true"/>
         </bean>
     </property>

</bean>

SSMCacheManager extends org.springframework.cache.support.AbstractCacheManager, an abstract Spring class that acts as a manager for the underlying caches.

SSMCache implements org.springframework.cache.Cache – this is the actual wrapper around the underlying cache client API.

4. Annotation-driven caching – Spring uses annotations to mark a method as managed by the cache. These are the annotations defined by the Spring caching framework:

  1. @Cacheable – Marks a method whose results are to be cached. When a cacheable method is called, Spring first checks whether the result is already in the cache. If it is, the result is pulled from the cache; otherwise the method is executed and its result is stored.
  2. @CachePut – Methods marked with @CachePut are always executed and their results are pushed to the cache. You should not place both @CachePut and @Cacheable on the same method, as they have different behaviour: @CachePut causes the method to be executed every time, while @Cacheable causes it to be executed only when the result is not yet cached.
  3. @CacheEvict – Evicts objects from the cache. This is generally used when the underlying object is updated and the stale copy in the cache needs to be purged.
  4. @Caching – Groups multiple annotations of the same type on a single method (see the sketch after this list).
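
For completeness, here is a minimal sketch of @Caching. It assumes a hypothetical delete method on the same DAO that has to evict two entries at once: the single-book entry and the author's cached list (the method and the second key are illustrative, not part of the project above).

@Caching(evict = {
    @CacheEvict(value = "defaultCache", key = "new Integer(#bookVO.book_id).toString().concat('.BookVO')"),
    @CacheEvict(value = "defaultCache", key = "#bookVO.book_author.concat('.BookVOList')")
})
public void delete(BookVO bookVO) throws Exception {
    // hypothetical delete: both cached entries for this book are now stale,
    // and @Caching lets us declare two @CacheEvict rules on one method
    getSession().delete(bookVO);
}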

@Cacheable Demo 

@Cacheable(value = "defaultCache", key = "new Integer(#book_id).toString().concat('.BookVO')")
public BookVO get(int book_id) throws Exception {
    BookVO bookVO = null;
    try {
        Query query = getSession().createQuery("from BookVO bookVO where bookVO.book_id=:book_id");
        query.setLong("book_id", book_id);
        bookVO = (BookVO) query.uniqueResult();
    } catch (HibernateException he) {
        log.error("Error in finding a bookVO : " + he);
        throw new Exception("Error in finding bookVO for book_id : " + book_id, he);
    }
    return bookVO;
}

Please note the key attribute of the annotation. This is an example of Spring Expression Language (SpEL). You can use SpEL to create the memcached key according to your requirements. In this example I want a key of the form <book_id>.BookVO.

Another Example – Let's say I want to store the list of BookVOs by a given author. In that case I can build a unique key of the form <author_name>.BookVOList using the following key expression:

@Cacheable(value = "defaultCache", key = "#author.concat('.BookVOList')")
    public List<BookVO> getList(String author) throws Exception {

@CachePut Demo

@CachePut(value = "defaultCache", key = "new Integer(#bookVO.book_id).toString().concat('.BookVO')")
public BookVO create(BookVO bookVO) throws Exception {
    try {
        getSession().save(bookVO);
        getSession().flush();
    } catch (HibernateException he) {
        log.error("Error in inserting bookVO : " + he);
        throw new Exception("Error in inserting bookVO", he);
    }
    return bookVO;
}

@CachePut is useful when inserting data: the newly inserted object is put into the cache as soon as the insertion completes.

@CacheEvict Demo

@CacheEvict(value = "defaultCache", key = "new Integer(#bookVO.book_id).toString().concat('.BookVO')")
public BookVO update(BookVO bookVO) throws Exception {
    try {
        Query query = getSession().createQuery("update BookVO bookVO set bookVO.book_name=:book_name, bookVO.book_author=:book_author, bookVO.category=:category, bookVO.numpages=:numpages, bookVO.price=:price " +
                                               "where bookVO.book_id=:book_id");
        query.setString("book_name", bookVO.getBook_name());
        query.setString("book_author", bookVO.getBook_author());
        query.setString("category", bookVO.getCategory());
        query.setInteger("numpages", bookVO.getNumpages());
        query.setFloat("price", bookVO.getPrice());
        query.setLong("book_id", bookVO.getBook_id());
        query.executeUpdate();
    } catch (HibernateException he) {
        log.error("Error in updating bookVO : " + he);
        throw new Exception("Error in updating bookVO", he);
    }
    return bookVO;
}

References

  1. https://code.google.com/p/simple-spring-memcached/
  2. http://static.springsource.org/spring/docs/3.2.x/spring-framework-reference/html/cache.html
  3. http://static.springsource.org/spring/docs/3.2.x/spring-framework-reference/html/expressions.html
  4. http://static.springsource.org/spring/docs/3.1.0.M1/javadoc-api/index.html?org/springframework/cache/CacheManager.html
  5. http://doanduyhai.wordpress.com/2012/07/01/cache-abstraction-in-spring-3/
  6. http://viralpatel.net/blogs/cache-support-spring-3-1-m1/

That is all, folks. I hope you enjoyed the post; don't forget to leave some comments.

Warm Regards

Niraj


Amazon SQS – Listening To SQS Using Apache Camel The Spring DSL Way

In my previous post, Amazon SQS – Listening to amazon SQS queue using Apache Camel, we saw how we can leverage Apache Camel to listen to an Amazon SQS queue. We used the Java DSL to route messages from SQS and process them in an anonymous Processor class. While the example worked, it was a very basic one, and two cons were immediately visible.

  1. Manually starting the CamelContext – The CamelContext was initialized directly in code. While there is no problem with that, in a production system we would like it to happen automatically. For this we can take the help of Spring: we can register our CamelContext as a bean and load the beans during server startup using a Spring context listener.
  2. Tight coupling – We consumed the messages directly within our RouteBuilder class by creating an instance of Processor. Ideally, a RouteBuilder should not be used to process messages; it should only register endpoints, apply filters and chain endpoints. Any processing should be delegated to a separate class, so that we have clean, loosely coupled code.

In today’s post we will see how we can use bean binding for that purpose.

In today’s example we will use Spring and the Spring DSL to produce, route and consume messages from SQS. To set the agenda, we will perform the following steps to get a working Spring DSL listener.

  1. Send a message to SQS. We won't be using Camel for this, but it is a fairly simple task.
  2. Register our camelcontext in a spring config xml.
  3. Create a POJO to consume the SQS message and register it as spring bean.
  4. Create a custom routebuilder class with sqs and bean endpoints and register it with camelcontext in spring config file.
  5. Start the camel context and have fun.

Download the code for this tutorial from the following SVN location:

https://www.assembla.com/code/weblog4j/subversion/nodes/19/awsdemo/trunk

1. Maven dependencies – The following dependencies are required:

     <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <version>1.5.6</version>
     </dependency>
     <!-- concrete Log4J Implementation for SLF4J API-->
     <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.5.6</version>
     </dependency>

     <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-core</artifactId>
          <version>${spring.version}</version>
      </dependency>

      <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-web</artifactId>
          <version>${spring.version}</version>
      </dependency>

      <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-beans</artifactId>
          <version>${spring.version}</version>
      </dependency>

      <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-context</artifactId>
          <version>${spring.version}</version>
      </dependency>

      <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-jdbc</artifactId>
          <version>${spring.version}</version>
      </dependency>

      <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-orm</artifactId>
          <version>${spring.version}</version>
      </dependency>

       <dependency>
            <groupId>org.imgscalr</groupId>
            <artifactId>imgscalr-lib</artifactId>
            <version>4.2</version>
            <type>jar</type>
            <scope>compile</scope>
        </dependency>

        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk</artifactId>
            <version>1.3.33</version>
        </dependency>

        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-core</artifactId>
            <version>${camel-version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-spring</artifactId>
            <version>${camel-version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-aws</artifactId>
            <version>${camel-version}</version>
        </dependency>


2. Create a Spring config XML with a basic camelContext element in it. Let us name it camelconfig.xml. We will keep modifying this file until we have a working example.

<camelContext id="sqsContext" xmlns="http://camel.apache.org/schema/spring">
.....
</camelContext>

3. Create a bean processor class, com.aranin.aws.sqs.BeanProcessor, which will receive and process messages from the SQS queue. We have a single method in this bean:

public void processSQSMessage(Exchange exchange)

Exchange is a container for the message. It is created when a message is received by a consumer during the routing process. The CamelContext creates an Exchange when it receives a message and passes it to the bound bean, which is BeanProcessor in this case. Here is the complete class.

package com.aranin.aws.sqs;

import com.aranin.aws.s3.PhotoProcessor;
import org.apache.camel.Exchange;

import java.util.StringTokenizer;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 5/14/13
 * Time: 11:21 AM
 * To change this template use File | Settings | File Templates.
 */
public class BeanProcessor {

    public void processSQSMessage(Exchange exchange){
        System.out.println("processSQSMessage");
        String messagestring = exchange.getIn().toString();
        System.out.println("messagestring : " + messagestring);
        StringTokenizer photoTokenizer = new StringTokenizer(messagestring, ",");
        String source = null;
        String target = null;
        String path = null;

        source = photoTokenizer.nextToken();
        source = source.substring("Message: ".length());
        System.out.println("source : " + source);
        target = photoTokenizer.nextToken();
        path = photoTokenizer.nextToken();
        System.out.println("source : " + source);
        System.out.println("target : " + target);
        System.out.println("path : " + path);
        /**
         * generate thumbnail within a 150*150 container
         */
        PhotoProcessor.generateImage(path, source, target, 150);
    }
}

4. Register BeanProcessor in camelconfig.xml. You will have an entry like:

<bean id="sqsBeanProcessor" class="com.aranin.aws.sqs.BeanProcessor"/>

Please note that the id of the bean is “sqsBeanProcessor”.

5. Spring DSL – Register your routes in the camelContext. Once this is done your camelContext will look like:

<camelContext id="sqsContext" xmlns="http://camel.apache.org/schema/spring">
<route>
<from uri="aws-sqs://PhotoQueue?accessKey=abcd&amp;secretKey=abcd&amp;amazonSQSEndpoint=https://sqs.ap-southeast-1.amazonaws.com"/>
<to uri="bean:sqsBeanProcessor?method=processSQSMessage"/>
</route>
</camelContext>
  1. Here aws-sqs://PhotoQueue is the URI of the queue.
  2. aws-sqs tells the CamelContext to use the aws-sqs component to create an SQS endpoint.
  3. PhotoQueue is the name of the queue we are operating on.
  4. secretKey and accessKey are your Amazon API keys, which you should not share with anyone.
  5. amazonSQSEndpoint is the endpoint of the region where your queue resides.

The to URI is especially interesting: bean:sqsBeanProcessor?method=processSQSMessage.

  1. This tells the CamelContext that we have a bean named sqsBeanProcessor which will act as the consumer of the incoming message.
  2. This bean has a method named processSQSMessage which will consume the message. Camel is responsible for the parameter binding of the method: processSQSMessage takes an Exchange as its parameter, and Camel supplies it (see the sketch after this list).
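
As an aside, Camel's bean binding is flexible about the parameter type. If you would rather keep the POJO free of Camel imports altogether, Camel can convert the message body and bind it directly to the method parameter. A minimal sketch, assuming you only need the raw message text:

public class BeanProcessor {

    // Camel converts the incoming message body to a String and binds it here,
    // so this POJO has no compile-time dependency on the Camel API
    public void processSQSMessage(String body) {
        System.out.println("received : " + body);
    }
}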

6. Your whole camelconfig.xml will now look like:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
          http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
          http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

    <camelContext id="sqsContext" xmlns="http://camel.apache.org/schema/spring">
        <route>
            <from uri="aws-sqs://PhotoQueue?accessKey=abcd&amp;secretKey=abcd&amp;amazonSQSEndpoint=https://sqs.ap-southeast-1.amazonaws.com"/>
            <to uri="bean:sqsBeanProcessor?method=processSQSMessage"/>
        </route>
    </camelContext>

    <bean id="sqsRouter" class="com.aranin.aws.sqs.SQSBeanRouterBuilder"/>

    <bean id="sqsBeanProcessor" class="com.aranin.aws.sqs.BeanProcessor"/>

</beans>

7. Now we create a manager class, which we will name SpringCamelPhotoManager. It will help us load the beans and start the CamelContext.

package com.aranin.aws.sqs;

import com.aranin.aws.s3.PhotoFile;
import org.apache.camel.CamelContext;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.FileSystemXmlApplicationContext;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 5/14/13
 * Time: 11:30 AM
 * To change this template use File | Settings | File Templates.
 */

public class SpringCamelPhotoManager {
    public void startCamelServer() {
        try {
            ApplicationContext springcontext = new FileSystemXmlApplicationContext("D:/samayik/awsdemo/src/main/resources/camelconfig.xml");
            CamelContext context = springcontext.getBean("sqsContext", CamelContext.class);
            context.start();
            Thread.sleep(10000);
            context.stop();
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public void sendMessage(){
        AWSSimpleQueueServiceUtil awssqsUtil =   AWSSimpleQueueServiceUtil.getInstance();
        /**
         * 1. get the url for your photo queue
         */
        String queueUrl  = awssqsUtil.getQueueUrl(awssqsUtil.getQueueName());
        System.out.println("queueUrl : " + queueUrl);

        /**
         * 2. Add a photo to the queue to be processed
         */

        PhotoFile photo = new PhotoFile();
        photo.setImagePath("d:/vids");
        photo.setOrigName("Dock.jpg");
        photo.setTargetName("dock_thumb.jpg");

        /**
         * 3. set the photofile in queue for processing
         */

         awssqsUtil.sendMessageToQueue(queueUrl, photo.toString());
    }

    public static void main(String[] args){
        SpringCamelPhotoManager springCamelPhotoManager = new SpringCamelPhotoManager();

        /**
         * send a message
         */
        springCamelPhotoManager.sendMessage();

        /**
         * start camel as standalone and keep on receiving and processing messages asynchronously
         */
        springCamelPhotoManager.startCamelServer();

    }

}

If you check out the main method, you will see that we make two calls:

  • springCamelPhotoManager.sendMessage(); – This sends a message to the SQS queue.
  • springCamelPhotoManager.startCamelServer(); – This starts the CamelContext, which in turn starts listening to SQS and consumes the messages in the background.

8. Now you are ready – Supply your secret and access keys to the route, modify the file names in SpringCamelPhotoManager and run the class. You should see output like this:

com.aranin.aws.sqs.SpringCamelPhotoManager
log4j:WARN No appenders could be found for logger (com.amazonaws.auth.AWS4Signer).
log4j:WARN Please initialize the log4j system properly.
queueUrl : https://sqs.ap-southeast-1.amazonaws.com/282733326245/PhotoQueue
{MD5OfMessageBody: 2cf93320514509eb56e960f999b5cd1f, MessageId: ff683b8c-03bb-45ba-8c60-4937116593cf, }
processSQSMessage
messagestring : Message: Dock.jpg,dock_thumb.jpg,d:/vids
source : Dock.jpg
source : Dock.jpg
target : dock_thumb.jpg
path : d:/vids
Process finished with exit code 0

I hope you enjoyed this post. If it helped you in any way, please post some comments and luv to encourage me to write more. Till then, goodbye.

Warm Regards

Niraj Singh

Updating WordPress Application Installed on Godaddy

Currently weblog4j gets anywhere between 50 and 100 page views a day. This is a low number, but I am not worried very much as I have only 12 posts to my credit and a few of them really suck. In spite of this I, like any good ol' blogger, am obsessed with the traffic, so I decided to leverage social media to boost it. As a first step I did some research and decided to install the “Simple Facebook Connect” plugin. I dutifully created a Facebook app, saved the secret key and app id on my local system, and installed the plugin. But as soon as I activated the plugin I got an error/warning message:

[image: facebookconnect warning]

To give some background, weblog4j.com is a WordPress application hosted on GoDaddy, installed using GoDaddy's automatic application install feature. It runs on the Twenty Ten theme, which was modified a bit to give the site its current front end.

As per the above warning I had to upgrade my WordPress from 3.0 to 3.5.1 to get the chance to use the Facebook plugin. I did not have any local setup, and hence no code base on which to test the latest WordPress before pushing the files to the hosting server. This was a scary situation: I had no way to verify that the upgrade would be successful. To compound the problem, I saw a few posts saying that any Twenty Ten changes would be lost on update. So the bottom line was that I was about to release untested software directly onto a production system with just a few clicks of the mouse. After a lot of thinking I decided to go ahead with the upgrade, and here are my release steps.

1. Backup database

I installed a plugin called “WP-DBManager”. Once you install wp-dbmanager and activate it, you will get a warning: “Your backup folder MIGHT be visible to the public. To correct this issue, move the .htaccess file from wp-content/plugins/wp-dbmanager to /home/websitename/public_html/wp-content/backup-db”.

To fix this, simply SSH to your account and run the following command:

mv -i /html/wp-content/plugins/wp-dbmanager/htaccess.txt /html/wp-content/backup-db/.htaccess

This command takes htaccess.txt from /html/wp-content/plugins/wp-dbmanager/, moves it to /html/wp-content/backup-db and renames it to .htaccess.

http://wordpress.org/support/topic/db-backup-folder-visible-to-the-public

Once this error is fixed, log in to your WordPress admin console and look at the databases listed under WP-DBManager.

Back up your DB and download the zip file saved in the /html/wp-content/backup-db folder. You can use any FTP client for this purpose.

2. Backup files

It is very important to back up your files, as they may be overwritten during the upgrade process. GoDaddy keeps your files in history, but to be on the safer side you can archive the files and download them to your computer. Here is how it all works.

  • Log in to your GoDaddy console and click on the launch center of your blog.
  • Click on the file manager in the console.
  • Select all the directories and files in your file manager and click on the archive link.
    [image: archiving on godaddy]

  • If your site is larger than 20 MB then you will have to archive the files in chunks.
  • Now open any FTP client of your choice and create an FTP connection to your GoDaddy server.
  • The host name is the IP address of the server, shown in the lower part of the right rail of your launch center.
    [image: ipaddress]

  • The username and password are the same as the WordPress admin user you set up during the WordPress install.
  • Save the archives on your laptop.

With the backup of files and database done, you are ready to set sail. In case of a problem you can simply roll back the files and database and your site will work as before.

3. Set your site in maintenance mode

Download a plugin which lets you put your site into upgrade downtime. There are tons of them available; I use “Dashboard Maintenance Mode”. Set your site in maintenance mode and move on.

4. Deactivate plugins.

Deactivate all the plugins, as per the WordPress Codex:

http://codex.wordpress.org/Updating_WordPress

5. Update php

– As we saw, we need PHP 5.0 or later for Simple Facebook Connect to work. You can upgrade PHP from your GoDaddy launch center. On the hosting console, search for the tool called “Programming Languages”. Click on it and you will see the current PHP versions listed. Choose 5.3, which is the latest on GoDaddy, and click on save.

6. Automatically update wordpress

– In your site admin, go to Dashboard -> Updates. There, click on automatic update, which will install the latest WordPress. I avoided updating the theme as I had made changes to the theme files directly, so I just updated the plugins and the WordPress version.

7. Enable all your plugins.

8. Disable the maintenance mode

and you are ready to rock and roll.

9. Finally do QA

on your blog to make sure that everything is working fine. It is possible that some of your plugins have become incompatible; those you will have to fix on your own.

I updated my blog to 3.5.1 today and had no issues whatsoever. I am now able to use the Simple Facebook Connect plugin on my site and am excited to see how it turns out.

That's all, folks; I hope some of you benefit from this post. Don't forget to show some comment luv, it will be greatly appreciated.

Till Then Good Bye and good luck

Niraj Singh

Leap Before you look. 🙂


Amazon SQS – Listening to amazon SQS queue using Apache Camel

In my previous post, Working with Amazon Simple Queue Service using java, I discussed how to post and retrieve messages from an Amazon SQS queue using the Amazon SDK. There was one major problem in the code: instead of listening to the queue, we used a simple thread which polled the queue and retrieved the messages. This takes the fun out of using message-oriented middleware. So let us take this one step further and create a listener which will listen to SQS and retrieve each message as soon as it is posted to the queue. We will be using Apache Camel for this purpose.

What is Apache Camel? For readers not familiar with it, Apache Camel is an open source Java framework based on Enterprise Integration Patterns. Camel is a rule-based routing and mediation engine written in Java and can be used across various transports and messaging models like HTTP, JMS queues, web services etc. Camel allows you to declare endpoints using URIs and facilitates the delivery of messages between the endpoints. For more background you can visit the following links:

  1. http://architects.dzone.com/articles/apache-camel-integration
  2. http://camel.apache.org/
  3. http://stackoverflow.com/questions/8845186/what-exactly-is-apache-camel
  4. http://camel.apache.org/tutorials.html

You can also buy the book Camel in Action.

Setting up SQS – Please revisit my previous post, Working with Amazon Simple Queue Service using java, to get started with SQS.

Once you are through with that post you should have the following:

  1. A live SQS queue called PhotoQueue.
  2. A working AWSSimpleQueueServiceUtil.java, which contains utility methods to connect to the queue and perform send, receive and delete operations on it.
  3. SQSPhotoManager.java, which makes the actual connection and sends photo-processing messages to the SQS queue. This class also defines and starts a thread which retrieves messages from SQS via polling.

The purpose of this tutorial is to replace SQSPhotoManager with a CamelPhotoManager which leverages Apache Camel as the mediator and router of photo-processing messages. CamelPhotoManager will be responsible for posting a message to SQS and then using the Camel framework to retrieve and process it.

Getting started part 2

1. Download Camel – It is always good to have Maven manage our dependencies, so you can add the following to your pom.xml. Your Camel version should be higher than 2.6, else the aws-sqs component will not work.

 <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-core</artifactId>
      <version>${camel-version}</version>
 </dependency>

 <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-spring</artifactId>
      <version>${camel-version}</version>
 </dependency>

<dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-aws</artifactId>
       <version>${camel-version}</version>
</dependency>

2. Creating the CamelContext – The CamelContext is the container which provides the Camel runtime system. The most important thing to know about the CamelContext is that it holds your routes; routes are the from and to endpoints of the message exchange. There are two ways of creating a CamelContext: one is registering it in Spring, the other is instantiating it in Java. I will be using Java for this post.

public void asyncProcess() {
    try {
        // create CamelContext
        SimpleRegistry registry = new SimpleRegistry();
        AWSSimpleQueueServiceUtil awssqsUtil = AWSSimpleQueueServiceUtil.getInstance();
        AmazonSQS sqsClient = awssqsUtil.getAWSSQSClient();
        registry.put("amazonSQSClient", sqsClient);
        CamelContext context = new DefaultCamelContext(registry);

        // add our route to the CamelContext
        context.addRoutes(new MySQSRouterBuilder());

        context.start();
        Thread.sleep(100000);
        context.stop();
    } catch (Exception e) {
        System.out.println(e);
    }
}
  • As a first step we create a SimpleRegistry in which we store the Amazon SQS client instance. You can get the code for AWSSimpleQueueServiceUtil from my previous post Working with Amazon Simple Queue Service using java.
  • Then we create a CamelContext and pass the registry to it: CamelContext context = new DefaultCamelContext(registry);
  • Then we register our route with the CamelContext.
  • Finally we start the CamelContext using context.start().

3. Creating the Route – RouteBuilder is the base class for implementing routing rules using the DSL. We have to extend RouteBuilder and add an instance of it to the CamelContext. A complete discussion of Camel routes is beyond the scope of this post. We could create the route using the Spring DSL, but for simplicity we will be using the Java DSL.

package com.aranin.adconnect.util.aws;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

import java.util.StringTokenizer;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 4/12/13
 * Time: 12:42 PM
 * To change this template use File | Settings | File Templates.
 */
public class MySQSRouterBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        try{
            //Properties properties = new Properties();
            //properties.load(new FileInputStream("D:/samayik/adkonnection/src/main/resources/AwsCredentials.properties"));
            String sqs = "aws-sqs://PhotoQueue?amazonSQSClient=#amazonSQSClient";
            from( sqs).process(new Processor() {
                public void process(Exchange exchange)
                        throws Exception {

                    String messagestring = exchange.getIn().toString();
                    System.out.println("messagestring : " + messagestring);
                    StringTokenizer photoTokenizer = new StringTokenizer(messagestring, ",");
                    String source = null;
                    String target = null;
                    String path = null;

                    source = photoTokenizer.nextToken();
                    source = source.substring("Message: ".length());
                    System.out.println("source : " + source);
                    target = photoTokenizer.nextToken();
                    path = photoTokenizer.nextToken();
                    System.out.println("source : " + source);
                    System.out.println("target : " + target);
                    System.out.println("path : " + path);
                    /**
                     * generate thumbnail within a 150*150 container
                     */
                    PhotoProcessor.generateImage(path, source, target, 150);

                }
            });

        }catch(Exception e){
            // don't swallow configuration errors silently
            System.out.println("exception while configuring route : " + e);
        }
    }

}

There are three things to highlight here:

  1. The class extends RouteBuilder and overrides the configure method.
  2. The from route contains the URI for our SQS queue, which is aws-sqs://PhotoQueue?amazonSQSClient=#amazonSQSClient.
  3. The call to the process method, where we pass an instance of org.apache.camel.Processor. Here we are using an anonymous class, but you can just as well implement the Processor interface in a named class.

The URI for an SQS queue is

aws-sqs://queue-name[?options]

Options are the parameters we want to pass to SQS when making the connection. We can either pass the access and secret keys directly, or pass in a reference to an AmazonSQS client instance registered in the Camel registry. For a complete reference please visit http://camel.apache.org/aws-sqs.html
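
For example, both of the following endpoint URIs point at our PhotoQueue (the keys are placeholders). The first form lets Camel build the SQS client itself from the keys; the second reuses the client instance we registered in the SimpleRegistry above.

aws-sqs://PhotoQueue?accessKey=xxx&secretKey=xxx
aws-sqs://PhotoQueue?amazonSQSClient=#amazonSQSClient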

Tying things up. 

We saw the asyncProcess method, which instantiates and starts the CamelContext. Here is the full CamelPhotoManager class, with a main method which sends a message to the SQS queue and retrieves it via Camel.

CamelPhotoManager class

package com.aranin.adconnect.util.aws;

import com.amazonaws.services.sqs.AmazonSQS;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 4/12/13
 * Time: 3:27 PM
 * To change this template use File | Settings | File Templates.
 */
public class CamelPhotoManager {
    public void asyncProcess() {
        try {
            // create CamelContext
            SimpleRegistry registry = new SimpleRegistry();
            AWSSimpleQueueServiceUtil awssqsUtil = AWSSimpleQueueServiceUtil.getInstance();
            AmazonSQS sqsClient = awssqsUtil.getAWSSQSClient();
            registry.put("amazonSQSClient", sqsClient);
            CamelContext context = new DefaultCamelContext(registry);

            // add our route to the CamelContext
            context.addRoutes(new MySQSRouterBuilder());

            context.start();
            Thread.sleep(10000);
            context.stop();
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public void sendMessage(){
        AWSSimpleQueueServiceUtil awssqsUtil =   AWSSimpleQueueServiceUtil.getInstance();
        /**
         * 1. get the url for your photo queue
         */
        String queueUrl  = awssqsUtil.getQueueUrl(awssqsUtil.getQueueName());
        System.out.println("queueUrl : " + queueUrl);

        /**
         * 2. Add a photo to the queue to be processed
         */

        PhotoFile photo = new PhotoFile();
        photo.setImagePath("d:/vids");
        photo.setOrigName("Dock.jpg");
        photo.setTargetName("dock_thumb.jpg");

        /**
         * 3. set the photofile in queue for processing
         */

         awssqsUtil.sendMessageToQueue(queueUrl, photo.toString());
    }

    public static void main(String[] args){
        CamelPhotoManager camelPhotoManager = new CamelPhotoManager();

         /**
         * send a message
         */

        camelPhotoManager.sendMessage();

         /**
         * start camel as standalone and keep on receiving and processing messages asynchronously
         */

         camelPhotoManager.asyncProcess();

    }

}

Run this class and see Camel in action. One point to note: Camel is still polling to retrieve the message, but it could be changed into an asynchronous receiver. For that you would need to change your RouteBuilder along the following lines (this is only a sketch of the API shape, not working code):

from(SqsEndpoint(String uri, SqsComponent component, SqsConfiguration configuration)).createConsumer(Processor)

I am still researching that, and if any of my readers can crack it, please let me know. I would love to see that solution.

So here it is: we have created our own SQS receiver powered by Apache Camel. But we have only scratched the surface; there is so much more we can do with Camel and SQS. I really struggled to get a working tutorial in place. The first thing I wanted to use was the Spring DSL, then I wanted to use the SQSConsumer which comes with the Camel SQS API, but I could not make them work fast enough (simple POCs should not take more than one day, in my opinion). So I would love to hear what breakthroughs my dear readers come up with. With this tutorial I hope that at least I can get some of you started.

Please share your comments for this post. I would love to hear from you.

Regards

Niraj


Introduction to Cloud Computing

Cloud Computing is the delivery of computing as a service rather than as a product. I will come back to this after a few lines of introduction. Till then, stay put.

Introduction

In the very recent past, if a company decided to create and host an application they would need to do the following:

Traditional infrastructure

  1. Create a physical infrastructure layer – They will have to buy hundreds of servers, networking equipment and storage devices on which to build their infrastructure. On top of that they will have to buy space to accommodate the servers and equipment, power with backup, and cooling. They will also need to upgrade their servers every few years.
  2. Create a platform layer – To utilize the hardware resources they will have to acquire software like OSes, web servers, application servers, databases, CDN, caching etc. This software will have to be installed on each group of servers and maintained regularly.
  3. Create the application – Then the company will have to hire a dedicated team of developers to develop the actual system to be installed on the platform created above.

This is the traditional way of deploying applications, and it has lots of drawbacks. Let us dig into a few of them.

  1. Large upfront cost – The company has to procure all the hardware for the infrastructure even before the application is developed. The cost of this procurement can run to the tune of hundreds of millions of dollars. You can throw in more millions to buy space, power, cooling, security etc.
  2. Hiring an infrastructure team – They will have to hire a highly qualified team of infrastructure engineers to create and maintain such a complex infrastructure. This adds to the overall upfront and recurring cost of deploying the application.
  3. Software licensing cost – The software required to deploy the application on the infrastructure needs to be installed, and the licensing costs of such software are huge.
  4. Resource wastage – Companies plan in advance for how much peak traffic they will get and purchase hardware based on that peak. Most of the time the traffic is much less than the peak, so a large amount of resources in terms of CPU, bandwidth, memory etc. remains underutilized and hence wasted.
  5. Scaling not guaranteed – Even with such a huge infrastructure it is not guaranteed that the servers will scale up if there is a sudden spurt of traffic (say 100 times more).
  6. Less fail-over capability – The servers are physically present in a single datacenter. If there is a problem with the datacenter, like fire, earthquake or tsunami, the datacenter is bound to go down and the application will go down with it. So the fail-over capability is reduced.
  7. Production-level testing not possible – The testing and development environments are scaled-down versions of production, because the cost of creating and maintaining a duplicate of production can be prohibitive. So the application is never actually tested in production settings.

To summarize the problem:

Infrastructure is not the core business.

So, all in all, the traditional way of delivering applications is not that great. I would not be wrong if I stated that 70-80% of the effort, in terms of time and cost, is spent just creating and maintaining the infrastructure and platform on which the final application will run. If it were me, I would rather spend that time developing my application and adding new features to it. This sucks!

Everything looks grim? Not quite 🙂 For the last few years there has been a lot of talk about Cloud Computing, and Cloud Computing is the answer to all the problems listed above.

What is cloud computing?

Simply put, Cloud Computing is the delivery of computing as a service rather than as a product.

We know that computing is the sum of CPU, memory, storage, network and bandwidth, which, coupled with the necessary software, provides an ecosystem where our applications can live. In the traditional approach, computing had to be assembled from scratch or bought from data-center businesses, and we have already seen the disadvantages of that approach. Cloud Computing caters to these very pain points: it promises to deliver computing as a service, and it has indeed delivered on that promise. Today you will find thousands of companies which claim to have cloud offerings, and why not? Cloud Computing is the new cool kid in town, and players want to take all the advantage they can get by claiming a cloud offering. But how true are their claims? A genuine cloud service has certain characteristics.

  1. Offered online and self-serving – The cloud product should be offered online, and a user should be able to access the entire range of services from the browser without any human intervention.
  2. Pay-per-use model – There should be no upfront cost for using the cloud services. Users should pay only for the amount of resources they have used.
  3. Scalability on demand – The cloud should allow users to easily scale their applications up and down with a few clicks of the mouse.
  4. Elasticity on demand – The cloud should allow users to quickly acquire and discard resources. This should be easy to do and available online.
  5. Delivered over internet protocols – Any cloud service should be deliverable over internet protocols like HTTP, REST, SOAP etc.

Now we can check who the genuine cloud players are. If you need to call a company's service desk to create an account with a cloud product, it is not a cloud product. If you need to call the service desk to enable any service for your account, it is not a cloud product.

Cloud Segments:

As we saw, there are three distinct layers to any application infrastructure: the physical layer, the platform layer, and the application itself. Cloud products fall into exactly these three layers, and thus cloud services are classified into three segments.

  1. Infrastructure as a Service or IaaS – This is the delivery of hardware, i.e. CPU, memory, storage and networking, as a service. In other words, users can provision these hardware resources without buying the actual devices; they can use the online application provided by the cloud to get them delivered as a service. For example, Amazon EC2 allows users to create compute units with the desired OS, RAM, CPU and storage via the console, so Amazon EC2 is an IaaS. Other examples are Google Compute Engine, AWS S3, AWS ELB, Route 53, IBM SmartCloud and Microsoft Azure IaaS.
  2. Platform as a Service or PaaS – Platform as a Service provides, as a service, the software environment required to run an application. For example, to run a simple Java web application you need to install Tomcat on your computer, and you may also need a database. So a computer with Tomcat and a database becomes a platform on which a Java web application can run. Any cloud service which delivers this platform over the network can be classified as PaaS. An example is Amazon Elastic Beanstalk: Beanstalk comes pre-configured with Apache/Tomcat/PHP and a database, and provides APIs with which you can deploy your application without worrying about what is installed underneath. Examples of PaaS offerings are Amazon Elastic Beanstalk, ElastiCache, SQS, SES, SNS, Google App Engine, Microsoft Azure and VMware Cloud Foundry.
  3. Software as a Service or SaaS – Last but not least we have SaaS, which delivers an application as a service. For example, http://zencoder.com/ is a SaaS application which delivers a video transcoding service over the web. As a user all you need to do is create an account and access the service. You don't pay any money upfront; you pay only for the resources you consume while transcoding. Some examples of SaaS are Google Apps, salesforce.com, Amazon Elastic Transcoder, Kikapps, Gigya and Janrain.

The image below shows the various segments of cloud products and the major players/products operating in each space.

Mapping Cloud Segments with Traditional Infrastructure

So now we see how the different layers of cloud computing map to the different layers of traditional infrastructure:

Physical Layer = Infrastructure as a Service (IaaS)

Platform Layer = Platform as a Service (PaaS)

Application Layer = Software as a Service (SaaS)

Various Advantages of Cloud Computing

  1. No upfront cost of buying and maintaining hardware resources.
  2. No upfront cost of buying software licences, though the cost of using Oracle in the cloud may be more than using MySQL in the cloud.
  3. Pay only for the resources used.
  4. Scale up and down on demand, so fewer resources are wasted and more resources are available instantly when required.
  5. More available – Cloud datacenters are generally deployed in different places and on different continents. It is easy to replicate your application in more than one location, so if your primary application goes down you can easily route your traffic to the backup datacenter.
  6. Easy to take backups – Since resources are cheap, it is easy and cost effective to take backups.
  7. Testing on production-level infrastructure – You can easily scale your test environment to have as many servers as production. After testing is done you can release the extra servers, paying only for the duration for which you used them. Easy and cost effective.
  8. Security – Cloud datacenters go to extremes when it comes to physical security. They also make sure that data integrity and confidentiality are maintained at all costs, and that your service is available 24/7. Maintaining this kind of security in a traditional datacenter is very costly and difficult.
  9. Creating and maintaining the infrastructure and deploying the application on it is much easier than on traditional servers. This matters even more if you are a start-up without a team of qualified infrastructure engineers to guide you through it.

Cloud deployment models – Based on the deployment model, clouds can be classified in the following way:

  1. Public Cloud – Clouds in which the provider offers the cloud infrastructure to the general public over the internet, either free or pay-per-use.
  2. Private Cloud – Clouds maintained solely for a single organization. They can be created by the organization itself or by a third party.
  3. Community Cloud – When two or more organizations have similar cloud requirements, they can club together to create a shared cloud infrastructure. These kinds of clouds are called community clouds.
  4. Hybrid Cloud – A combination of more than one type of cloud. For example, if a private cloud needs more resources, it can collaborate with a public cloud player like Amazon and avail itself of their resources for any duration of time. Such clouds are called hybrid clouds.

I was always skeptical of the cloud, as I did not understand what it was. But last month I had an opportunity to attend an AWS workshop. I was able to create an infrastructure model for my application using EC2, RDS, ElastiCache, S3 and Elastic Load Balancer, and after that I was able to deploy and scale my application with a few clicks of the mouse. It was amazing, almost magical. In my mind the cloud has arrived with a bang; slowly but surely I feel that most applications will move to the cloud, and it is only a matter of time before this happens. When the US government can host their apps on a private cloud maintained by Amazon, there should be something to it. Right?

When I think about this I remember a series of short stories written by Isaac Asimov about a giant fictional supercomputer called Multivac. Multivac was something like the cloud, only more: apart from having cloud-like characteristics it was also an AI. As the series went on, Multivac evolved into the Cosmic AC, which resided in hyperspace. For dear readers, here is the story: http://filer.case.edu/dts8/thelastq.htm. Are we moving in that direction? I don't know; I am only an average human.

Please let me know what you think about this article. Feel free to post questions; together we can answer some of them.

Happy Reading,

Regards

Niraj

Working with Amazon Simple Queue Service using java

Amazon Simple Queue Service, or SQS, is a highly scalable hosted messaging queue provided by the Amazon Web Services stack. Amazon SQS can be used to completely decouple the operations of different components within a system which would otherwise exchange data directly to perform their tasks. Amazon SQS also helps us preserve data which would be lost if the application went down or one of the components became unavailable.

Amazon SQS Features (copied directly from the Amazon website)

  1. Redundant infrastructure—Guarantees delivery of your messages at least once, highly concurrent access to messages, and high availability for sending and retrieving messages
  2. Multiple writers and readers—Multiple parts of your system can send or receive messages at the same time. SQS locks the message during processing, keeping other parts of your system from processing the message simultaneously.
  3. Configurable settings per queue—All of your queues don’t have to be exactly alike. For example, one queue can be optimized for messages that require a longer processing time than others.
  4. Variable message size—Your messages can be up to 65536 bytes (64 KiB) in size. For even larger messages, you can store the contents of the message using the Amazon Simple Storage Service (Amazon S3) or Amazon SimpleDB and use Amazon SQS to hold a pointer to the Amazon S3 or Amazon SDB object. Alternately, you can split the larger message into smaller ones.
  5. Access control—You can control who can send messages to a queue, and who can receive messages from a queue
  6. Delay Queues—A delay queue is one which the user sets a default delay on a queue such that delivery of all messages enqueued will be postponed for that duration of time. You can set the delay value when you create a queue with CreateQueue, and you can update the value with SetQueueAttributes. If you update the value, the new value affects only messages enqueued after the update.
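
As a quick illustration of the Delay Queues point above, a default delay can be applied to an existing queue through SetQueueAttributes. A minimal sketch, assuming the AmazonSQS client and queue URL from the utility class later in this post, the usual java.util and com.amazonaws.services.sqs.model imports, and a hypothetical 60-second delay:

Map<String, String> attributes = new HashMap<String, String>();
// every message enqueued after this call has its delivery postponed by 60 seconds
attributes.put("DelaySeconds", "60");
sqs.setQueueAttributes(new SetQueueAttributesRequest(queueUrl, attributes));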
With the above knowledge in place, let us try to use SQS to create a simple photo-processing service.

Problem definition for the tutorial

We will be creating a simple photo-processing application with the following components.

1. Photo uploader service – This is a web service which allows users to upload a photo to the system. Once the photo is uploaded it is stored in temporary storage. To keep things simple we will assume that the user has already uploaded the photo and it is stored in a predefined location.

2. AWSSimpleQueueServiceUtil – A utility class which wraps an Amazon SQS client and performs basic CRUD operations on the SQS queue.

3. PhotoProcessingManager – Manages the entire show. It invokes AWSSimpleQueueServiceUtil to send/receive messages to and from SQS, invokes PhotoProcessor to process the photo, and finally deletes the message from the queue. Ideally this class would act as a listener on SQS, but for simplicity we will just use a poll mechanism to pull the messages.

4. PhotoProcessor – Gets a photo message from SQS through PhotoProcessingManager and generates a thumbnail.

Before beginning, it would be great if you went through the video at the following link.

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/Welcome.html

Steps for getting started

1. Create an Amazon account. You will need a credit card for that.

2. Log on to your console at console.aws.amazon.com.

3. On the console dashboard, search for SQS and click on it. It will take you to your SQS home.

4. Create a new SQS queue and name it PhotoQueue. Leave the rest of the settings at their defaults. We can also create and delete an SQS queue dynamically, but in this tutorial I use a pre-created queue in my code.

5. Now that we have a queue, we will create a simple Java project in our favorite Java editor and see how we can leverage it.

6. Once you are done you need to download your security credentials. For this, go to “My Account” / “Security Credentials”. What we are after is the access credentials. You will see that there are three types of access credentials; one of them is “Access Keys”. We need these to access and work on the PhotoQueue we just created. Create a new set of access keys and store the access key and secret key in a safe location.

7. Now download the SDK for Java from http://aws.amazon.com/sdkforjava. From the lib folder of the SDK, copy aws-java-sdk-1.3.33.jar to your project classpath.

Maven users can add following dependency in their POM

<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-java-sdk</artifactId>
	<version>1.3.33</version>
</dependency>

Create a file called “AwsCredentials.properties” and store it in your project. This file contains the following properties:

accessKey =
secretKey =

The values of these properties are the keys you generated in step 6.

For photo processing I am using imgscalr. It is a lightweight and awesome image processing library for Java for doing simple tasks like resize, rotate, crop etc. You can download the jar from http://www.thebuzzmedia.com/software/imgscalr-java-image-scaling-library/#download. Maven users can add the following to their dependency list.

<dependency>
        <groupId>org.imgscalr</groupId>
        <artifactId>imgscalr-lib</artifactId>
        <version>4.2</version>
        <type>jar</type>
        <scope>compile</scope>
 </dependency>

Now we are ready to rock and roll and get our hands dirty with some code.

AWSSimpleQueueServiceUtil.java

package com.aranin.adconnect.util.aws;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.*;

import java.io.FileInputStream;
import java.util.List;
import java.util.Properties;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 3/19/13
 * Time: 10:44 AM
 * To change this template use File | Settings | File Templates.
 */
public class AWSSimpleQueueServiceUtil {
    private BasicAWSCredentials credentials;
    private AmazonSQS sqs;
    private String simpleQueue = "PhotoQueue";
    private static volatile  AWSSimpleQueueServiceUtil awssqsUtil = new AWSSimpleQueueServiceUtil();

    /**
     * instantiates a AmazonSQSClient http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sqs/AmazonSQSClient.html
     * Currently using  BasicAWSCredentials to pass on the credentials.
     * For SQS you need to set your regions endpoint for sqs.
     */
    private   AWSSimpleQueueServiceUtil(){
        try{
            Properties properties = new Properties();
            properties.load(new FileInputStream("D:/samayik/adkonnection/src/main/resources/AwsCredentials.properties"));
            this.credentials = new   BasicAWSCredentials(properties.getProperty("accessKey"),
                                                         properties.getProperty("secretKey"));
            this.simpleQueue = "PhotoQueue";

            this.sqs = new AmazonSQSClient(this.credentials);
            /**
             * My queue is in singapore region which has following endpoint for sqs
             * https://sqs.ap-southeast-1.amazonaws.com
             * you can find your endpoints here
             * http://docs.aws.amazon.com/general/latest/gr/rande.html
             *
             * Overrides the default endpoint for this client ("sqs.us-east-1.amazonaws.com")
             */
            this.sqs.setEndpoint("https://sqs.ap-southeast-1.amazonaws.com");
            /*
               You can use this in your web app where    AwsCredentials.properties is stored in web-inf/classes
             */
            //AmazonSQS sqs = new AmazonSQSClient(new ClasspathPropertiesFileCredentialsProvider());

        }catch(Exception e){
            System.out.println("exception while creating awss3client : " + e);
        }
    }

    public static AWSSimpleQueueServiceUtil getInstance(){
        return awssqsUtil;
    }

    public AmazonSQS getAWSSQSClient(){
         return awssqsUtil.sqs;
    }

    public String getQueueName(){
         return awssqsUtil.simpleQueue;
    }

    /**
     * Creates a queue in your region and returns the url of the queue
     * @param queueName
     * @return
     */
    public String createQueue(String queueName){
        CreateQueueRequest createQueueRequest = new CreateQueueRequest(queueName);
        String queueUrl = this.sqs.createQueue(createQueueRequest).getQueueUrl();
        return queueUrl;
    }

    /**
     * returns the queue url for an sqs queue if you pass in its name
     * @param queueName
     * @return
     */
    public String getQueueUrl(String queueName){
        GetQueueUrlRequest getQueueUrlRequest = new GetQueueUrlRequest(queueName);
        return this.sqs.getQueueUrl(getQueueUrlRequest).getQueueUrl();
    }

    /**
     * lists all your queues.
     * @return
     */
    public ListQueuesResult listQueues(){
       return this.sqs.listQueues();
    }

    /**
     * send a single message to your sqs queue
     * @param queueUrl
     * @param message
     */
    public void sendMessageToQueue(String queueUrl, String message){
        SendMessageResult messageResult =  this.sqs.sendMessage(new SendMessageRequest(queueUrl, message));
        System.out.println(messageResult.toString());
    }

    /**
     * gets messages from your queue
     * @param queueUrl
     * @return
     */
    public List<Message> getMessagesFromQueue(String queueUrl){
       ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(queueUrl);
       List<Message> messages = sqs.receiveMessage(receiveMessageRequest).getMessages();
       return messages;
    }

    /**
     * deletes a single message from your queue.
     * @param queueUrl
     * @param message
     */
    public void deleteMessageFromQueue(String queueUrl, Message message){
        String messageReceiptHandle = message.getReceiptHandle();
        System.out.println("message deleted : " + message.getBody() + "." + message.getReceiptHandle());
        sqs.deleteMessage(new DeleteMessageRequest(queueUrl, messageReceiptHandle));
    }

    public static void main(String[] args){

    }

}

PhotoProcessor.java

package com.aranin.adconnect.util.aws;

import org.imgscalr.Scalr;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

/**
 * Created by IntelliJ IDEA.
 * User: Niraj Singh
 * Date: 3/19/13
 * Time: 12:32 PM
 * To change this template use File | Settings | File Templates.
 */
public class PhotoProcessor {

    public static void generateImage(String imagePath, String origName, String targetName, int targetSize) {
        String origImage =   null;
        String targetImage = null;
        File origFile = null;
        BufferedImage buffImg = null;
        File targetFile = null;
        try{
            origImage =   imagePath + "/" + origName;
            targetImage = imagePath + "/" + targetName;
            origFile = new File(origImage);
            buffImg = ImageIO.read(origFile);
            buffImg = Scalr.resize(buffImg, Scalr.Method.SPEED, targetSize);
            targetFile = new File(targetImage);
            ImageIO.write(buffImg, "jpeg", targetFile);

        }catch (Exception e){
            System.out.println("Exception in processing image : " + e);
        }finally {
            buffImg = null;

        }
    }
}
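
Used on its own, a single call mirrors what the queue worker below does for every message, e.g. shrinking the Windows sample picture into a thumbnail whose longest side is 150px:

    PhotoProcessor.generateImage("C:/Users/Public/Pictures/Sample Pictures", "Tree.jpg", "Tree_thumb.jpg", 150);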

PhotoFile.java

package com.aranin.adconnect.util.aws;

/**
 * Simple value object describing one photo job: the source file name,
 * the thumbnail file name and the directory both live in.
 *
 * @author Niraj Singh
 */
public class PhotoFile {
    private String origName;
    private String targetName;
    private String imagePath;

    public String getOrigName() {
        return origName;
    }

    public void setOrigName(String origName) {
        this.origName = origName;
    }

    public String getTargetName() {
        return targetName;
    }

    public void setTargetName(String targetName) {
        this.targetName = targetName;
    }

    public String getImagePath() {
        return imagePath;
    }

    public void setImagePath(String imagePath) {
        this.imagePath = imagePath;
    }

    /**
     * Serialises the photo job as a comma-separated string.
     */
    @Override
    public String toString(){
        return origName + "," +  targetName + "," + imagePath;
    }
}
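
The comma-separated toString() output is the de facto wire format for the queue: the worker tokenizes the message body back into the same three fields, so the order must stay origName, targetName, imagePath, and none of the values may themselves contain a comma. A quick illustrative round trip:

    PhotoFile photo = new PhotoFile();
    photo.setOrigName("Tree.jpg");
    photo.setTargetName("Tree_thumb.jpg");
    photo.setImagePath("C:/Users/Public/Pictures/Sample Pictures");
    String body = photo.toString();   // "Tree.jpg,Tree_thumb.jpg,C:/Users/Public/Pictures/Sample Pictures"
    StringTokenizer tokens = new StringTokenizer(body, ",");
    String origName   = tokens.nextToken();   // tokens come back in the order they were written
    String targetName = tokens.nextToken();
    String imagePath  = tokens.nextToken();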

SQSPhotoManager.java

package com.aranin.adconnect.util.aws;

import com.amazonaws.services.sqs.model.Message;

import java.util.List;
import java.util.StringTokenizer;

/**
 * Drives the workflow: puts a photo job on the SQS queue, then starts a
 * worker thread that polls the queue, generates the thumbnail and deletes
 * the processed message.
 *
 * @author Niraj Singh
 */
public class SQSPhotoManager implements Runnable{
    private String queueUrl;
    public static void main(String[] args){
        AWSSimpleQueueServiceUtil awssqsUtil =   AWSSimpleQueueServiceUtil.getInstance();
        /**
         * 1. get the url for your photo queue
         */
        String queueUrl  = awssqsUtil.getQueueUrl(awssqsUtil.getQueueName());
        System.out.println("queueUrl : " + queueUrl);

        /**
         * 2. Add a photo to the queue to be processed
         */

        PhotoFile photo = new PhotoFile();
        photo.setImagePath("C:/Users/Public/Pictures/Sample Pictures");
        photo.setOrigName("Tree.jpg");
        photo.setTargetName("Tree_thumb.jpg");

        /**
         * 3. put the PhotoFile on the queue for processing
         */

         awssqsUtil.sendMessageToQueue(queueUrl, photo.toString());

        /**
         * 4. start a worker thread that polls the queue and processes the photo
         */

        Thread managerthread = new Thread(new SQSPhotoManager(queueUrl),"T2");
        managerthread.start();

    }

    public SQSPhotoManager(String queueUrl){
        this.queueUrl = queueUrl;
    }

    @Override
    public void run() {
        AWSSimpleQueueServiceUtil awssqsUtil =   AWSSimpleQueueServiceUtil.getInstance();
        boolean flag = true;
        // poll until the first non-empty batch arrives, process it, then stop
        while(flag){
            List<Message> messages =  awssqsUtil.getMessagesFromQueue(this.queueUrl);
            if(messages == null || messages.size() == 0){
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }else{
                flag = false;
                for (Message message : messages) {
                    String messagePhoto = message.getBody();
                    System.out.println("photo to be processed : " + messagePhoto);
                    StringTokenizer photoTokenizer = new StringTokenizer(messagePhoto,",");
                    String source = null;
                    String target = null;
                    String path = null;

                    source = photoTokenizer.nextToken();
                    target = photoTokenizer.nextToken();
                    path = photoTokenizer.nextToken();
                    System.out.println("source : " + source);
                    System.out.println("target : " + target);
                    System.out.println("path : " + path);
                    /**
                     * generate thumbnail within a 150x150 container
                     */
                    PhotoProcessor.generateImage(path, source, target, 150);
                }

                /**
                * finally delete the message
                */
                for (Message message : messages) {
                      awssqsUtil.deleteMessageFromQueue(this.queueUrl, message);
                }

            }
        }
    }
}

This forms the core of your PhotoProcessor application using SQS. The code has one glaring drawback: it busy-polls SQS from a thread, sleeping and asking again until a message shows up. It would be far better to create a listener that subscribes to your queue and takes action only when a new message arrives. That is the subject of my next post. Till then, feel free to bombard me with questions; together we can find answers to them.
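
In the meantime, the cost of that busy-wait can be cut down with SQS long polling: ReceiveMessageRequest accepts a wait time of up to 20 seconds, during which the server holds the call open until a message arrives. A minimal sketch, assuming your AWS SDK version supports long polling (it needs com.amazonaws.services.sqs.model.ReceiveMessageRequest imported):

    ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
            .withWaitTimeSeconds(20);   // server-side wait instead of a client-side Thread.sleep()
    List<Message> messages = awssqsUtil.getAWSSQSClient().receiveMessage(request).getMessages();
    // returns as soon as a message arrives, or an empty list after 20 seconds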

Regards

Niraj
