Tuesday, February 26, 2013

Understanding Java NIO : My Notes

Why this blog post?

Recently I was working with a team to improve the mediation performance of the Apache Synapse ESB. Java NIO was new to me when I took up the challenge, so I spent a few days learning I/O concepts and NIO in particular. Soon after that background reading, I was pulled out to optimize the Carbon kernel. Tough luck! Still, I learnt a few things along the way and thought of compiling a blog post based on that initial research.

Basic steps in a request/response system.

In a typical request/response system (e.g. a server), messages go through the following steps:

1. Input read/listening
2. Input decode
3. Business logic
4. Output encode
5. Output write/sending

The steps can overlap depending on the implementation details.
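Purely as an illustration (the interface and method names here are my own, not an existing API), the five stages could be pictured as separate concerns on a handler:

interface RequestHandler<I, O> {
    byte[] readInput();           // 1. input read/listening
    I decode(byte[] rawInput);    // 2. input decode
    O process(I request);         // 3. business logic
    byte[] encode(O response);    // 4. output encode
    void writeOutput(byte[] raw); // 5. output write/sending
}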

Blocking I/O

In a blocking I/O scenario, as the name implies, the processing thread stays blocked on a message until the output is written back to the wire. Typically, each client connection gets assigned to a separate thread. Since those threads spend their time waiting on I/O, the system throughput is bound by I/O behaviour no matter how efficiently you process the message. If the network latency is high, the system won't scale in the number of requests it can serve, even at a low concurrency level.



Sample Code for Blocking I/O



import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// CustomProtocol stands for the application's own message handling logic.
ServerSocket serverSocket = new ServerSocket(4444);   // listen on port 4444
Socket clientSocket = serverSocket.accept();          // blocks until a client connects
PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
BufferedReader in = new BufferedReader(
        new InputStreamReader(clientSocket.getInputStream()));
String inputLine, outputLine;
CustomProtocol customProtocol = new CustomProtocol();

// readLine() blocks this thread until the client sends a line (or disconnects).
while ((inputLine = in.readLine()) != null) {
    outputLine = customProtocol.processInput(inputLine);
    out.println(outputLine);
}

clientSocket.close();
serverSocket.close();
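The snippet above serves a single client. A minimal sketch of the usual thread-per-connection variant mentioned earlier (the class and method names are mine, and the echo-style business logic is just a stand-in) would look something like this:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnectionServer {

    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(4444)) {
            while (true) {
                Socket clientSocket = serverSocket.accept();    // blocks until a client connects
                new Thread(() -> handle(clientSocket)).start(); // one thread per connection
            }
        }
    }

    private static void handle(Socket clientSocket) {
        // Closing these streams also closes the underlying socket.
        try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(clientSocket.getInputStream()));
             PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true)) {
            String inputLine;
            while ((inputLine = in.readLine()) != null) { // blocks on I/O for this client only
                out.println(inputLine.toUpperCase());     // stand-in for the real business logic
            }
        } catch (IOException e) {
            // a real server would log the failure here
        }
    }
}

Even with one thread per client, every thread still spends most of its time blocked on readLine(), which is exactly the limitation the non-blocking model below addresses.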



Non Blocking I/O

In a non-blocking I/O scenario, partial reads of the input are possible, and an event is triggered when input becomes available. Because of this eventing model, the processing thread no longer has to block waiting on the input.

E.g. a typical client worker thread blocks on I/O like this:

while ((inputLine = in.readLine()) != null)

Since input/output readiness is signalled by events, a worker thread can go back to the worker thread pool and handle another request/task that is in a ready state. The reactor acts as the event dispatcher, and all the other components are handlers that register themselves with the reactor for the events they are interested in. For example, the acceptor registers itself with the reactor for the OP_ACCEPT event.

[Architecture diagram: a single-threaded reactor dispatching events to the acceptor and the other handlers]

Some core semantics


Selectors are responsible for querying available events and keeping track of the registered objects (listeners). For example, a non-blocking server socket channel registers itself with the selector for the OP_ACCEPT event.

If a connection arrives (that is, an accept becomes possible), the selector will pick it up and the appropriate listener will get called.
// Open a selector and a non-blocking server socket channel, then register
// interest in OP_ACCEPT and attach the acceptor that should handle it.
selector = Selector.open();
serverSocketChannel = ServerSocketChannel.open();
serverSocketChannel.socket().bind(new InetSocketAddress(port));
serverSocketChannel.configureBlocking(false);   // must be non-blocking before registering
SelectionKey selectionKey = serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);
selectionKey.attach(new Acceptor(serverSocketChannel, selector));



When events are triggered, the selector returns all the relevant selection keys; for each key we can retrieve the attached listener and run it.

 
// The reactor's dispatch loop: wait for events, then run the handler
// attached to each ready key.
while (!Thread.interrupted()) {
    selector.select();                              // blocks until at least one channel is ready
    Set<SelectionKey> selected = selector.selectedKeys();
    Iterator<SelectionKey> it = selected.iterator();
    while (it.hasNext()) {
        SelectionKey selectionKey = it.next();
        it.remove();                                // selected keys are not removed automatically
        // Our attached object (acceptor, handler, ...) implements Runnable
        Runnable runnable = (Runnable) selectionKey.attachment();
        runnable.run();
    }
}
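For completeness, here is one possible shape for the Acceptor attached earlier. This is a sketch following the reactor pattern from the slides referenced below, not code lifted from any particular framework: on OP_ACCEPT it accepts the connection, switches the new channel to non-blocking mode, and attaches a read handler that the dispatch loop will run when data arrives.

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

class Acceptor implements Runnable {

    private final ServerSocketChannel serverSocketChannel;
    private final Selector selector;

    Acceptor(ServerSocketChannel serverSocketChannel, Selector selector) {
        this.serverSocketChannel = serverSocketChannel;
        this.selector = selector;
    }

    public void run() {
        try {
            SocketChannel clientChannel = serverSocketChannel.accept();
            if (clientChannel != null) {
                clientChannel.configureBlocking(false);
                SelectionKey key = clientChannel.register(selector, SelectionKey.OP_READ);
                // The dispatch loop casts the attachment to Runnable and runs it.
                // A real read handler would read, decode and process here; see the
                // worker-pool sketch under "Scaling" below.
                key.attach((Runnable) () -> handleRead(clientChannel));
            }
        } catch (IOException e) {
            // log and close on failure
        }
    }

    private void handleRead(SocketChannel clientChannel) {
        // read from clientChannel into a ByteBuffer, decode and process
    }
}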

Scaling
The architecture diagram above only depicts the single-threaded version of the reactor pattern. It is possible to scale the system by means of multiple reactors, thread pools for worker threads, and so on; the right configuration is highly dependent on the target hardware platform.
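As a sketch of the worker-pool idea (the class name, pool size and buffer handling below are my assumptions, not a complete handler), a read handler can do the non-blocking read on the reactor thread and push the decode/process/encode work to an executor:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PooledReadHandler implements Runnable {

    private static final ExecutorService WORKERS =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    private final SocketChannel channel;
    private final ByteBuffer buffer = ByteBuffer.allocate(8 * 1024);

    PooledReadHandler(SocketChannel channel) {
        this.channel = channel;
    }

    // Called by the reactor's dispatch loop when the channel is readable.
    public void run() {
        try {
            int read = channel.read(buffer); // non-blocking; a real handler also handles -1 (end of stream)
            if (read > 0) {
                buffer.flip();
                byte[] chunk = new byte[buffer.remaining()];
                buffer.get(chunk);
                buffer.clear();
                WORKERS.execute(() -> process(chunk)); // heavy lifting leaves the reactor thread
            }
        } catch (IOException e) {
            // a real handler would cancel the key and close the channel here
        }
    }

    private void process(byte[] chunk) {
        // decode, run the business logic, encode, then switch interest to OP_WRITE to reply
    }
}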


Libraries support for NIO
Even though one can write a complete service platform using the basic NIO constructs found in Java, there are open-source libraries that abstract away some of the underlying complexity. For example, the Apache HttpCore project provides an abstraction layer for HTTP-based NIO usage, and Apache Synapse makes use of HttpCore for its NIO transport.

Main resource
Scalable IO in Java (presentation slides) - by Doug Lea, State University of New York at Oswego


Thursday, February 14, 2013

Fine Grained XACML Based Authorization With FuseSource ESB and WSO2 IS

Authorization - The bad way

In a typical organization that uses Service Oriented Architecture (fully or partially) to drive its IT requirements, chances are you will find tens or hundreds of services scattered across departments. If legacy services work fine for the organization, changing them is not a wise decision. However, one always has to consider the maintainability of the system in the long run. If the services are coherent, that is, if they do only what they are supposed to do (only the business logic), then chances are they may never need to be modified.
However, we are not living in an ideal world. Taking authorization as an example, I have seen people embedding authorization logic within their service implementations. This is a bad practice because:

1. If the authorization mechanism (e.g. verifying against a JDBC user/permission store) changes, you have to change the service implementation or introduce a hack to cover it up.
2. If the authorization policy (say, "from now on we allow clients from organization X with security level Y to access the service") changes, then you have to change the actual service.

In simple words, it is not future proof. If the organization wants to monitor authorization activities, you might end up modifying all your existing services.


What is XACML?

With XACML we can declaratively define our authorization policy in XML. Once you come up with the policy, the XACML engine evaluates each incoming request against the stored XACML policy and returns the final decision (permit/deny). Since XACML is an OASIS standard supported by many industry leaders, there is little chance of vendor lock-in in the long run.

Sample Scenario

   
1. The client sends a request to access the back-end service/resource. Here the request is directed to a Fuse ESB proxy service.
2. The proxy service intercepts the message (acting as the Policy Enforcement Point, PEP); an entitlement bean extracts the message properties and creates the XACML request, which is then forwarded to the XACML engine. (Here we are using the XACML engine provided by WSO2 Identity Server.)
3. The Identity Server (acting as the Policy Decision Point, PDP) evaluates the request against its stored XACML policy/policies and responds with the decision.
4. Based on the received decision, the proxy service either forwards the original client request to the actual back-end service or sends a not-authorized fault message back to the client.
5. In the former case of step 4, the client receives the response from the actual back-end.

Note: The actual implementation (code) setup and the execution will be explained in a later post.

Sunday, February 10, 2013

Patterns in OSGi programming : White Board Pattern

A common solution to a recurring problem can be called a pattern. Patterns are everywhere, and more often than not we adopt patterns in computer science from our day-to-day life experiences. The whiteboard pattern is one such common pattern: it allows you to decouple two parties (consumer and provider) from each other. I came across this pattern for the first time during my final year project, when we worked on a distributed tuple space implementation and end-user apps used the whiteboard pattern to publish/consume data strings to/from the tuple space.

The pattern is analogous to a notice board. Whenever someone wants to publish something to an audience, he/she sticks a note on the notice board, and any interested party can pick the note up. This keeps the provider and the consumer decoupled from each other.


Whiteboard in OSGi?
In the OSGi programming model we do have a notice-board-like mechanism: the OSGi service registry provided by the runtime acts as the whiteboard for publishers and consumers. In OSGi jargon the published items are called services, and we can publish and look up services that reside in the OSGi service registry, as in the sketch below.
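Here is a minimal sketch of publishing and looking up through the registry with the standard BundleContext API (GreeterService and GreeterImpl are made-up names used only for illustration):

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class WhiteboardBasics {

    interface GreeterService { void greet(String name); }

    static class GreeterImpl implements GreeterService {
        public void greet(String name) { System.out.println("Hello " + name); }
    }

    // Provider side: stick the "note" on the board.
    void publish(BundleContext context) {
        context.registerService(GreeterService.class, new GreeterImpl(), null);
    }

    // Consumer side: pick the note up later, with no compile-time link to the provider bundle.
    void consume(BundleContext context) {
        ServiceReference<GreeterService> ref = context.getServiceReference(GreeterService.class);
        if (ref != null) {
            GreeterService greeter = context.getService(ref);
            greeter.greet("OSGi");
            context.ungetService(ref);
        }
    }
}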





Why not the observer pattern to consume services?
One common way to consume OSGi services is to use the observer pattern: the consumer looks the service up in the OSGi service registry through the API and registers its own implementation against the exposed service.

Sample use-case
Think about a web app that runs inside an OSGi environment. One of the bundles publishes the standard http.service (the HTTP service is the link between the servlet transport and the OSGi environment), and the consuming bundles should hook their servlets up to the published http.service.

Here bundle foo has a servlet implementation that has to be registered under the context /foo, and likewise bundle bar wants to register a servlet under the context /bar. Http.Service allows you to register a servlet using an API method along the lines of,

registerServlet(String context, Servlet servlet); // simplified example method signature

Hooking into Http.Service using the observer pattern
When using the observer pattern, the foo and bar bundles become observers. They individually look up http.service in the OSGi service registry and register their respective servlets against their preferred contexts using the http.service API.

[Diagram: bundles foo and bar each looking up http.service and registering their servlets with it]

As depicted above, each bundle has to acquire http.service through a service lookup and register its servlets against it, for example as in the sketch below.
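A rough sketch of what that observer-style wiring could look like inside bundle foo's activator (FooServlet and the null init parameters are placeholders; a real activator would also have to handle http.service appearing and disappearing):

import javax.servlet.http.HttpServlet;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;

public class FooActivator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        ServiceReference<HttpService> ref = context.getServiceReference(HttpService.class);
        if (ref != null) {
            HttpService httpService = context.getService(ref);
            // Bundle foo couples itself directly to the HttpService API here.
            httpService.registerServlet("/foo", new FooServlet(), null, null);
        }
    }

    public void stop(BundleContext context) {
        // unregister the /foo alias and unget the service when the bundle stops
    }

    // Minimal stand-in for the bundle's real servlet implementation.
    static class FooServlet extends HttpServlet { }
}

Note how the bundle both depends on the HttpService interface and has to cope with its availability, which is exactly what the whiteboard variant below removes.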

The same using the whiteboard pattern

[Diagram: bundles publishing their servlets to the OSGi service registry, with a separate binder bundle hooking them up to http.service]

In this scenario, instead of the bundles looking up http.service, they themselves publish their servlet implementations to the OSGi service registry. Then who hooks the servlets up to http.service? There we have another bundle to take care of that: it looks up both the http.service implementation and the servlet implementations from the OSGi service registry and hooks them up together, roughly as in the sketch below.
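A sketch of that wiring (the "alias" property name and the class names are assumptions made for illustration; the binder is the only piece that knows about HttpService):

import java.util.Hashtable;
import javax.servlet.Servlet;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

public class ServletWhiteboard {

    // Consumer bundle (e.g. foo): publish the servlet plus the alias it wants,
    // and know nothing about HttpService.
    static void publishServlet(BundleContext context, Servlet servlet, String alias) {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("alias", alias); // e.g. "/foo"
        context.registerService(Servlet.class, servlet, props);
    }

    // Binder bundle: watch the registry and do the actual registerServlet() calls.
    static ServiceTracker<Servlet, Servlet> openBinder(BundleContext context, HttpService http) {
        ServiceTracker<Servlet, Servlet> tracker = new ServiceTracker<>(
                context, Servlet.class,
                new ServiceTrackerCustomizer<Servlet, Servlet>() {

                    public Servlet addingService(ServiceReference<Servlet> ref) {
                        Servlet servlet = context.getService(ref);
                        String alias = (String) ref.getProperty("alias");
                        try {
                            http.registerServlet(alias, servlet, null, null);
                        } catch (Exception e) {
                            // alias clash or servlet init failure: log and skip
                        }
                        return servlet;
                    }

                    public void modifiedService(ServiceReference<Servlet> ref, Servlet servlet) { }

                    public void removedService(ServiceReference<Servlet> ref, Servlet servlet) {
                        http.unregister((String) ref.getProperty("alias"));
                        context.ungetService(ref);
                    }
                });
        tracker.open();
        return tracker;
    }
}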

Benefits
  • The consumers no longer depend on the service provider's interface. Instead they just have to adhere to the Servlet API interface.
  • A change in the http.service API will affect only the binder bundle implementation; it will not affect the consumers.
  • The life cycle of a consumer bundle is not affected by http.service. That is, it can survive in an OSGi runtime where http.service is not available.

We use the above pattern during the servlet registration process in the WSO2 Carbon servers. Recently, while doing some code hacking, I saw the same pattern in the pax-logging code, where it is used to hook layouts, appenders and filters up to the logging engine.




Thursday, February 7, 2013

BoF Session on Carbon at WSO2Con 2013

WSO2 has organized a Birds of a Feather (BoF) session targeting the next major efforts on the WSO2 Carbon platform, where we can all discuss, brainstorm and decide on the future of the Carbon server framework. The session will be held at the London conference premises, and I am going to lead it.

Agenda can be found here

Sunday, February 3, 2013

Demystifying OSGi : Java Colombo Meetup

Java Colombo is the largest Java developer group in Sri Lanka. Recently, my colleague Sameera and I did an introductory session on OSGi during one of their meetups, and it was very well received. Below are the presentation slides we used during the meetup.

[Embedded presentation slides: Demystifying OSGi]

Me presenting at the meetup. :) Photo credit for the photo below goes to Isuru Perera.