Friday, 19 December 2014

CDI for Java SE already standardised with DeltaSpike

Introduction

One of the things scheduled for the upcoming CDI 2.0 release is the standardisation of its usage in a Java SE environment.

The Context part of CDI, with the different scopes like RequestScoped or ApplicationScoped, isn’t the most useful part in a Java SE environment.  But having a dependency injection and event mechanism, on the other hand, is very handy.

Weld and OpenWebBeans

But you don’t have to wait for a CDI 2.0 compatible implementation before you can use CDI in a Java SE environment.

Weld and OpenWebBeans are at the moment the two most important implementations, and both already offer the possibility to use CDI in a Java SE environment.

But both frameworks have different ways to start up the CDI environment, because in CDI 1.x this isn’t standardised yet.

DeltaSpike is a collection of CDI extensions, and one of the things it provides is a uniform way of starting CDI in a Java SE environment, whether you use OpenWebBeans (OWB) or Weld as your implementation.

DeltaSpike Container Control module

This module defines the uniform startup. There is one API module which contains the implementation-neutral classes (so not related to Weld or OWB), and there are two implementation modules, one for each CDI implementation.

Other things you need are:
- the DeltaSpike core API and implementation modules
- the OWB or Weld implementation with their transitive dependencies, if any.
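In Maven terms the container control dependencies look roughly like the fragment below. This is a sketch: the version number is an assumption based on the DeltaSpike releases current at the time of writing, so check for the latest one.

```xml
<!-- Implementation-neutral DeltaSpike container control API -->
<dependency>
    <groupId>org.apache.deltaspike.cdictrl</groupId>
    <artifactId>deltaspike-cdictrl-api</artifactId>
    <version>1.2.0</version>
</dependency>
<!-- Pick ONE implementation module, matching your CDI container -->
<dependency>
    <groupId>org.apache.deltaspike.cdictrl</groupId>
    <artifactId>deltaspike-cdictrl-weld</artifactId>
    <version>1.2.0</version>
</dependency>
<!-- ...or deltaspike-cdictrl-owb when you run on OpenWebBeans -->
```

The DeltaSpike core API and implementation modules mentioned above come on top of this, together with the Weld or OWB jars themselves.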

A sample Maven project file can be derived from one of the DeltaSpike examples, or you can use the one I have assembled; see further on.

When the Maven configuration is in place, you can, for example, start the CDI container from your main method as follows:

public static void main(String[] args) {
    CdiContainer cdiContainer = CdiContainerLoader.getCdiContainer();
    cdiContainer.boot();
    ContextControl contextControl = cdiContainer.getContextControl();
    contextControl.startContext(ApplicationScoped.class);
    // or start all contexts at once:
    //contextControl.startContexts();
}

Uber JAR

When you create a Java SE application, most of the time you will create an uber jar with a proper manifest file so that you can start your application easily (as an executable jar) with the command

java -jar myProgram.jar

This can be achieved with the Maven Shade plugin.  You can find various resources on the internet that explain how to integrate and configure it in your project.

Using this procedure to distribute your CDI-based application with DeltaSpike has a few pitfalls, but workarounds are available. They aren't related to DeltaSpike, OWB or Weld; they are a consequence of the deployment format.

The first issue you should be aware of is that some files can occur in multiple dependency jar files. Files like beans.xml and META-INF/services/javax.enterprise.inject.spi.Extension are present multiple times in the dependencies of your Maven project.

If you don’t specify a certain configuration of the Shade plugin, these files will overwrite each other and your program will not function.

You should use:
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
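For context, this is roughly where that transformer line goes in the Shade plugin configuration (a sketch; the plugin version shown is an assumption, use whatever is current):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- Concatenates META-INF/services files from all
                         dependencies instead of letting them overwrite
                         each other in the uber jar -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
```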

Another issue I found is that the asm transitive dependency used by OWB isn't properly packed into the uber jar file.
So you need to add the asm:asm:3.3.1 dependency to your own pom, otherwise the application won’t start due to missing classes.

The last pitfall is that a lot of frameworks aren’t CDI compatible.  In a Java EE application this isn’t a problem, since there is no beans.xml file in the jar files of those frameworks.  The classes in these jar files therefore aren’t considered CDI beans, and no problem occurs during the startup of the application.
In an uber jar, however, all classes end up in the same jar file, which has a beans.xml file.  Those classes, or rather some packages, are excluded most easily when you use Weld, as it supports custom configuration in the beans.xml file that allows you to exclude packages.

<weld:scan>
    <weld:exclude name="org.jboss.weld.**" />
</weld:scan>

Starter project

To get you started easily with a Java SE application that uses CDI 1.x, I created a basic Maven application which has everything configured correctly.
You can download it here.

It has two profiles, one for OWB and one for Weld.  There is also a third profile, called shade, which is needed when you use the Shade plugin on a project that uses OWB.  It makes sure that the asm transitive dependency is included in your final jar file.

Conclusion

So you don’t have to wait for CDI 2.0 to use CDI in Java SE; you can use it already today. And with the DeltaSpike Container Control module, you can even hide the details of starting the OWB or Weld container, which makes it easier still.


Have fun with it.

Wednesday, 26 November 2014

Application module configuration with CDI 1.1

Introduction

This text explains how you can use CDI itself to configure a portable CDI extension you write yourself.  First a little introduction about what a portable extension is and why you would use one.

When using CDI in your application, almost all the time you can just use what is available, like @Inject, @Named, the scopes and events to name the most important ones.

But sometimes you must be able to define new CDI artefacts like scopes or beans programmatically.  This can be done with a portable extension. The required steps are:
- Create a class which implements the marker interface javax.enterprise.inject.spi.Extension.
- Put the fully qualified name of that class in a file named javax.enterprise.inject.spi.Extension in the META-INF/services directory of your application (or jar file).
- Define a method which has a CDI lifecycle event parameter annotated with @Observes.
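The services file from the second step contains nothing but the fully qualified class name of the extension. For the ModuleConfigExtension used below (the package name is an assumption for illustration), META-INF/services/javax.enterprise.inject.spi.Extension would read:

```
com.example.config.ModuleConfigExtension
```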

An example of this can be
public class ModuleConfigExtension implements Extension {

    void configModule(@Observes AfterBeanDiscovery afterBeanDiscovery, BeanManager beanManager) {
        System.out.println("Startup extension");
    }
}

This method is then called when the CDI container is initialising.

The following image shows the different steps in the container initialisation.



Configuration

So, now that we know how to extend the CDI system, you can start creating additional beans and scopes, for instance, or define interceptors that are applied to every bean (such as logging) without the need to configure them in XML.

So far the introduction; now we can move on to the configuration of your extension.
You have created the extension and put it into a jar which you can reuse in all the projects you develop.
In most situations the defaults you have defined are good enough, except for some applications that need slightly tweaked values.

The first question you can ask yourself is why you should use CDI for the configuration here. Because it is very convenient to override the defaults you specified yourself, and most of all, it is done in a type-safe way: you define the values in code and not as strings in XML, plain text or JSON.
Of course, if you want to be able to specify the values outside of your code, you have no alternative other than using some file with your values and reading it in.

Let’s have a CDI bean which defines the defaults:

@ApplicationScoped
public class BasicModuleConfig {

    @PostConstruct
    public void init() {
        System.out.println("PostConstruct of BasicModuleConfig");
    }

    public void doConfig() {
        System.out.println("Basic config for module");
    }
}

(PS: the idea is of course to have methods which return the default values. The above code is just to make clear what is happening.)

And when you need values other than the default ones, just create a subclass of this bean and annotate it with @Specializes. That version will now be used.

@Specializes
public class CustomModuleConfig extends BasicModuleConfig {

    @Override
    public void doConfig() {
        System.out.println("Custom specialized config");
    }
}
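Stripped of the CDI annotations so it runs standalone (the real beans carry @ApplicationScoped and @Specializes as shown above), the configuration pattern is plain, compiler-checked method overriding. The config method name and values here are made up for illustration:

```java
public class SpecializationSketch {

    static class BasicModuleConfig {
        // default value the module ships with
        public int cacheSize() {
            return 100;
        }
    }

    static class CustomModuleConfig extends BasicModuleConfig {
        // application-specific override, type-checked by the compiler
        @Override
        public int cacheSize() {
            return 500;
        }
    }

    public static void main(String[] args) {
        // When the specialised bean is present, its values win.
        BasicModuleConfig config = new CustomModuleConfig();
        System.out.println(config.cacheSize()); // prints 500
    }
}
```

This is exactly the type safety argued for earlier: a typo in the overriding method name fails at compile time instead of silently falling back to the default.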

CDI 1.0

Well, there is a catch. With CDI 1.0 (Java EE 6) you were allowed to use the supplied BeanManager in the after bean discovery phase to retrieve the CDI bean.

But this is a bit awkward as you are using CDI features before the CDI container is completely initialised.

Therefore some additional clarifications were made for the CDI 1.1 spec, and it is now no longer valid (you receive an exception) to do this in a Java EE 7 container.

CDI 1.1 solution

But they came up with another solution, although a partial one: you need to do some additional work manually in your code.

They introduced the Unmanaged class, with which you can obtain an instance of a CDI bean that is not maintained by the container.

The following code gives you an example:
void configModule(@Observes AfterBeanDiscovery afterBeanDiscovery, BeanManager beanManager) {
    Unmanaged<BasicModuleConfig> unmanagedConfig = new Unmanaged<>(beanManager, BasicModuleConfig.class);
    Unmanaged.UnmanagedInstance<BasicModuleConfig> configInstance = unmanagedConfig.newInstance();
    BasicModuleConfig config = configInstance.produce().inject().postConstruct().get();
    config.doConfig();
    configInstance.preDestroy().dispose();
}

So you can use an initialisation method annotated with @PostConstruct, but you can’t inject anything into this bean. If you try, you will receive an exception that a resolved bean resulted in null (since the container is not ready, the system doesn’t find the bean it needs to inject).

And there is another issue: the @Specializes version isn’t picked up.
But that you can fix in your code.  There is another event that is useful in this case.  When the CDI container scans the class path, every bean which is eligible for CDI management is handed to any method observing the ProcessAnnotatedType event (the process bean phase in the above image).

Here we can record the class which holds our specialised configuration, and use that one in our configuration.

The code of the extension could look like this then:

public class ModuleConfigExtension implements Extension {

    private Class<? extends BasicModuleConfig> configClass = BasicModuleConfig.class;

    <T> void collectImplementations(@Observes ProcessAnnotatedType<T> pat, BeanManager beanManager) {
        AnnotatedType<T> annotatedType = pat.getAnnotatedType();
        if (BasicModuleConfig.class.equals(annotatedType.getJavaClass().getSuperclass())) {
            configClass = (Class<? extends BasicModuleConfig>) annotatedType.getJavaClass();
        }
    }

    void configModule(@Observes AfterBeanDiscovery afterBeanDiscovery, BeanManager beanManager) {
        Unmanaged<? extends BasicModuleConfig> unmanagedConfig = new Unmanaged<>(beanManager, configClass);
        Unmanaged.UnmanagedInstance<? extends BasicModuleConfig> configInstance = unmanagedConfig.newInstance();
        BasicModuleConfig config = configInstance.produce().inject().postConstruct().get();
        config.doConfig();
        configInstance.preDestroy().dispose();
    }
}

Conclusion


There is a very convenient way of using CDI itself for the configuration of your portable CDI extension, because all your values are type-safe. When the default configuration values are defined in a CDI bean, you can easily specify custom values in a specialising bean when needed.

Monday, 29 September 2014

Java EE / GlassFish future. (JavaOne 2014)

Introduction

At JavaOne there was a session about the roadmap for GlassFish, which also discussed the main topics for Java EE 8 now that it has been approved by the JCP EC.

In this blog text, I will summarise the main points of this session.

You can find the session info here GlassFish Roadmap and Executive Panel [UGF9120]

Java EE 8

There are various things on the roadmap for Java EE 8, and some of them can be grouped together into themes.

One of them is around JSON communication and contains

  • The support for the HTTP 2.0 protocol as described in Servlet 4.0.  This is the successor of HTTP 1.1, first proposed by Google as SPDY and now standardised as HTTP 2.0.
  • More JSON support, as it is replacing XML in enterprise cases nowadays. There will be a JSON Binding specification which defines how we can map Java models to JSON structures.  Today we can already do this with, for example, the Jackson and Google Gson frameworks.
  • Security considerations for the JSON communication like how can the OAuth 2.0 be integrated.
  • And since JAX-RS is the base for all communications, improvements can be expected in that area too.

Another theme is security. There will be many places where security will be handled

  • The JSON communication with OAuth 2.0 as described earlier.
  • New security interceptors for CDI beans like we have already the @Transactional one.
  • Easier definition of resources for User management, like an annotation for defining the LDAP source, and handling things like password aliasing, role mapping and authentication.
There will also be other improvements or new features coming up.  Some of the most likely candidates are

  • Server-sent events, as a lightweight alternative to the WebSocket protocol.
  • The action based MVC 
  • Splitting up CDI so that it can also be used in Java SE
  • Cloud and multi tenancy support.
  • ...

GlassFish

And what role will GlassFish play in all of this?  Each JSR needs a Reference Implementation (RI), so Oracle will continue to develop GlassFish as the RI for Java EE.  There will be new releases, but without commercial support, as they announced at the end of last year.

And they promise that the quality, stability and security of the product will remain as important as the implementation of the features. They see it as a tool for customers to test out the new features of any Java EE version, but you should choose WebLogic for your production environments.

Why is WebLogic always so far behind?

This was a question from one of the attendees of the session, because for Java EE 7 some of the features became available only recently, a year and a half after the spec went final.

This long delay had to do with other priorities within Oracle, and the delay should be smaller for future versions.
In the past, the delay had to do with the internal differences between the code bases of GlassFish and WebLogic.  Nowadays they are more aligned, but they will never share the same code base.  So the plan is a delay of about six months for WebLogic, to make sure that the Java EE features work well in their enterprise-level server.

So let’s hope that will already be the case for Java EE 8.

Time frame

They expect Java EE 8 to be final in September 2017.  This is a very long period, but luckily we don’t have to wait that long.  By January 2016 there should be a proposed final version of the spec, so not long after that we should see a GlassFish 5.0 version which we can use to test out Java EE 8.

Sunday, 3 August 2014

Concurrency aspects of a Singleton EJB bean

Introduction

In the previous blog text I demonstrated some aspects of a Stateless EJB bean. It is guaranteed that each client receives its own copy of the bean and that there is no concurrent access.  But it is possible, and here there were differences between the application servers, that the same bean instance is reused in different calls.

This text handles the Singleton EJB bean. At first glance you just have to replace the @Stateless annotation with @Singleton, but there are a few consequences regarding concurrent access to the methods.

With the same set of examples, I’ll explain them to you.

Scope of Singleton

Now it is obvious how long the bean is kept alive: Singleton means that only one instance is created and reused for all calls.

So if you change the annotation on the SomeEJB class to @Singleton, you are ready to run the code for scenario 1.

@Singleton 
public class SomeEJB { 

The results are now the same for GlassFish and WildFly, and indicate that we received the same instance each time.

click 1 -> value = 1
click 2 -> value = 2
click 3 -> value = 3

For scenario2, we get the following results

click 1 -> value = 1 - 2 - 3 - 4 - 5
click 2 -> value = 6 - 7 - 8 - 9 - 10
click 3 -> value = 11 - 12 - 13 - 14 - 15

So it is very clear that, as the name indicates, we have one instance of the bean, in all cases.

Concurrency aspects

Since we now have only one instance, what happens when we access the singleton bean concurrently?  What is the default behaviour when multiple clients access the bean instance?

We can investigate this with the code we wrote for scenario 3; look at the previous blog text for the actual code.

click -> value :   1 - 2 - 3 - 4 - 5 
elapsed time : 10018

As it is a singleton instance, each call increments the counter value.  But the interesting figure here is the time it took to complete the test: just over 10 seconds, which is the 5 calls multiplied by the 2-second delay we put into the method call.
And if you investigate the code execution in more depth, you will find that the second invocation only starts when the first one is finished.
The first invocation returns immediately due to the @Asynchronous annotation and the Future return value. The for loop in the scenario3() method continues and starts the second invocation of the Singleton bean method.  But this one doesn’t start, and thus control is not given back to the scenario3 code, until the first invocation is finished.

There is obviously no parallel code execution anymore, as we saw in the examples of the Stateless EJB bean.
It looks like there is an invisible synchronised keyword placed on the method.

Concurrency protection

This default behaviour is specified in the EJB specification to protect the values kept in the singleton bean.  Since there is only one instance available, multiple clients access this same instance and could thus potentially read partially updated values.

The easiest solution was to guarantee that only one client at a time can execute a method on the Singleton bean, and that all other invocations must wait until the current one is finished.

In our example we proved it for the same method invocation, but in fact, any method call to the same Singleton bean is placed on hold.

Scaling Singleton beans

This default behaviour is easy for the developer and guarantees correct concurrent access to the values kept in the Singleton bean, but it is obviously not a good situation for the scaling properties of our application when a Singleton bean is accessed by many clients concurrently.

For that scenario there are additional annotations so that we can handle the concurrency aspects ourselves. Of course, this gives the programmer a greater responsibility to do the correct thing.

The @ConcurrencyManagement annotation can be placed on the class, and with it we can indicate that the bean itself is responsible for correct multithreaded access handling.

@Singleton 
@ConcurrencyManagement(ConcurrencyManagementType.BEAN) 
public class SomeEJB { 
 
    private long counter = 0; 
 
    @Asynchronous 
    public Future<Long> asyncIncAndGet() { 
           ...
    } 
 
} 

The other annotation we can use is @Lock (in case we use container-managed concurrency management).  There exist two types of locks: a READ lock and a WRITE lock.

You can obtain a READ lock, and thus execute the method, when no other thread holds a WRITE lock at that time.  If a WRITE lock is taken, your invocation has to wait until the lock is released.

To obtain a WRITE lock, no other threads should hold a READ or WRITE lock on that Singleton bean.

So if you change the code of the SomeEJB class to include the @ConcurrencyManagement annotation, you get the following results when you run scenario 3 again.

click -> value : 4 - 4 - 4 - 4 - 4 
elapsed time : 2002

The elapsed time now indicates that we again have parallel execution of the method, but the values we receive for the counter are not correct.
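With bean-managed concurrency, protecting the counter becomes the bean's own job. A minimal pure-JDK sketch (outside any container, class name made up for illustration) of fixing this lost-update problem with java.util.concurrent.atomic.AtomicLong:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

public class AtomicCounterDemo {

    // With ConcurrencyManagementType.BEAN the plain 'long counter' is no
    // longer protected by the container; AtomicLong makes each increment
    // an atomic read-modify-write, so no updates are lost.
    private final AtomicLong counter = new AtomicLong(0);

    public long incAndGet() {
        return counter.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicCounterDemo bean = new AtomicCounterDemo();
        int threads = 5;
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                bean.incAndGet();
                done.countDown();
            }).start();
        }
        done.await();
        // All five concurrent increments are preserved, unlike the
        // '4 - 4 - 4 - 4 - 4' result shown above.
        System.out.println(bean.counter.get()); // prints 5
    }
}
```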

Optimal Singleton pattern

So how should we use the @Lock annotation properly?

You should place @ConcurrencyManagement(ConcurrencyManagementType.CONTAINER) and @Lock(LockType.READ) on the class definition.
This guarantees that all methods of the Singleton bean can be accessed in parallel.

And the methods that change the values of instance variables kept in the Singleton bean should be annotated with
@Lock(LockType.WRITE)

Then we know that no other thread will be reading the variables, and we can safely change the values without the risk that wrong values are read.
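The container-managed @Lock semantics correspond closely to a readers-writer lock. The sketch below is an analogy in plain JDK code (not what the container literally injects): many callers may hold the read lock at once, while a writer requires exclusive access.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Plain-Java analogue of @Lock(LockType.READ) / @Lock(LockType.WRITE)
// on a container-managed Singleton bean.
public class LockSemanticsDemo {

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private long counter = 0;

    // Equivalent of a @Lock(LockType.READ) method: runs in parallel with
    // other readers, blocked only while a writer holds the lock.
    public long getCounter() {
        lock.readLock().lock();
        try {
            return counter;
        } finally {
            lock.readLock().unlock();
        }
    }

    // Equivalent of a @Lock(LockType.WRITE) method: waits until no reader
    // or writer holds the lock, then has exclusive access.
    public void increment() {
        lock.writeLock().lock();
        try {
            counter++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        LockSemanticsDemo demo = new LockSemanticsDemo();
        demo.increment();
        System.out.println(demo.getCounter()); // prints 1
    }
}
```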

Conclusion

You can just annotate a POJO with @Singleton and there will be only one instance of the bean for your whole application.  By default, however, the server guarantees that no two threads access a method of the bean at the same time.
This is very safe but also very slow, as performance can degrade because clients have to wait to access the bean.
With the @ConcurrencyManagement and @Lock annotations we can create the optimal situation: concurrent access when we only read values, while making sure that no other thread is reading or writing when we make changes.

Wednesday, 16 July 2014

Some facts about Stateless EJB beans

Introduction

With Java EE you can annotate simple Java POJO classes with the @Stateless and @Singleton markers, and they become full-fledged EJB beans with middleware services like transactions and security.

But in contrast to the CDI bean scopes like @RequestScoped or @ApplicationScoped, it is not immediately clear how long the EJB bean lives.

Of course, the term Singleton indicates that just one bean is created, but how is concurrency handled then?

This is the first part of a two part series about EJB beans.

Warning: in this text I use a Stateless EJB bean to store some state, here a counter, which is not good practice, as you will also see throughout the text.  It is only used to better indicate and explain what is going on.

Code

The code I use throughout this text is quite simple.  I keep a counter instance variable and have a method that increments this counter and returns the current value.

@Stateless 
public class SomeEJB { 
 
    private long counter = 0; 
 
    public long incrementAndGetCounter() { 
        counter++; 
        return counter; 
    } 
 ...
}

For the thread-safety aspects, I use an asynchronous EJB method call in which I put a wait of 2 seconds. Due to the Future return type of the method, the caller gets control back before the method has finished.

    @Asynchronous 
    public Future<Long> asyncIncAndGet() { 
        counter++; 
        try { 
            Thread.sleep(2000); 
        } catch (InterruptedException e) { 
            e.printStackTrace(); 
        } 
 
        return new AsyncResult<Long>(counter); 
    } 

This code will be used in three EJB beans (Singleton beans will be explained in a next text).  For this text we use the @Stateless annotation, as shown in the code example above.

The ‘client’ of the EJB beans is a CDI bean with scope @Model.  This means that for each request from the browser we get a new instance, and thus a newly injected EJB bean.
In the browser we see the output of our test, and how long it took, through a JSF page.

@Model 
public class DemoBean { 
    @EJB 
    private SomeEJB someEJB; 
 
    private String data; 
    private long elapsedTime; 
 
    public void testEJB() { 
        long start = System.currentTimeMillis(); 
        scenario1(); // or 2 or 3
        elapsedTime = System.currentTimeMillis() - start; 
    } 
 ...
}

Servers

I use GlassFish 4 and WildFly 8 to verify the results of the tests.  And as you will see, both servers have a different approach, but both are compliant with the specification.

Scope of @Stateless

Let’s start by calling the scenario 1 method on the GlassFish server.  If we click the button on the JSF page, we get the following result (timing is omitted here, as it is only relevant for the multithreaded scenarios).

    public void scenario1(){ 
        data = String.valueOf(someEJB.incrementAndGetCounter()); 
    } 
 

click 1 -> value = 1
click 2 -> value = 2
click 3 -> value = 3

Each click increases the value of the counter, and since the value keeps incrementing, it is clear that we get the same EJB injected into the CDI bean each time we click the button.

If we run scenario 2 on GlassFish, we get results that follow the same pattern.

    public void scenario2() { 
        data = ""; 
        for (int i = 0; i < 5; i++) { 
            data += " - " + String.valueOf(someEJB.incrementAndGetCounter()); 
        } 
    } 
 
click 1 -> value = 1 - 2 - 3 - 4 - 5
click 2 -> value = 6 - 7 - 8 - 9 - 10
click 3 -> value = 11 - 12 - 13 - 14 - 15

So is @Stateless behaving as a @Singleton? Let’s investigate further.

If we run the same scenarios on the WildFly server, we get different results, and some of them may surprise you.

click 1 -> value = 1
click 2 -> value = 1
click 3 -> value = 1

Now it is clear that we receive a new instance of the EJB bean each time, and thus the value we get back is 1 every time.

Is this correct? Yes.  As the name says, it is a stateless session bean, and thus we shouldn’t make any assumptions about which instance of the bean we receive on the following invocation.

These are the results if we run scenario 2 on WildFly:

click 1 -> value = 1 - 1 - 1 - 1 - 1
click 2 -> value = 1 - 1 - 1 - 1 - 1
click 3 -> value = 1 - 1 - 1 - 1 - 1

And this is a surprise, no? We execute the method 5 times on the same injected EJB bean, so you could think it is 5 times the same instance.

But in fact we get a proxy injected into the CDI bean, not the actual EJB bean (check the getClass().getName() outcome).  This is because additional ‘interceptors’ may be executed, for instance to handle the transactional aspects of database access.

So WildFly decided that the proxy is allowed to access another EJB bean instance each time you perform a method call.

And after clicking many times on the JSF button, I even have the impression that you receive a new instance each time and that there is no reuse (or the pool of EJB beans must be very, very large).

Exact lifetime of EJB beans on GlassFish

What happens if we concurrently access a stateless EJB bean on GlassFish? This is scenario 3. Let me first explain a bit what it does.

    public void scenario3() { 
        data = ""; 
        List<Future<Long>> results = new ArrayList<>(); 
 
        for (int i = 0; i < 5; i++) { 
            results.add(someEJB.asyncIncAndGet()); 
        } 
        for (Future<Long> result : results) { 
            try { 
                data = data + " - " + String.valueOf(result.get(3, TimeUnit.SECONDS)); 
            } catch (Exception e) { 
                e.printStackTrace(); 
            } 
        } 
 
    } 

The EJB method is asynchronous and immediately returns an object from which we can retrieve the result of the method execution in the near future. So our client is able to call this asynchronous method and, immediately afterwards, call that same method again (5 times in total in our example).

After we have called it enough times, we wait for the results with result.get().

And what do you think the result will be: the value 5, five times in a row? Not exactly; again a surprise, isn’t it?

click -> value :   1 - 1 - 1 - 1 - 1 
elapsed time : 2008

The second call to the asynchronous method takes place while the first execution is still ‘active’.  Again, the injected proxy decides to call another EJB instance.
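The fire-then-collect pattern of scenario3 can be reproduced in plain Java with an ExecutorService, which makes the timing behaviour easy to see: five tasks that each sleep 200 ms complete in roughly 200 ms total when they run on five threads, mirroring the parallel EJB invocations on GlassFish. (A sketch with made-up values; the container's pool plays the role of the executor here.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FireThenCollect {

    // Submit all calls first, then block for the results -- the same
    // pattern scenario3 uses against the @Asynchronous EJB method.
    public static List<Long> runAll() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(5);
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            final long value = i + 1;
            futures.add(pool.submit(() -> {
                Thread.sleep(200); // stands in for the 2-second delay
                return value;
            }));
        }
        List<Long> results = new ArrayList<>();
        for (Future<Long> future : futures) {
            results.add(future.get(3, TimeUnit.SECONDS)); // only now do we block
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        System.out.println(runAll());
        // With 5 threads the elapsed time is roughly one task duration,
        // not five times that -- the parallel behaviour of the Stateless bean.
        System.out.println("elapsed: " + (System.currentTimeMillis() - start) + " ms");
    }
}
```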

Use of instance variables

So how should we use Stateless session EJB beans?  The specification states that only one client at a time has access to an EJB bean instance. If another client needs some functionality of the bean, it gets access to another instance.
But once the action is performed, the bean can be returned to the pool and used by another client (or the same client the next time).

So it is safe to use instance variables to transfer information between private method calls.  But we shouldn't store information in instance variables that we expect to linger around for a next call to the EJB bean.

Conclusion

When we inject an EJB bean, we get a proxy to this bean.  This proxy will select a free instance of our EJB bean from the pool.  Of course, when there is no bean available, it will create a new instance.

In GlassFish, the EJB bean returns to the pool when it is no longer accessed by the client and can be used during some next call by the same or any other client.

In WildFly, the EJB bean is always destroyed and thus never reused.

It is safe to store information in instance variables for the duration of one call, but it is wrong to use them to keep information between different calls.

Next time we discuss the @Singleton EJB bean.

Monday, 12 May 2014

Chat Application in Java EE 7

Introduction

One of the new technologies introduced in Java EE 7 is the WebSocket protocol.  With this protocol it becomes easy to push any message from the server to the connected clients.  You no longer need the request-response model of HTTP but can send messages directly to the other peer.

The typical application for this protocol is a chat application: any message sent from a client to the server is pushed to all clients connected at that moment.

This text explains how you can create such a chat application with Java EE 7 in very few lines of code.

WebSocket

First a bit of background about the protocol.  It uses the Upgrade facility of the HTTP protocol. This option was already foreseen in 1999, when the HTTP 1.1 spec was finalised.  Of course, at that time no one was thinking about something like the WebSocket protocol, but the spec has a part about upgrading the HTTP connection.

When the client sends an upgrade request to the server, by specifying the correct header values, it can ask for a switch to a certain protocol. Below is an example of a request for the upgrade to the WebSocket protocol.

GET /bejug HTTP/1.1
Upgrade: websocket
Connection: Upgrade
If the server accepts the request, it sends a confirmation to the client, and from then on the communication between client and server is no longer performed over the HTTP protocol but over the new one.

In the case of the WebSocket protocol, it is bi-directional, full-duplex communication over a single TCP connection.

Bi-directional means that both the client and the server (although you don't use that terminology within WebSocket) can take the initiative to send information to the other party. And by default, no response is expected in return.  This is in contrast to the HTTP protocol, which is based on the request-response model where the client sends a request to the server and expects an answer to its question. With the WebSocket protocol it is just a push of data to the other side.

Full duplex means that there can be simultaneous communication between the two sides without interference.  The client (in HTTP terminology from before the upgrade) can send information at the same time the server decides to send some bytes over the wire, and both sets of data arrive at the other side without any problem.

Server side

So let us create the server side of our chat application.  The amount of code we have to write is minimal.  We can mark any POJO class as a WebSocket endpoint with the annotation @ServerEndpoint; there is no need to implement an interface or extend a parent class.

In this class we can mark a method taking a String and a javax.websocket.Session as parameters as the method which gets executed when the endpoint receives a message.

Below is all the code which is required for the server side of the chat application.

import java.io.IOException;

import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/bejug")
public class ChatServer {

    @OnMessage
    public void onMessage(String message, Session session) {
        // Broadcast the incoming message to every open session of this endpoint.
        for (Session s : session.getOpenSessions()) {
            try {
                s.getBasicRemote().sendText(message);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

The method just redistributes the incoming message to all connected clients.  Those sessions can be retrieved from the Session parameter with getOpenSessions().
With the basic remote (getBasicRemote()) used here, the message is sent synchronously.  There also exists an asynchronous variant, getAsyncRemote(), which is not needed here since we are just sending small messages back and forth.

Client side

The typical client side of the chat application is a browser, but it can also be any other client which supports the WebSocket protocol.  Internet Explorer supports it since version 10; Firefox and Chrome have been supporting it for quite some time.  Around 70% of the people surfing today do this with a browser which supports the WebSocket protocol (see http://caniuse.com/websockets).

So let us create a very simple and minimalistic web page which uses plain JavaScript as our client.

A WebSocket connection can be created with the new WebSocket(url) statement.  We can specify callback functions which are executed when the connection is established (the onopen callback) and when the client receives a message (the onmessage callback).

The onopen callback is used in our example to send a message to all other clients that a new person has joined the group.  And the onmessage callback displays the new message on the screen.

The send() function is used to send some text to the server and thus to all connected clients.  The complete code of the chat client is shown below.

<head lang="en"> 
    <meta charset="UTF-8"> 
    <title>BEJUG chat</title> 
    <script type="text/javascript"> 
        var connection; 
        function startChat() { 
            if ("WebSocket" in window) { 
                connection = new WebSocket("ws://localhost:8080/chat/bejug"); 
                connection.onopen = function () { 
                    connection.send(document.getElementById("name").value + " has joined"); 
 
                }; 
                connection.onmessage = function (evt) { 
                    document.getElementById("msgs").innerHTML += evt.data + '<br/>' 
                }; 
 
            } 
            else { 
                // The browser doesn't support WebSocket 
                alert("WebSocket NOT supported by your Browser!"); 
            } 
        } 
        function sendMessage() { 
            connection.send(document.getElementById('name').value + ' says : ' + document.getElementById('msg').value); 
        } 
    </script> 
</head> 
<body> 
 
    <h1>BeJUG chat</h1> 
    Messages 
    <div id="msgs"></div> 
    <hr/> 
    <input type="text" id="name"/> 
    <button type="button" onclick="startChat()">Join</button> 
    <br/> 
    <input type="text" id="msg"/> 
    <button type="button" onclick="sendMessage()">Send</button> 
</body> 

Conclusion

With the addition of the WebSocket protocol, there are now 3 ways of interacting with a Java EE 7 server.
Servlets have been in use since the first days of web development with Java, but they are still very important as they are the base of many technologies which use the front controller pattern, like JavaServer Faces.
The second option, JAX-RS or the REST style way of working with JSON, is basically also servlet based, so you could classify it in the previous category.  But because of its importance for the communication with smartphones, it deserves a category of its own.
And the latest addition is the push type of communication with the WebSocket protocol, for when you need independent communication between peers.

Tuesday, 4 March 2014

Octopus framework

What is it?

With the Octopus framework, you have a permission based security framework for Java EE which is highly customisable, CDI integrated and focussed on type-safety. You can protect URLs, JSF components, CDI methods and EJB invocations with one and the same permission check code.
Since security is one of those cross cutting concerns, we don’t want to code it next to the statements performing the business requirements. So you will be able to use annotations to protect methods and custom tags to protect JSF components.

It is licensed under the Apache License V2 and is sponsored by C4J.

Permission based

The Java EE ecosystem has its own security mechanism, but it is role based. And this way of working has some major disadvantages.
To better illustrate this, let’s examine an example.

Suppose we have an HR application where we can manage the employees of the company. And we have 4 roles, linked to 4 types of people: the employee himself, his manager, the HR assistant and the big boss. Each of them can see different types of information and perform different actions.  Take the salary, for instance: as an employee you can see your own salary but not the one of your colleagues. Your manager can see the salary of all the people who are working in his department, but not the salary of an employee in another department. The HR assistant can see them all, just as the big boss.  But updating the salary is restricted to HR and the big boss.

Since each action, and each piece of information that is shown, requires a certain role, it becomes a problem when you need to change something. Suppose we need to rearrange functionality, like some action that can now be performed by all employees, or we want to define a 5th role and redistribute the allowed actions. In those situations you need to recompile your application. And you need to test it again to verify that no security issue was created.  You want to make sure that people can’t see information which they aren’t allowed to see.

With a permission based approach, on the contrary, this is much easier.  Each action or piece of information needs a certain permission to execute or view it. The user gets assigned the permissions that he needs. When you need to reassign some security rules, it is sufficient to assign different permissions to the user to achieve the desired effect. There is no need to recompile your code, and thus in theory you also don’t need to retest your application, since you already verified that only users who have the required permission can perform the action.
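A toy illustration of this point (this is not Octopus code; all names are made up): the permission check in the code stays constant, while the user-to-permission assignment is plain data that can change at runtime or live in a database.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Toy illustration (not Octopus code): the code only ever checks a
// permission; which users hold which permissions is data that can be
// changed without recompiling anything.
public class PermissionDemo {

    private final Map<String, Set<String>> userPermissions = new HashMap<>();

    public void assignPermissions(String user, String... permissions) {
        userPermissions.put(user, Set.of(permissions));
    }

    public boolean isPermitted(String user, String permission) {
        return userPermissions.getOrDefault(user, Set.of()).contains(permission);
    }
}
```

Granting SALARY_UPDATE to another user is now a data change (a different assignment, or a row in a table), not a code change, which is exactly the flexibility described above.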

Build on top of

Permission based security is not available by default in Java EE, but there is an excellent framework that can assist you. Apache Shiro is designed to do permission based security in any kind of application, from a Java SE application like a Swing based one, to any web application.
For JSF there is no specific support, but as described in the blog post of BalusC, you can use it for JSF as well with some small changes.
In the past, I already prototyped the possibility to have security on JSF components, based on the CDI extension CODI (see http://jsfcorner.blogspot.be/2011/03/jsf-security.html).
So I took those two sources of information and bundled them in the Octopus framework.

Setup

In order to add the framework, you need to add the Octopus maven artefact to your project.

  <dependency> 
     <groupId>be.c4j.ee.security</groupId> 
     <artifactId>octopus</artifactId> 
     <version>0.9.2</version> 
  </dependency>

It also brings Shiro web and ExtVal core into your application. The latter is used for intercepting the JSF rendering so that we can add security to specific JSF components, as I’ll show you later in this text.
It also uses CODI (MyFaces Extensions CDI), especially the modules Messaging and Web.  But you need to specify that dependency in your project yourself, as I wanted to give you the freedom to choose which modules you want to include.

The next thing you need to do is to define the URLs that need to be secured in the file securedURLs.ini, placed in the WEB-INF folder.

/pages/** = user

With the above information, we secure all the URLs in the pages path so they can only be visited by authenticated users. All the other pages can be visited anonymously.

The last thing we need now is the page which will be shown when the Octopus framework needs the credentials of the user.  By default it opens the /login.xhtml page (the suffix is defined by the JSF servlet definition) where you can have input fields for username and password.
You can use any JSF component library in combination with the Octopus framework. The layout and structure of the login page are also completely under the control of the developer. You can bind the fields to #{loginBean.userName} and #{loginBean.password}, and the button can execute the #{loginBean.doLogin} method to perform the authentication.
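A minimal sketch of such a login form using only standard JSF components (the EL expressions come from the text above; the layout and everything else is up to you):

```xml
<h:form>
    <h:outputLabel for="username" value="Username"/>
    <h:inputText id="username" value="#{loginBean.userName}"/>

    <h:outputLabel for="password" value="Password"/>
    <h:inputSecret id="password" value="#{loginBean.password}"/>

    <h:commandButton value="Login" action="#{loginBean.doLogin}"/>
</h:form>
```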

If the authentication succeeds, the user is redirected to the original requested page, but now authenticated.

Configuration

In the previous chapter there was already a small configuration step, the securedURLs.ini, but there is one other very important configuration requirement.
Octopus is designed to handle all the security features but leaves the retrieval and definition of users and their rights to the developer. This allows you to use any backend system (database, LDAP or any other system you like) for the storage of the security related data. And there is one exchange point where we supply that information in the format Octopus requires.
So define a bean which implements be.c4j.ee.security.realm.SecurityDataProvider. By implementing the 2 methods of the interface, we can provide Octopus with all the required data.

The method getAuthenticationInfo() is called when Octopus executes the login method we have specified behind our button on the login page.

The parameter AuthenticationToken contains the username (AuthenticationToken.getPrincipal(), a String) and the password (AuthenticationToken.getCredentials(), a char[]).
You don’t need to verify yourself whether the stored password matches the one entered by the user. This is because Octopus, in combination with Shiro, also supports hashed credentials. Therefore you just supply all the information, and Octopus performs the verification itself.
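As an aside, and independent of the Octopus API: a hashed credential is typically produced once (for example when the user account is created) and stored, so the plain password never lives in your user table. A minimal sketch of producing such a hash (SHA-256 and Base64 chosen just for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Illustration only (not the Octopus API): hash a password once when the
// user is created, store the hash, and never store the plain password.
public class HashedCredentialDemo {

    public static String sha256Base64(String password) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
```

In a real application you would add a salt and multiple iterations; Shiro ships a HashedCredentialsMatcher for exactly this purpose.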

You can use the AuthenticationInfoBuilder to build the required return object, an example is shown below.

    @Override 
    public AuthenticationInfo getAuthenticationInfo(AuthenticationToken token) { 
 
        AuthenticationInfoBuilder infoBuilder = new AuthenticationInfoBuilder(); 
        infoBuilder.principalId(principalId++).name(token.getPrincipal().toString()); 
        // TODO: Change for production. Here we use the username as password 
        infoBuilder.realmName("MyApp").password(token.getPrincipal()); 
 
        return infoBuilder.build(); 
    }

When the user name is not known, you are allowed to return null.

The second method, getAuthorizationInfo(), is called whenever the system needs authorization information about the user. Again, you can use a builder, AuthorizationInfoBuilder, to supply Octopus with the permissions assigned to the user.  This information is cached, so the method is only called once for each user.
A more extensive description about the usage of permissions is given in another blog post.

Securing methods

The most important methods you will want to secure are the EJB methods.  They contain your business logic and you want to protect them so that only the allowed users can execute them. Octopus contains the class be.c4j.ee.security.interceptor.AppSecurityInterceptor which can be used as an interceptor on EJB methods. The preferred way is to define the interceptor on all EJBs with the aid of the EJB descriptor file (ejb-jar.xml).

<?xml version="1.0" encoding="UTF-8"?> 
<ejb-jar xmlns="http://java.sun.com/xml/ns/javaee" 
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
        xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
        http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd" 
        version="3.1" > 
    <interceptors> 
        <interceptor> 
            <interceptor-class>be.c4j.ee.security.interceptor.AppSecurityInterceptor</interceptor-class> 
        </interceptor> 
    </interceptors> 
    <assembly-descriptor> 
        <interceptor-binding> 
            <ejb-name>*</ejb-name> 
            <interceptor-class>be.c4j.ee.security.interceptor.AppSecurityInterceptor</interceptor-class> 
        </interceptor-binding> 
    </assembly-descriptor> 
</ejb-jar>

If you do this, then all methods are secured and, as it should be, by default they aren’t accessible anymore (the default is permission denied). There are various annotations to allow users (or require certain permissions) to execute a method.  The Java EE annotation @PermitAll is also supported, but of course use it with care.
More about the securing of EJB methods can be read in the users guide (under construction) and some further posts.

Securing JSF components

The idea of securing JSF components is already described in my other post (http://jsfcorner.blogspot.be/2011/03/jsf-security.html). The functionality is extended within the Octopus framework.  The securedComponent tag received 2 additional attributes, permission and role, where you can specify the named permission or the named role that needs to be checked. More examples of this will be available in the user guide and in following posts.

An example of a secured commandButton
<p:commandButton action="createDepartment" value="Create department" ajax="false"> 
    <sec:securedComponent permission="DEPARTMENT_CREATE" /> 
</p:commandButton>

Tight CDI integration

Octopus uses some advanced CDI features, and overriding the default functionality can be done by using a @Specializes CDI bean.
A lot of artefacts can be injected using the @Inject annotation, and there are automatically created named beans that can verify whether a user has a certain permission or role.
But I also hit the boundaries of CDI.  Because I wanted to make everything as type safe as possible, I used typed beans with generics.  An example is the lookup between the enum which lists the named permissions and the actual Permission objects, PermissionLookup<? extends NamedPermission>.  But the CDI spec imposes restrictions here, and thus I needed to resort to some manual lookup.
More about some of the CDI stuff will be presented in another post. 

Compatibility

The first version of Octopus is tested with Glassfish 3.1.2.2 and TomEE 1.6.  But the idea is to make it compatible with all Java EE 6 and Java EE 7 servers.  Maybe we need some small tweaks to make it work on other servers, but since we only use standard Java EE, it should work on any server.

Status

As this is a first version of the framework, a lot of small issues can pop up.  There is also still a lot of work to be done regarding testing and documentation; these will be addressed in the coming months. A lot of the possible issues will be resolved as the framework is used to create various small show case applications (which will be publicly available) and in the production application which will be started soon.
But for now, consider it a beta.

Conclusion

Octopus is still in its early days of development, but it already has almost all the features that you need to implement a flexible permission based security solution for Java EE.  You can find the code on Bitbucket, https://bitbucket.org/contribute-bitbucket/javaeesecurityfirst (as C4J is an Atlassian Partner), and any feedback is of course very welcome.

Stay tuned for more detailed information about Octopus later on.