Monday, 22 August 2016

Java EE Security Workshop

Earlier this month, I organized a workshop on Java EE Security, because a lot of developers don't know all the features and possibilities we have, or how we can use them. It didn't cover just the spec related stuff; I tried to make it as practical as possible and included some popular frameworks such as PrimeFaces, ScribeJava, Nimbus JOSE, etc.

The workshop explained various concepts and examples regarding 'information security'. That is the term for the classic security in web applications and REST style endpoints, where we need to establish the identity of the user and determine which actions he is allowed to perform, or which data he is allowed to see.

For the attendees, the step by step instructions for the examples are gathered in a 'book', together with a short explanation of the concepts and a description of why and how each example works.
Of course, there was more material for the attendees who followed the workshop, but the book is available for free and can be downloaded from https://leanpub.com/javaeesecurityworkshop

Here is a short overview of what can be found in the document:

  • What are authentication and authorization
  • The difference between encryption, encoding and hashing.
  • A comparison of simple BASIC authentication usage: the do-it-yourself way, standard Java EE, and popular framework usage like Apache Shiro.
  • What are the authentication methods?
  • How to integrate FORM basic authentication with PrimeFaces
  • The goals of the Java EE Security API JSR (JSR-375)
  • An LDAP example with Soteria, the RI of JSR 375
  • OAuth2 explained
  • Google OAuth2 authentication with the help of ScribeJava
  • Roles versus permissions, and why the latter is better.
  • What is JWT and how can we use it for security purposes
  • Using JWT to uniquely, securely identify the other party in a REST style communication using Nimbus-JOSE.
  • Introduction to the features of the Octopus framework (http://javaeesquad.blogspot.be/2014/03/octopus-framework.html)

I hope that a lot of you, just like the very enthusiastic attendees, learn some new things related to security in Java based web applications and REST style endpoints.

Monday, 4 April 2016

Using JWT for Process authentication of a JAX-RS endpoint

Introduction

Most of the time, when we have to deal with authentication, we have a real person on the other side, our end user.
But there is another group of application interactions where the other party is another process, a process that wants to use the data we have available and that it needs.

So how can we make sure that we can trust a request we receive and that we can send the data?

With a JWT (JSON Web Token) that is signed with an RSA key, we have a simple way of authenticating the process.

JWT

Maybe you have already heard about JWT; if not, this is a quick introduction, together with an explanation of how you can use it to authenticate a process.

A JSON Web Token may seem like one long string of characters, but it consists of 3 parts, each separated by a . (dot), like xxxxx.yyyyy.zzzzz

Each part is base64URL encoded so that we can safely transfer it over an HTTP connection.

Header

The first part, the xxx's, is the header and contains general information like the type of token (there exist a bunch of related concepts) and the signing algorithm that is used.

The JSON representation of the header (before it is base64URL encoded) could look like this:

{
  "alg": "RS512",
  "typ": "JWT",
  "kid": "cbeba027-39e1-4c70-a584-77081422e16a"
}

The 'kid' property will become clear when we talk about the signing of the token.

Payload or claims

The second part, the yyy's in the above example, is the payload or the claims. There are a few standard properties that you can use in this section, and you can also use your own keys to transfer data.

The JSON for the application I made looked something like this:

{
  "exp": 1458918709,
  "sub": "xDataScience",
  "aud": "MyApp",
  "iat": 1458918649
}

The 'sub' property defines the subject, 'aud' (audience) is informative and indicates the application the token can be used for, and 'iat' (issued at) and 'exp' (expiration) are time stamps that can be used to limit the reuse of the token,
and thus limit the chance that a token is captured and reused by a malicious third party (replay / playback attacks).

Signature

The first 2 parts are just encoded values and thus readable by anyone who can see the token. That is the reason why we have the last part, the zzz's in our example.

When the other process uses the private part of an RSA key to sign the header and payload, you get the third part of the JWT, which completes it.

The signing makes sure that
  • The payload can't be changed between the sender and receiver because the verification of the signature will then fail.
  • We can trust the other party because the JAX-RS endpoint can verify the signature with the public part of the RSA key used to sign it.

The 'kid' property in the header is an additional check to determine which RSA key is used for the signing.
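
To make this concrete, here is a minimal sketch of creating and signing such a token, assuming a recent version of the Nimbus JOSE + JWT library is on the classpath; the class name, the key handling and the concrete claim values are illustrative only.

import java.security.interfaces.RSAPrivateKey;
import java.util.Date;

import com.nimbusds.jose.JOSEException;
import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.crypto.RSASSASigner;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

public class TokenCreator {

    // Builds the header and claims shown above and signs them with the private part of the RSA key.
    public String createToken(RSAPrivateKey privateKey, String keyId) throws JOSEException {
        Date now = new Date();
        JWTClaimsSet claims = new JWTClaimsSet.Builder()
                .subject("xDataScience")
                .audience("MyApp")
                .issueTime(now)
                .expirationTime(new Date(now.getTime() + 60 * 1000)) // token valid for 1 minute
                .build();

        JWSHeader header = new JWSHeader.Builder(JWSAlgorithm.RS512)
                .keyID(keyId) // ends up as the 'kid' property in the header
                .build();

        SignedJWT jwt = new SignedJWT(header, claims);
        jwt.sign(new RSASSASigner(privateKey));

        return jwt.serialize(); // the xxxxx.yyyyy.zzzzz string
    }
}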

Usage scenario

So let us review step by step what you can do, as the developer of the JAX-RS endpoint and client, to set up a secure way of authenticating.

  1. As JAX-RS endpoint creator, I create an RSA key.
  2. The JAX-RS endpoint creator chooses an identifier to refer to this key (I call this the api key) because we possibly need to support multiple processes which read information.
  3. The JAX-RS endpoint creator gives this key to the other party, which will create the client. The public part of the RSA key stays with the JAX-RS endpoint creator, who will use it for verification.
  4. The JAX-RS client creator generates a JWT (according to the specifications of the creator regarding the claims) and signs it with the private part of the RSA key.
  5. The api key and the JWT are sent to the JAX-RS endpoint in the headers
    x-api-key : cbeba027-39e1-4c70-a584-77081422e16a
    Authorization : Bearer ey......
  6. The JAX-RS endpoint can use the x-api-key header value to look up the corresponding RSA key and verify the signature of the JWT (see the filter sketch after this list).
  7. If the check passes, the JAX-RS endpoint knows who sent the request and can verify the claims part to see if the other requirements are met (for example the timestamps, to reduce the replay / playback attacks).
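
On the endpoint side, a JAX-RS ContainerRequestFilter is one possible place to implement steps 6 and 7. The following is only a sketch, under the same assumption of Nimbus JOSE + JWT on the classpath; the class name and the lookupPublicKey helper are hypothetical.

import java.io.IOException;
import java.security.interfaces.RSAPublicKey;

import javax.annotation.Priority;
import javax.ws.rs.Priorities;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jwt.SignedJWT;

@Provider
@Priority(Priorities.AUTHENTICATION)
public class JWTAuthenticationFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        String apiKey = requestContext.getHeaderString("x-api-key");
        String authorization = requestContext.getHeaderString("Authorization");

        if (apiKey == null || authorization == null || !authorization.startsWith("Bearer ")) {
            requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
            return;
        }

        try {
            SignedJWT jwt = SignedJWT.parse(authorization.substring("Bearer ".length()));
            // lookupPublicKey is a hypothetical helper that maps the api key to the registered RSA public key.
            RSAPublicKey publicKey = lookupPublicKey(apiKey);
            if (publicKey == null || !jwt.verify(new RSASSAVerifier(publicKey))) {
                requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
                return;
            }
            // Step 7: additional claim checks ('exp', 'iat', 'aud') would go here.
        } catch (Exception e) {
            requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }

    private RSAPublicKey lookupPublicKey(String apiKey) {
        // Hypothetical: retrieve the public key that was stored for this api key.
        return null;
    }
}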

Conclusion

By using a JWT, we can easily determine which process made the request to the JAX-RS endpoint and allow the information exchange between processes. There are frameworks available for all kinds of programming languages, so we can even use this approach when the data is transferred between different languages / technologies.


Have fun.

Monday, 29 June 2015

CDI event when transaction is finished

Introduction

CDI events are a very handy way of loosely coupling event driven code. At some point in your code you raise the event, and in some other area, you execute some functionality when this event occurs.
And there is no relation or code dependency between the 2 areas except the CDI event payload class.

For example (the producer and observer class names are only illustrative):

public class DataUpdatedEvent {
}

public class DataProducer {

    @Inject
    private Event<DataUpdatedEvent> dataUpdatedEvent;

    public void doSomething() {
        dataUpdatedEvent.fire(new DataUpdatedEvent());
    }
}

public class DataObserver {

    public void onDataUpdate(@Observes DataUpdatedEvent dataUpdatedEvent) {
        // React on the event
    }
}

Synchronous

By default, the code is executed synchronously. But there are some other use case scenarios already possible without the need for the new, under development, CDI 2.0 spec, which will also allow asynchronous execution.

A scenario that I used recently was to have some cache mechanism available for data stored in a database. I know there are a lot of full blown solutions like Infinispan, EHCache or TerraCotta, to name a few. But I wanted to have a simple Map instance which contains the data from a database table that changes very rarely.

When you place the event fire within an EJB stateless method, you don't get the required behaviour.


@Stateless
public class DataBoundary {

    @PersistenceContext
    private EntityManager em;
    @Inject
    private Event<DataUpdatedEvent> dataUpdatedEvent;
    public void saveData(Data data) {
        // Obviously, some more clever things to do
        em.persist(data);
        dataUpdatedEvent.fire(new DataUpdatedEvent());
    }
}


Since the execution of the CDI event mechanism is synchronous, the reading of the database table is done before the new data is committed and thus we are reading the old situation.

@Observes(during)

The @Observes annotation has an element called during, which you can use to define when the observer method (the one with the @Observes annotation on a parameter) is executed.
By default, as already mentioned, that is immediately after the event is raised. But you can also specify the value TransactionPhase.AFTER_SUCCESS here.

public void onDataUpdate(@Observes(during = TransactionPhase.AFTER_SUCCESS) DataUpdatedEvent dataUpdatedEvent) {
    // Load the cache again
}
When the event is fired within a transaction, the observer method is executed after the transaction is committed successfully to the database.

That is what we need in our simplistic caching example. The code will run after all the data is in the database table and thus the cache will be using the latest data.
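
A minimal sketch of such a cache bean could look like the following; the class name is only illustrative, the query assumes a JPA entity called Data with a getId() method, and DataUpdatedEvent is the payload class from the first example.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.enterprise.event.TransactionPhase;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@ApplicationScoped
public class DataCache {

    @PersistenceContext
    private EntityManager em;

    private volatile Map<Long, Data> cache = new ConcurrentHashMap<>();

    public Data get(Long id) {
        return cache.get(id);
    }

    // Runs only after the transaction that fired the event has been committed,
    // so the query below sees the freshly persisted data.
    public void onDataUpdate(@Observes(during = TransactionPhase.AFTER_SUCCESS) DataUpdatedEvent event) {
        Map<Long, Data> refreshed = new ConcurrentHashMap<>();
        for (Data data : em.createQuery("SELECT d FROM Data d", Data.class).getResultList()) {
            refreshed.put(data.getId(), data);
        }
        cache = refreshed;
    }
}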


Have fun

Monday, 18 May 2015

Miniature post: calling super in HttpServlet methods

Introduction

Lately I was writing some servlets for the Octopus SSO modules which I'm developing right now. Not being used to doing this anymore, I made a silly mistake, and it took quite some time to find out what I did wrong.

Servlets

Although they are still the basis of all web frameworks today, we don't write them that much anymore (at least, I don't).

So, I needed a simple servlet which performs some actions when it is called through the GET method. Something like this (some code left out, see further on):


@WebServlet(urlPatterns = "/octopusSSOCallback")
public class SSOCallbackServlet extends HttpServlet {
   @Override
   protected void doGet(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse) throws ServletException, IOException {
      // Some things to keep track of the user
      SavedRequest savedRequest = WebUtils.getAndClearSavedRequest(httpServletRequest);
      httpServletResponse.sendRedirect(savedRequest != null ? savedRequest.getRequestUrl() : getRootUrl(httpServletRequest));
   }

}

When I did some tests, I got the following exception in the log:

java.lang.IllegalStateException: UT010019: Response already committed

So the redirect couldn't be performed because something else had already closed the response and informed the browser what to do.

So I tried all the obvious things that I could possibly have done wrong. And I started removing code parts in the area marked with the comment // Some things to ... to find what caused the behaviour.

But nothing solved the issue.

The next thing I tried was to remove the redirect itself, because that was the statement which gave me the exception. And I replaced it with some hello world message stuff.

The exception was gone, of course, but now I received an HTTP status 405: Method Not Allowed.

But how could the method not be allowed when I had written code for it and verified, by debugging, that it gets executed? I was totally lost and didn't have any clue what was going on. I even thought for a moment that the application server had a bug. But that was absurd, because I wrote some other servlets a few months ago on the same server.

And then some question on a forum led me to the solution.

As a good practice, and because the IDE generated the code that way, I always call the super method when I override a method. That is mostly the idea when you override a method: do the standard thing and then the additional things you want.

So my code contained the following statement as the first line:

super.doGet(httpServletRequest, httpServletResponse);

And the default functionality of the HttpServlet methods is to answer with an HTTP status 405.
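
Putting the pieces together, the offending version of the doGet() method shown earlier looked roughly like this:

@Override
protected void doGet(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse) throws ServletException, IOException {
    // The culprit: HttpServlet.doGet() answers with a 405 and commits the response.
    super.doGet(httpServletRequest, httpServletResponse);

    // Some things to keep track of the user
    SavedRequest savedRequest = WebUtils.getAndClearSavedRequest(httpServletRequest);
    // This redirect then fails with "UT010019: Response already committed".
    httpServletResponse.sendRedirect(savedRequest != null ? savedRequest.getRequestUrl() : getRootUrl(httpServletRequest));
}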

Conclusion

You should always verify what the default functionality of a method is, even when your coding standards say that you should call the super method. :)

Sunday, 8 March 2015

Getting notified when Java EE Application is ready

Introduction

Within the Java EE ecosystem, you have various options when you want to perform some kind of action when the application is just deployed or the server is up and running again.

This text gives a review of them; in other words, how to get notified when Java EE is up and running.

EJB

One of the best known options, probably because it is the easiest one, is the @Startup annotation on a @Singleton EJB bean (since EJB 3.1, December 2009, included in Java EE 6).

Singleton beans can have the indication that they need to be created and initialised when the container is booting up.  This gives us the option to perform some initialisation for the application.

@Singleton
@Startup
public class StartupEJB {

    @PostConstruct
    public void init() {
        // EJB Ready
    }
}

Servlet

The oldest option is using the servlet infrastructure. The ServletContextListener, available since Servlet 2.3 (September 2001), allows you to perform the initialisation steps you want.

public class MyServletContextListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Servlet ready
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {

    }
}

The downside of this approach is that you need to register the class in the web.xml configuration file.

What are your options if you want to have a solution with annotations only where you don’t need any configuration in an XML file?

You can annotate an HttpServlet with the @WebServlet annotation, where you indicate the loadOnStartup member (Servlet 3.0, December 2009, Java EE 6).

@WebServlet(loadOnStartup = 2, urlPatterns = "/test")
public class ServletStartup extends HttpServlet {

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config); // keep the default initialisation of the servlet
        // Servlet Ready
    }
}

But why should we create a servlet which we only use for the init method and not for some real functionality? This is not a real option.

Another option is including DeltaSpike in your application, or rather using it, because there is a good chance that you can use some other goodies from that CDI library. They have created a 'bridge' between the ServletContextListener and a CDI event. They register a listener, with the help of a web.xml fragment which can be located within a jar file, and fire a CDI event (compatible with Servlet 3.0 containers; works with any Java EE 6 application server).

public class DSServletStartup {

    public void onCreate(@Observes @Initialized ServletContext context) {
        // Servlet Ready (DS version)
    }
}

PS. The @Initialized used here is not the CDI one, because that one is not available in Java EE 6.

This way, you have a configuration-less way to get notified when the servlet system is ready.

CDI

The CDI specification was the last one of the major 4 to define the possibility to get notified of the availability of the system.
Only recently, with the CDI 1.1 version (May 2013, Java EE 7), did you get the possibility to receive a CDI event when the container is ready.

public class CDIStartup {

    public void postConstruct(@Observes @Initialized(ApplicationScoped.class) Object o) {
        // CDI Ready
    }
}

JSF

The last framework that I will include in this overview has also had the possibility to give some feedback on startup for quite some time.
You have the system event PostConstructApplicationEvent (together with SystemEventListener, JSF 2.0, July 2009).

This listener, with the correct event, must be defined within the faces configuration file (faces-config.xml) to have it executed when the JSF system is ready.

<application>
    <!-- Application is started -->
    <system-event-listener>
        <system-event-listener-class>be.rubus.squad.startup.JsfStartup</system-event-listener-class>
        <system-event-class>javax.faces.event.PostConstructApplicationEvent</system-event-class>
    </system-event-listener>
</application>

public class JsfStartup implements SystemEventListener {

    @Override
    public void processEvent(SystemEvent systemEvent) throws AbortProcessingException {
        // JSF Ready
    }

    @Override
    public boolean isListenerForSource(Object o) {
        return true;
    }
}

Since it requires some configuration in an XML file, I also created some kind of bridge so that a CDI event is emitted instead.  (see Jerry SystemStartup)
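
Such a bridge can be as small as the following sketch (the class names are illustrative and this is not the actual Jerry code); it assumes CDI 1.1, where the BeanManager can be reached through CDI.current(), and the listener still has to be registered in faces-config.xml like the one above.

import javax.enterprise.inject.spi.CDI;
import javax.faces.event.AbortProcessingException;
import javax.faces.event.SystemEvent;
import javax.faces.event.SystemEventListener;

public class JsfToCdiStartupBridge implements SystemEventListener {

    @Override
    public void processEvent(SystemEvent systemEvent) throws AbortProcessingException {
        // Republish the JSF PostConstructApplicationEvent as a CDI event.
        CDI.current().getBeanManager().fireEvent(new JsfSystemReadyEvent());
    }

    @Override
    public boolean isListenerForSource(Object source) {
        return true;
    }
}

// Hypothetical event payload class, observed elsewhere with @Observes JsfSystemReadyEvent.
class JsfSystemReadyEvent {
}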

Order?

Just out of curiosity, I created a web application where I coded the 4 feedback mechanisms. And I compared the order in which they occurred on WildFly 8.2 and GlassFish 4.1, the 2 main Java EE 7 application servers available today.


WildFly                       GlassFish
EJB                           EJB
CDI                           JSF
Servlet                       CDI
JSF                           Servlet
@WebServlet(loadOnStartup)    @WebServlet(loadOnStartup)

It is no surprise that the order is different on the 2 servers, although only JSF is ready at a different moment. To my knowledge, no specification ever describes the order in which the different subsystems of a Java EE server are initialised.
But it really doesn't matter; I don't think there is any use case where you need to rely on the order of the startup.

What initialisation to choose

Whenever the initialisation needs some database access, the EJB option is the natural choice because we have transaction support available. In those situations, @Startup is easy to use.

The other way that I use quite often is the JSF initialisation, since a lot of the applications I'm involved in are JSF based. That is the reason why I created a small utility class to convert the PostConstructApplicationEvent of JSF to a CDI event.

Have fun.



Monday, 2 February 2015

Auditing with JPA EntityListener

Introduction


A lot of projects need some kind of audit trail. They want to know who changed a record and when it was last changed. Or, as we were recently asked, how you can keep a record of who read some data from the database.

Most of the time, there exists a database solution for these requirements, but in other cases it is easier to achieve with the JPA implementation.

This text describes 2 scenarios where you can use JPA to do some auditing on your users with the help of an EntityListener.

EntityListener

A JPA EntityListener defines callbacks for the lifecycle of the entity. And as with any callback mechanism of a system, it allows you to extend the system with some generic functionality.
There are 7 callbacks defined: a before and after for persist, update and remove, and one method called after the load is done.
  • PrePersist
  • PostPersist
  • PreUpdate
  • PostUpdate
  • PreRemove
  • PostRemove
  • PostLoad

Keep record of reads

We recently got a request from a client who wanted to know which user read data from specific tables, and at what time.
One of the lifecycle callbacks is javax.persistence.PostLoad. It gets called when JPA has read the record from the database and after it has been added to the persistence context.
As we wanted to separate the code for this auditing requirement from the entity code, we placed the @PostLoad annotation in a callback listener class. Something like the following code:

public class Audit {

    @PostLoad
    public void auditUsage(GenericEntity entity) {

    }
}

The GenericEntity is an abstract class (see further on why it is not an interface) that all of our entity classes extend. It contains, among others, a method getId() to retrieve the value of the Id field (primary key).

This Audit class is then registered on the entities we need to track with the use of the @EntityListeners annotation.

@Entity
@Table(name = "t_user")
@EntityListeners({Audit.class})
public class Employee extends GenericEntity implements Serializable {

Our first idea was to separate the writing of the audit information from the Audit class itself. A loose coupling could be achieved by using the CDI event mechanism.

But it turns out that CDI injection is not available in callback listener classes. The spec (JSR 338) specifies that CDI injection must be available, but both GlassFish and WildFly have issues with it.

But ‘injection’ of the EntityManager is possible and thus we can persist the information to the database from within the Audit class.

@PersistenceContext
private EntityManager em;

@PostLoad
public void auditUsage(GenericEntity entity) {
    AuditRead auditRead = new AuditRead(entity.getClass().getSimpleName(), entity.getId());
    em.persist(auditRead);
}

A small remark: using the entity manager during a callback method is discouraged in the specification. But you are allowed to perform some JMS operation. That way you can also guarantee the correct logging of all read operations.

Record update information

With triggers on the database, we can store the last time a record was changed. But the username (the user who last updated the record) can't be stored like this, because the application server connects to the database with a generic application user.
So the user name needs to be filled in by your application.

The @PreUpdate and @PrePersist lifecycle callbacks can be used for this purpose with the help of your security/user information. 

The logged in user information is most of the time available in some CDI bean placed in the session scope. As we already mentioned, CDI injection is buggy in the callback listener class.

You can retrieve your bean in one of the following ways (see the sketch after this list):
- Retrieve the BeanManager from JNDI and get your bean directly from it (various sources on the internet show you the 4 lines of code that you need to retrieve a bean from the BeanManager)
- Write a CDI extension that captures the BeanManager for you, so that you don't have to retrieve it every time from JNDI, and then perform the 4 lines mentioned above.
- Use the BeanProvider class from DeltaSpike, where you can retrieve, with a single line, the bean instance which implements a certain class (type to be exact)
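
A small sketch of the first and the last option; UserInfo is a hypothetical session scoped bean holding the logged in user.

import java.util.Set;

import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;
import javax.naming.InitialContext;

import org.apache.deltaspike.core.api.provider.BeanProvider;

public class CurrentUserLookup {

    // Option 1: the JNDI + BeanManager way (roughly the '4 lines' mentioned above).
    public UserInfo fromBeanManager() throws Exception {
        BeanManager beanManager = (BeanManager) new InitialContext().lookup("java:comp/BeanManager");
        Set<Bean<?>> beans = beanManager.getBeans(UserInfo.class);
        Bean<?> bean = beanManager.resolve(beans);
        return (UserInfo) beanManager.getReference(bean, UserInfo.class,
                beanManager.createCreationalContext(bean));
    }

    // Option 3: the single DeltaSpike line.
    public UserInfo fromDeltaSpike() {
        return BeanProvider.getContextualReference(UserInfo.class);
    }
}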

Our GenericEntity abstract class has methods and fields to keep track of the user who created or last modified the entity. The database tables need to have corresponding columns, of course.
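
Putting it together, the update part of the listener could look like the sketch below; the setter names on GenericEntity and the UserInfo bean with its getUserName() method are again hypothetical.

import java.util.Date;

import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;

import org.apache.deltaspike.core.api.provider.BeanProvider;

public class Audit {

    @PrePersist
    public void beforePersist(GenericEntity entity) {
        entity.setCreatedBy(currentUserName());
        entity.setCreatedAt(new Date());
    }

    @PreUpdate
    public void beforeUpdate(GenericEntity entity) {
        entity.setLastModifiedBy(currentUserName());
        entity.setLastModifiedAt(new Date());
    }

    private String currentUserName() {
        // UserInfo is the hypothetical session scoped bean from the sketch above.
        return BeanProvider.getContextualReference(UserInfo.class).getUserName();
    }
}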

Conclusion

With the JPA EntityListeners, we can handle most of the auditing requirements which we need in our applications. We can keep track of who is reading some records or who updated some data for the last time.
But beware when using the callback listener classes: CDI injection is most of the time not implemented according to the specification document, and the EntityManager should be used with caution here.



Friday, 19 December 2014

CDI for Java SE already standardised with DeltaSpike

Introduction

One of the things scheduled for the upcoming CDI 2.0 release is the standardisation of the usage in a Java SE environment.

The context part of CDI, the different scopes like RequestScoped or ApplicationScoped, isn't the most useful thing in a Java SE environment. But having a dependency injection and event mechanism, on the other hand, is very handy.

Weld and OpenWebBeans

But you don’t have to wait until the release of a CDI 2.0 compatible implementation before you can use it in a Java SE environment.

Weld and OpenWebBeans are at the moment the 2 most important implementations. They already offer the possibility to use CDI in a Java SE environment.

But both frameworks have different ways to start up the CDI environment because in CDI 1.x it isn’t standardised yet.

DeltaSpike is a collection of CDI extensions, and one of the things it provides is a uniform way of starting CDI in a Java SE environment. And you can use either OWB or Weld as your implementation.

DeltaSpike Container Control module

Here the uniform startup is defined: there is one api module which defines the CDI implementation neutral classes (so not related to Weld or OWB), and there are 2 implementation modules, one for each CDI implementation.

Other things you need are:
- The DeltaSpike core api and implementation modules
- The OWB or Weld implementation with their transitive dependencies, if any

A sample Maven project file can be derived from one of the DeltaSpike examples, or you can use the one I have assembled, see further on.

When the Maven config is in place, you can, for example, start the CDI container from your main method as follows:

public static void main(String[] args) {
    CdiContainer cdiContainer = CdiContainerLoader.getCdiContainer();
    cdiContainer.boot();
    ContextControl contextControl = cdiContainer.getContextControl();
    contextControl.startContext(ApplicationScoped.class);
    //contextControl.startContexts();
}
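
As a small continuation inside that same main method (HelloService being a hypothetical application scoped bean, and BeanProvider being DeltaSpike's helper class, imported from org.apache.deltaspike.core.api.provider), resolving a bean and shutting the container down again could look like this:

    // Resolve a bean from the booted container with DeltaSpike's BeanProvider ...
    HelloService helloService = BeanProvider.getContextualReference(HelloService.class);
    helloService.sayHello();

    // ... and clean up when the application ends.
    contextControl.stopContexts();
    cdiContainer.shutdown();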

Uber JAR

When you create a Java SE application, most of the time you will create an uber jar with a proper manifest file so that you can start your application easily (the executable jar) with the command

java -jar myProgram.jar

This can be achieved by using the Maven Shade plugin. You can find various resources on the internet describing how you can integrate and configure it in your project.

But using this procedure for distributing your CDI based application with DeltaSpike has a few pitfalls, although workarounds are available. They aren't related to DeltaSpike, nor to OWB or Weld; they are a consequence of the deployment format.

The first issue that you should be aware of is that some files can be present in multiple dependency jar files. Files like beans.xml and javax.enterprise.inject.spi.Extension are present multiple times in the dependencies of your Maven project.

If you don’t specify a certain configuration of the shade plugin, these files will overwrite each other and thus your program will not function.

You should use:
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>

Another issue that I found is that the asm transitive dependency, used by OWB, isn't properly packed into the uber jar file.
So you need to add the asm:asm:3.3.1 dependency to your own pom, otherwise the application won't start due to some missing classes.

And the last pitfall is that a lot of frameworks aren't CDI compatible. In a Java EE application, this isn't a problem since there is no beans.xml file in the jar files of those frameworks. This means that the classes in these jar files aren't considered as CDI beans and thus no problem occurs during the startup of the application.
But in an uber jar, all classes end up in the same jar file, which has a beans.xml file. Those classes, or rather some packages, can be excluded most easily when you use Weld, as it supports a custom configuration in the beans.xml file which allows you to exclude some packages.

<weld:scan>
    <weld:exclude name="org.jboss.weld.**" />
</weld:scan>

Starter project

To get you started easily with a Java SE application which uses CDI 1.x, I created a basic maven application which has everything configured correctly.
You can download it here.

It has 2 profiles, one for OWB and the other for Weld. There is also a third profile, called shade, which is needed in case you are using the Shade plugin on a project which uses OWB. It makes sure that the asm transitive dependency is included in your final jar file.

Conclusion

So you don’t have to wait for CDI 2.0 to use CDI in Java SE, you can use it already today. And with the use of the DeltaSpike SE support module, you can even hide the details of starting the OWB or Weld container, which makes it even easier.


Have fun with it.