Review: Go Programming Blueprints – Solving Development Challenges with Golang

Go Programming Blueprints – Solving Development Challenges with Golang by Mat Ryer
My rating: 4 of 5 stars

Real-world solutions in Go
This book is not a primer on the Go language (for that, I suggest “The Go Programming Language” by Donovan & Kernighan, Addison-Wesley) but it should be read just after learning the basic concepts of the language and its tool chain.
The author introduces and describes in detail several important concepts about the Go way to program, to structure the code and to organize projects.
The examples are clear and simple enough to be easily understood but, at the same time, they can be used in our own projects.
The first project is a web-based chat application which introduces the main web application concepts like HTML templates, request routing, the WebSocket and OAuth protocols, JSON and images, plus some Go specifics like how to use channels to handle client-server communication.
The second project is a WHOIS client which shows how to interact with a RESTful API and how to create a command line utility in Go.
The third and main project is a multi-application system which analyzes Twitter data streams to count specific tags, using MongoDB as the storage solution and a messaging system to decouple the applications, and exposing a REST API for a web-based client. With this project the author shows how to integrate a NoSQL database and a message queuing system and how to create a REST API.
The book includes other projects which cover additional topics like how to interact with the file system.
All projects and their code are well described and, again, they are something we can really use in our own projects.
Highly recommended.

View all my reviews

Review: Learning IBM Bluemix

Learning IBM Bluemix by Sreelatha Sankaranarayanan
My rating: 3 of 5 stars

Interesting introduction to the Bluemix world
This book is a showcase, with working examples, of what we can do with the IBM Bluemix cloud environment and its integrated IBM and third-party services.
The main focus is to show what can be done, not how it works. Most of the technical background is left to links to external resources on the web.
The first step is how to create an application using Bluemix templates (“boilerplates”).
The following chapters cover how to integrate the application with the available services like security, SQL and NoSQL databases and IBM Watson functions, how to use the Bluemix development environment (Git repository, continuous integration and deployment, test..) and how to monitor and tune application performance.
Other chapters are devoted to hybrid (cloud and on-premise) solutions and to mobile applications.
Most of the examples are based on Node.js or Java using the IBM Liberty application server.
A couple of warnings:
– Bluemix IaaS (Infrastructure-as-a-Service) features like Docker and OpenStack virtual machine support are mentioned but not described. The book’s focus is on PaaS (Platform-as-a-Service) features, based on Pivotal’s Cloud Foundry solution.
– Bluemix is a fast-evolving environment, so the book’s screenshots and feature lists can easily become obsolete after a few months.


Unit testing Java data classes immutability with the Mutability Detector

In all our projects, we use data classes which, by definition, contain data (fields) but no (business) logic.

According to best coding practices, a data class should preferably be immutable because immutability means thread safety. The main reference here is Joshua Bloch’s Effective Java book; this post by Yegor Bugayenko is also a very interesting read.

An immutable class has several interesting properties:

  • it should not be sub-classable (i.e. it should be final, or it should have a static factory method and a private constructor)
  • all fields should be private (to prevent direct access)
  • all fields should be written only once, at instance creation time (i.e. they should be final and have no setters)
  • all mutable-type fields (like java.util.Date) should be protected against client write access by reference

An example of immutable class is the following:

    public final class ImmutableBean {

      private final String aStr;
      private final int anInt;

      public ImmutableBean(String aStr, int anInt) {
        this.aStr = aStr;
        this.anInt = anInt;
      }

      public String getAStr() {
        return aStr;
      }

      public int getAnInt() {
        return anInt;
      }
    }

Note: as is frequent in Java, there is a lot of boilerplate code which hides the immutability definitions.
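
The fourth property above (protecting mutable-type fields) deserves its own sketch. In the following hypothetical class (the class name is illustrative), a java.util.Date field is defensively copied both in the constructor and in the getter; without the copies, a caller holding the original reference could mutate our “immutable” instance:

```java
import java.util.Date;

// Sketch: defensive copies protect a mutable-type field
public final class ImmutableEvent {

    private final String name;
    private final Date when; // java.util.Date is mutable

    public ImmutableEvent(String name, Date when) {
        this.name = name;
        this.when = new Date(when.getTime()); // defensive copy on the way in
    }

    public String getName() {
        return name;
    }

    public Date getWhen() {
        return new Date(when.getTime()); // defensive copy on the way out
    }
}
```

Neither mutating the Date passed to the constructor nor mutating the Date returned by the getter can change the state of the instance.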

Libraries like Project Lombok make our life easier: we can use the @Value annotation to define an immutable class as follows:

    @Value
    public class LombokImmutableBean {
        String aStr;
        int anInt;
    }

which is a lot more readable.

Should we (unit) test a class to check its immutability?

In a perfect world, the answer is no.

With the help of our preferred IDE’s automatic code generation features, or with libraries like Lombok, it is not difficult to make a class immutable.

But in the real world, human errors can happen when we create the class or when we (or maybe a junior member of the team) modify it later on. What happens if a new field is added without final and a setter is generated using the IDE code generator? The class is no longer immutable.

It is important to guarantee that the class is and remains immutable throughout the project’s lifetime.

And with the help of the Mutability Detector we can easily create a test which checks that a class is immutable.

As usual, Maven/Gradle dependencies can be found on Maven Central.

To test our ImmutableBean we can create the following JUnit test class:

    import static org.mutabilitydetector.unittesting.MutabilityAssert.assertImmutable;

    import org.junit.Test;

    public class ImmutableBeanTest {

      @Test
      public void testClassIsImmutable() {
        assertImmutable(ImmutableBean.class);
      }
    }

The test will fail if the class is not immutable.

For example, if a field is not final and it has a setter method, the test fails and the error message is very descriptive:

org.mutabilitydetector.unittesting.MutabilityAssertionError: 
Expected: it.gualtierotesta.testsolutions.general.beans.ImmutableBean to be IMMUTABLE
 but: it.gualtierotesta.testsolutions.general.beans.ImmutableBean is actually NOT_IMMUTABLE
Reasons:
    Field is not final, if shared across threads the Java Memory Model will not guarantee it is initialised before it is read. 
        [Field: aStr, Class: it.gualtierotesta.testsolutions.general.beans.ImmutableBean]
    Field [aStr] can be reassigned within method [setaStr] 
        [Field: aStr, Class: it.gualtierotesta.testsolutions.general.beans.ImmutableBean]

The complete project can be found on my Test Solutions gallery project on GitHub. See module general.

The approach I suggest is to use Lombok without any immutability test. If Lombok cannot be used (for example, in a legacy project), use the Mutability Detector to assert that the class really is immutable.

Vaadin dependencies in Maven projects

The Vaadin framework has several dependencies but not all of them should be included in our war/ear artifacts.

The following list describes the main modules of Vaadin 7.6/7.7 and their meaning and usage:

  • server – the core of the framework. It has the following (transitive) dependencies: vaadin-shared and vaadin-sass-compiler
  • themes – compiled version of the standard Vaadin themes
  • client-compiled – compiled version of the standard Vaadin widget set
  • client – Vaadin and GWT classes for widgets
  • client-compiler – widget compiler based on GWT (Google Web Toolkit)
  • push – optional module. It includes the support for push protocols (server to client) thanks to the Atmosphere framework
  • shared – common module code. It is included as a dependency of the server module
  • sass-compiler – SASS to CSS compiler, used at build time and at run time (“on-the-fly” compilation). It is included as a dependency of the server module

Depending on the project requirements, each of the above modules should or should not be included as a project dependency. We can identify two possible scenarios:

  1. Project without a custom widget set. It can have a custom theme
  2. Project with a custom widget set

In the first case (without a custom widget set) we need the following modules:

  • server
  • themes
  • push (optional)
  • client-compiled

while, if we have a custom widget set, we need to compile the widgets so the dependencies become:

  • server
  • themes
  • push (optional)
  • client (for build only)
  • client-compiler (for build only)

Note: the compiled custom widgets are included in our artifact

The following list summarizes the Maven dependencies:

  • server – artifactId vaadin-server, scope compile, required: yes
  • themes – artifactId vaadin-themes, scope compile, required: yes
  • client-compiled – artifactId vaadin-client-compiled, scope runtime, required only if the project does not use a custom widget set
  • client – artifactId vaadin-client, scope provided, required only with a custom widget set
  • client-compiler – artifactId vaadin-client-compiler, scope provided, required only with a custom widget set. See also the note below.
  • push – artifactId vaadin-push, scope compile, optional
  • shared – artifactId vaadin-shared, a vaadin-server dependency. No need to specify it in the pom.xml
  • sass-compiler – artifactId vaadin-sass-compiler, a vaadin-server dependency. No need to specify it in the pom.xml

Note: the vaadin-client-compiler dependency is automatically added to the classpath by the Vaadin Maven plugin (vaadin-maven-plugin) when the custom widget set needs to be compiled.
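
As a sketch, the first scenario (no custom widget set) might translate into a pom.xml fragment like the following; the version value is illustrative and should match the Vaadin release actually in use:

```xml
<properties>
    <vaadin.version>7.7.0</vaadin.version>
</properties>

<dependencies>
    <!-- Core of the framework; pulls in vaadin-shared and vaadin-sass-compiler -->
    <dependency>
        <groupId>com.vaadin</groupId>
        <artifactId>vaadin-server</artifactId>
        <version>${vaadin.version}</version>
    </dependency>
    <!-- Compiled standard themes -->
    <dependency>
        <groupId>com.vaadin</groupId>
        <artifactId>vaadin-themes</artifactId>
        <version>${vaadin.version}</version>
    </dependency>
    <!-- Pre-compiled standard widget set: needed at runtime only -->
    <dependency>
        <groupId>com.vaadin</groupId>
        <artifactId>vaadin-client-compiled</artifactId>
        <version>${vaadin.version}</version>
        <scope>runtime</scope>
    </dependency>
    <!-- Optional: server push support -->
    <dependency>
        <groupId>com.vaadin</groupId>
        <artifactId>vaadin-push</artifactId>
        <version>${vaadin.version}</version>
    </dependency>
</dependencies>
```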

Java EE schedulers

Java EE application servers have native scheduling support and, in most of the applications, there is no need to include external dependencies like the famous Quartz scheduler library.

The Java EE Timer Service, available in the Java EE 6 and 7 full profiles, gives us many options to define the scheduling interval and what happens if we stop and restart the application which contains our scheduler.

A Java EE scheduler can be:

  • persistent: the application server saves the scheduling events which occur while the application is down, in order not to lose them
  • automatic: simple scheduler definition; most of the details are handled by the application server
  • programmatic: we have full control of all scheduler parameters.

To decide which is the best option, we should first answer the following questions:

1. Is it allowed to miss some scheduling events?

If we stop or restart the application (for example, during an update), the scheduler will be stopped and some scheduling events could be lost.

The scheduler can be configured to save the missed events and to execute them when the application is up again. The application server uses an internal database (usually a Java DB like Derby) to store the missed events.

This is a persistent scheduler.

Note: the application server will generate all the missed events at application (re)start. This burst of events is configurable in frequency and delay. See your application server documentation for the details.

We also have the option not to persist the scheduling events, which will then be lost if the application is not running.

In the non-persistent case, the scheduler life cycle is the same as the application’s: it is created at application startup and destroyed at application shutdown.

On the contrary, a persistent scheduler survives application restarts; it is simply sleeping when the application is not running.

How to choose?

If the scheduled functionality is business critical and we cannot afford to miss an event, the persistent scheduler is the way to go.

In all other cases, the non-persistent scheduler is lighter (no DB is used) and easier to manage (less hassle when updating the application, because there is no burst of scheduling events at application restart; the scheduler is always created anew at application start).

2. Will the application run in a cluster?

In a cluster, more than one instance of our application is running (one instance per cluster node) and each instance has its own copy of our scheduler.

But we need just one scheduler running among all the cluster nodes, otherwise we will have multiple copies of the same event.

Every application server has its own way to handle the “multiple scheduler instances” problem (for example, see [link 2] for WebSphere) but, in general, the scheduler is required to be persistent when we are using a cluster.

3. Should the scheduling interval be programmable at production?

Another important question to be answered: should we be able to change the scheduling after the application has been deployed?

If the scheduling parameters (e.g. the frequency) are fixed, the automatic scheduler is the best solution because it is very simple to code: just one annotation (or a few XML lines if you prefer the old way).

On the contrary, if the scheduler should be somehow configurable, the best solution is the programmatic scheduler, which allows us to define all the scheduler parameters at application startup, reading them from a property file, a DB or any configuration solution we are using.

Remember:

  • the automatic scheduler schedule is defined at build time
  • the programmatic scheduler schedule is defined at application start time

Automatic scheduler

It’s very easy to define an automatic scheduler:

  1. Create a singleton EJB executed at startup
  2. Create a method which will be invoked at every scheduling event

Note: the complete code can be found in the article project [see link 3].

First step:

@Startup
@Singleton
public class MyScheduler

The @javax.ejb.Startup annotation asks the EJB container to create the EJB (and so our scheduler) at application startup.

The @javax.ejb.Singleton annotation forces the EJB container to create just one instance.

Important: the scheduler is used by the application server (the EJB container); it should never be instantiated by the rest of the application code.

Then we need the method which will be invoked at scheduling events:

@Schedule(/** scheduling parameters */)
public void doSomeThing() {..}

The method should be public and return void.

The @javax.ejb.Schedule annotation defines:

  • the scheduling interval, in cron format [see link 4]
  • the name of the scheduler (you could have many schedulers in the application)
  • a persistent boolean flag which defines if the scheduler is persistent or not

For example:

@Schedule(
    minute = "*/15",
    hour = "*",
    info = "15MinScheduler",
    persistent = false )

which defines a non-persistent scheduler which runs every 15 minutes.

See AutomaticPersistentScheduler and AutomaticNonPersistentScheduler classes in the article project [link 3] for a complete example.

Note: there is also the @Schedules annotation [see link 1] which allows us to define multiple @Schedule definitions.

It is useful when there are schedule requirements which cannot be expressed in a single cron definition.
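
Putting the pieces above together, a minimal automatic scheduler could look like the following sketch (the class and method names are illustrative; the code runs inside an EJB container, so it is not a standalone program):

```java
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// Created once by the EJB container at application startup
@Startup
@Singleton
public class My15MinScheduler {

    // Invoked by the container every 15 minutes; non-persistent,
    // so missed events are not replayed at application restart
    @Schedule(minute = "*/15", hour = "*",
              info = "15MinScheduler", persistent = false)
    public void doSomeThing() {
        // business logic here
    }
}
```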

Programmatic scheduler

The programmatic scheduler is more complex to build but it gives us complete freedom to define the scheduler parameters.

We have more steps:

  1. Create a singleton EJB executed at startup
  2. Lookup the TimerService resource
  3. Create the scheduler at EJB initialization
  4. Create a @Timeout method

First step is the same as the automatic scheduler:

@Startup
@Singleton
public class MyScheduler

Then (second step) we need to look up the application server’s timer service, but injection helps us:

@Resource
private TimerService timerService;

At application startup, the EJB container will inject a TimerService instance which allows us to interact with the Timer service. For example, we can list (and even delete) all the schedulers defined for the application.

In our case, the Timer service will be used to create the new scheduler as follows (third step):

String minuteSchedule = "*/15";
String hourSchedule = "*";
ScheduleExpression schedule = new ScheduleExpression()
 .minute(minuteSchedule)
 .hour(hourSchedule);

The javax.ejb.ScheduleExpression defines the cron [see link 4] schedule like the @Schedule annotation.

The very important difference between @Schedule and ScheduleExpression is that the former is fixed at build time: to change the schedule parameters (for example, from every 15 min to every 30 min) we need to change the class code and rebuild and redeploy the application.

In the latter case (ScheduleExpression), the schedule parameters (in the example above, the variables minuteSchedule and hourSchedule) can be defined and changed at application startup, reading them, for example, from a property file or a connected DBMS.

TimerConfig timerConfig = new TimerConfig();
timerConfig.setInfo("ProgrammaticPersistentScheduler");
timerConfig.setPersistent(true);

The javax.ejb.TimerConfig instance gives us the option to define the name of the scheduler (setInfo(String)) and whether it is persistent or not (setPersistent(boolean)).

Using the ScheduleExpression and TimerConfig instances, we can ask the Timer service to create the scheduler (a calendar timer, to be more precise):

timerService.createCalendarTimer(schedule, timerConfig);

The createCalendarTimer() method returns a javax.ejb.Timer instance which can be used to query the timer (for example, for when the next event will happen) or even to destroy the scheduler.

The last step is to define the method in the class which will be invoked at every scheduling event:

@Timeout
public void doSomeThing() {..}

The method should be public and return void.

And we have our scheduler up and running.
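
The four steps above can be combined into a single sketch (again container-managed code; the hard-coded schedule values stand in for values read from a property file or a DB):

```java
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.ScheduleExpression;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Timeout;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Startup
@Singleton
public class MyScheduler {

    @Resource
    private TimerService timerService;

    @PostConstruct
    public void init() {
        // In a real project these values would be read from a property
        // file, a DB or any other configuration source
        String minuteSchedule = "*/15";
        String hourSchedule = "*";
        ScheduleExpression schedule = new ScheduleExpression()
                .minute(minuteSchedule)
                .hour(hourSchedule);
        TimerConfig timerConfig = new TimerConfig();
        timerConfig.setInfo("ProgrammaticPersistentScheduler");
        timerConfig.setPersistent(true);
        timerService.createCalendarTimer(schedule, timerConfig);
    }

    @Timeout
    public void doSomeThing() {
        // invoked by the container at every scheduling event
    }
}
```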

Conclusions

The Java EE standard gives us many options to define a scheduler which runs our code periodically and repetitively. There is no need for additional project dependencies.

 

Links

  1. Oracle Java EE6 Tutorial on the Timer Service API
  2. IBM WebSphere 8.x Creating timers using the EJB timer service for enterprise beans
  3. Article project on GitHub
  4. Cron on Wikipedia

 

 

Review: Groovy in Action

Groovy in Action by Dierk König
My rating: 5 of 5 stars

A must-have for any serious Groovy developer.

Groovy in Action (Second Edition) is, at the same time, a detailed overview of the language and core library characteristics and an in-depth description of how they work.

The first part is dedicated to the language, with the usual list of syntax element descriptions (operators, data structures, control structures..), including Groovy’s unique features like being at once a dynamically and statically typed language and supporting both object-oriented and functional programming styles, not to mention the scripting capabilities.

The second part is devoted to the Groovy core library: the Groovy Development Kit (GDK), how to work with databases and web services and how to handle JSON and XML.

The final part is dedicated to unit testing, concurrency and, of course, domain-specific languages, one of the traditional Groovy application areas.

I found particularly interesting chapter 16, on how to integrate Groovy in a Java application (using, for example, Groovy as a dynamic business rules engine), and chapter 20, an introduction to the Groovy ecosystem (Gradle, Grails..).

The authors show not only a very strong knowledge of the language and its ecosystem but also an understanding of how Groovy fits into real-world applications. Very interesting and useful is the adoption of assertion statements to better explain the code examples.

This book, with its 900+ pages, is not targeting occasional Groovy users, but I think it is a must-have (as an introduction at the beginning, as a reference later) for anybody intending to use Groovy seriously.


Tutorial: Correct SLF4J logging usage and how to check it

SLF4J is a very popular logging facade but, like all libraries we use, there is a chance that we use it in a wrong, or at least not optimal, way.

In this tutorial we will list common logging errors and how we can detect them using FindBugs. We will also mention PMD and Sonar Squid checks when relevant.

We will use two external FindBugs plugins which add logging detectors to FindBugs.

The first one is a SLF4J only plugin by Kengo Toda which contains SLF4J detectors only.

The second plugin is the popular FB Contrib which contains, among many others, some logging detectors.

For how to use FindBugs plugins, please refer to the following posts:

Note: in all examples we will assume the following imports:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

1. Logger definition

Wrong way:

W1a. Logger log = LoggerFactory.getLogger(MyClass.class);
W1b. private Logger logger = LoggerFactory.getLogger(MyClass.class);
W1c. static Logger LOGGER = LoggerFactory.getLogger(AnotherClass.class);

Correct way:

C1a. private static final Logger LOGGER = LoggerFactory.getLogger(MyClass.class);
C1b. private final Logger logger = LoggerFactory.getLogger(getClass());

General rule: the logger should be final and private because there are no reasons to share it with other classes or to re-assign it.

On the contrary, there is no general agreement on whether the logger should be static or not. The SLF4J plugin favors the non-static version (C1b) while PMD (“LoggerIsNotStaticFinal” rule) and Sonar (squid rule S1312) prefer a static logger (C1a), so both options should be considered valid.

Additional info:

Please note that

  • in the static version (C1a), the logger name is usually in uppercase characters as all constant fields. If not, PMD will report a “VariableNamingConventions” violation.
  • in both cases, the suggested name is “logger/LOGGER” and not “log/LOG” because some naming conventions avoid too-short names (less than four characters); moreover, “log” is a verb, better suited to a method name.
  • W1c is wrong because it refers to a class (AnotherClass) which is not the class where the logger is defined. In 99% of the cases, this is due to a copy & paste from one class to another.

Related FindBugs (SLF4J plugin) checks:

  • SLF4J_LOGGER_SHOULD_BE_PRIVATE
  • SLF4J_LOGGER_SHOULD_BE_NON_STATIC
  • SLF4J_LOGGER_SHOULD_BE_FINAL
  • SLF4J_ILLEGAL_PASSED_CLASS

 

2. Format string

Wrong way:

W2a. LOGGER.info("Obj=" + myObj);
W2b. LOGGER.info(String.format("Obj=%s", myObj));

Correct way:

C2. LOGGER.info("Obj={}",myObj);

General rule: the format string (the first argument) should be constant, without any string concatenation. Dynamic content (the myObj value in the example) should be added using the placeholders (the ‘{}’).

The motivation is simple: we should delay logging message creation until after the logger has established whether the message should be logged or not, depending on the current logging level. If we use string concatenation, the message is built anyway, regardless of the logging level, which is a waste of CPU and memory resources.
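
The same lazy-evaluation idea can be demonstrated without SLF4J using the JDK’s own java.util.logging, whose Supplier overloads defer message creation exactly as SLF4J placeholders do (the demo class and the toString counter are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLoggingDemo {

    // Counts how many times the expensive message is actually built
    static int toStringCalls = 0;

    static final Object expensive = new Object() {
        @Override
        public String toString() {
            toStringCalls++;
            return "expensive";
        }
    };

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setLevel(Level.INFO); // FINE messages are disabled

        // Deferred: the Supplier is evaluated only if FINE is enabled,
        // so toString() is never called here
        logger.fine(() -> "Obj=" + expensive);

        // Eager: the concatenation (and toString()) happens at the call
        // site, regardless of the logging level, then the message is discarded
        logger.fine("Obj=" + expensive);
    }
}
```

After running main, the counter is 1: only the eager call paid the cost of building a message that was never logged.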

Related FindBugs (SLF4J plugin) checks:

  • SLF4J_FORMAT_SHOULD_BE_CONST Format should be constant
  • SLF4J_SIGN_ONLY_FORMAT Format string should not contain placeholders only

Related FindBugs (FB Contrib plugin) checks:

  • LO_APPENDED_STRING_IN_FORMAT_STRING Method passes a concatenated string to SLF4J’s format string

 

3. Placeholder arguments

Wrong way:

W3a. LOGGER.info("Obj={}",myObj.getSomeBigField());
W3b. LOGGER.info("Obj={}",myObj.toString());
W3c. LOGGER.info("Obj={}",myObj, anotherObj);
W3d. LOGGER.info("Obj={} another={}",myObj);

Correct way:

C3a. LOGGER.info("Obj={}",myObj);
C3b. LOGGER.info("Obj={}",myObj.log());

General rule: the placeholder argument should be an object (C3a), not a method return value (W3a), in order to postpone its evaluation until after the logging level analysis (see previous paragraph). In the W3a example, the method getSomeBigField() will always be called, regardless of the logging level. For the same reason, we should avoid W3b, which is semantically equivalent to C3a but always incurs the toString() method invocation.

Solutions W3c and W3d are wrong because the number of placeholders in the format string does not match the number of placeholder arguments.

Solution C3b could be somewhat misleading because it includes a method invocation, but it can be useful whenever myObj contains several fields (for example, it is a big JPA entity) and we do not want to log all its contents.

For example, let’s consider the following class:

public class Person {
    private String id;
    private String name;
    private String fullName;
    private Date birthDate;
    private Object address;
    private Map<String, String> attributes;
    private List<String> phoneNumbers;
}

Its toString() method will most probably include all the fields. Using solution C3a, all their values will be printed in the log file.

If you do not need all this data, it is useful to define a helper method like the following:

public String log() {
    return String.format("Person: id=%s name=%s", this.id, this.name);
}

which prints relevant information only. This solution is also lighter on CPU and memory than toString().

What is relevant? It depends on the application and on the object type. For a JPA entity, I usually include in the log() method the ID field (in order to let me find the record in the DB if I need all the columns’ data) and, maybe, one or two important fields.

Under no circumstances should password fields and/or sensitive info (phone numbers,…) be logged. This is an additional reason not to log using toString().
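
As a self-contained sketch, here is the Person class with such a log() helper; the two-argument constructor is illustrative (not part of the original example):

```java
import java.util.Date;
import java.util.List;
import java.util.Map;

public class Person {
    private String id;
    private String name;
    private String fullName;
    private Date birthDate;
    private Object address;
    private Map<String, String> attributes;
    private List<String> phoneNumbers;

    // Illustrative constructor, added to make the sketch usable
    public Person(String id, String name) {
        this.id = id;
        this.name = name;
    }

    // Logs the relevant, non-sensitive fields only
    public String log() {
        return String.format("Person: id=%s name=%s", this.id, this.name);
    }
}
```

A call like LOGGER.info("Obj={}", person.log()) would then print only the ID and the name, never the phone numbers or other sensitive fields.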

Related FindBugs (SLF4J plugin) checks:

  • SLF4J_PLACE_HOLDER_MISMATCH

 

4. Debug messages

IMPORTANT: rule #4 (see the 5 rules article) guides us to use guarded debug logging:

if (LOGGER.isDebugEnabled()) {
    LOGGER.debug("Obj={}", myObj);
}

Using SLF4J, if the placeholder argument is an object reference (see solutions C3a/C3b), we can avoid the if in order to keep the code cleaner.

So it is safe to use the following:

LOGGER.debug("Obj={}", myObj);

 

5. Exceptions

Proper exception logging is an important support for problem analysis, but it is easy to neglect its usefulness.

Wrong way:

W5a. catch (SomeException ex) { LOGGER.error(ex);}..
W5b. catch (SomeException ex) { LOGGER.error("Error:" + ex.getMessage());}..

Correct way:

C5. catch (SomeException ex) { LOGGER.error("Read operation failed: id={}", idRecord, ex);}..

General rules:

  1. Do not discard the stack trace information by logging only getMessage() (see W5b) instead of the complete exception. The stack trace often includes the real cause of the problem, which is frequently another exception raised by the underlying code. Logging only the message will prevent us from discovering the real cause of the problem.
  2. Do show significant information (for the human who will analyze the log file) in the logging message: a text explaining what we wanted to perform when the exception was raised (not the exception kind or messages like “error”: we already know something bad happened). What we need to know is what we were doing and on which data.

The C5 example tells us we were trying to read the record with a specific ID, whose value has been written in the log together with the message.

Please note that C5 uses one placeholder in the format string but passes two additional arguments. This is not an error but a special pattern which SLF4J recognizes as the exception logging case: the last argument (ex in the C5 example) is treated by SLF4J as a Throwable (exception), so it should not have a placeholder in the format string.

Related FindBugs (SLF4J plugin) checks:

  • SLF4J_MANUALLY_PROVIDED_MESSAGE: the message should not be based on Exception getMessage()