Review: Groovy in Action

Groovy in Action by Dierk König
My rating: 5 of 5 stars

A must-have for any serious Groovy developer.

Groovy in Action (Second Edition) is both a detailed overview of the language and its core libraries and an in-depth description of how they work.

The first part is dedicated to the language, with the usual description of syntax elements (operators, data structures, control structures...), including unique Groovy features such as supporting both dynamic and static typing and both object-oriented and functional programming styles, not to mention its scripting capabilities.

The second part is devoted to the Groovy core library: the Groovy Development Kit, working with databases and web services, and handling JSON and XML.

The final part is dedicated to unit testing, concurrency and, of course, domain-specific languages, one of the traditional Groovy application areas.

I found particularly interesting chapter 16, on how to integrate Groovy into a Java application (for example, using Groovy as a dynamic business rules engine), and chapter 20, which introduces the Groovy ecosystem (Gradle, Grails...).

The authors show not only a very strong knowledge of the language and its ecosystem but also an understanding of how Groovy fits into real-world applications. The use of assertion statements to explain the code examples is very interesting and useful.

This book, with its 900+ pages, is not targeted at occasional Groovy users, but I think it is a must-have (as an introduction at the beginning, as a reference later) for anybody who intends to use Groovy seriously.


Tutorial: Correct SLF4J logging usage and how to check it

SLF4J is a very popular logging facade but, like any library we use, there is a chance that we use it in a wrong, or at least suboptimal, way.

In this tutorial we will list common logging errors and how we can detect them using FindBugs. We will also mention PMD and Sonar Squid checks when relevant.

We will use two external FindBugs plugins which add logging detectors to FindBugs.

The first one is the SLF4J plugin by Kengo Toda, which contains SLF4J detectors only.

The second plugin is the popular FB Contrib which contains, among many others, some logging detectors.

For how to use FindBugs plugins, please refer to my previous posts on the topic.

Note: in all examples we will assume the following imports:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

1. Logger definition

Wrong way:

W1a. Logger log = LoggerFactory.getLogger(MyClass.class);
W1b. private Logger logger = LoggerFactory.getLogger(MyClass.class);
W1c. static Logger LOGGER = LoggerFactory.getLogger(AnotherClass.class);

Correct way:

C1a. private static final Logger LOGGER = LoggerFactory.getLogger(MyClass.class);
C1b. private final Logger logger = LoggerFactory.getLogger(getClass());

General rule: the logger should be final and private because there is no reason to share it with other classes or to re-assign it.

On the other hand, there is no general agreement on whether the logger should be static or not. The SLF4J plugin favors the non-static version (C1b), while PMD (the “LoggerIsNotStaticFinal” rule) and Sonar (squid rule S1312) prefer a static logger (C1a), so both options should be considered valid.
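
As a minimal sketch (MyService is just a hypothetical class name), the two accepted styles look like this in context:

    public class MyService {

        // C1a style: private, static, final, uppercase name
        private static final Logger LOGGER = LoggerFactory.getLogger(MyService.class);

        // C1b style: private, final instance logger resolved via getClass(), lowercase name
        private final Logger logger = LoggerFactory.getLogger(getClass());
    }

In real code you would of course pick only one of the two declarations per class.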

Additional info:

Please note that

  • in the static version (C1a), the logger name is usually in uppercase, like all constant fields. If not, PMD will report a “VariableNamingConventions” violation.
  • in both cases, the suggested name is “logger/LOGGER” and not “log/LOG” because some naming conventions avoid names that are too short (fewer than four characters). Moreover, “log” is a verb, better suited to a method name.
  • W1c is wrong because it refers to a class (AnotherClass) which is not the class where the logger is defined. In 99% of cases, this is due to a copy & paste from one class to another.

Related FindBugs (SLF4J plugin) checks:

  • SLF4J_LOGGER_SHOULD_BE_PRIVATE
  • SLF4J_LOGGER_SHOULD_BE_NON_STATIC
  • SLF4J_LOGGER_SHOULD_BE_FINAL
  • SLF4J_ILLEGAL_PASSED_CLASS

 

2. Format string

Wrong way:

W2a. LOGGER.info("Obj=" + myObj);
W2b. LOGGER.info(String.format("Obj=%s", myObj));

Correct way:

C2. LOGGER.info("Obj={}",myObj);

General rule: the format string (the first argument) should be constant, without any string concatenation. Dynamic content (the myObj value in the example) should be added using placeholders (the ‘{}’).

The motivation is simple: we should delay building the logging message until after the logger has established whether the message should be logged or not, depending on the current logging level. If we use string concatenation, the message is built anyway, regardless of the logging level, which is a waste of CPU and memory resources.
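
As an illustration (assuming LOGGER is configured at INFO level, so DEBUG is disabled), the placeholder version never even calls toString() on the argument:

    // Hypothetical object with an observable toString()
    Object myObj = new Object() {
        @Override
        public String toString() {
            System.out.println("toString() invoked");
            return "expensive representation";
        }
    };

    LOGGER.debug("Obj=" + myObj);   // toString() runs anyway: the String is built before debug() is called
    LOGGER.debug("Obj={}", myObj);  // toString() is never called: the message is formatted only if DEBUG is enabled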

Related FindBugs (SLF4J plugin) checks:

  • SLF4J_FORMAT_SHOULD_BE_CONST Format should be constant
  • SLF4J_SIGN_ONLY_FORMAT Format string should not contain placeholders only

Related FindBugs (FB Contrib plugin) checks:

  • LO_APPENDED_STRING_IN_FORMAT_STRING Method passes a concatenated string to SLF4J’s format string

 

3. Placeholder arguments

Wrong way:

W3a. LOGGER.info("Obj={}",myObj.getSomeBigField());
W3b. LOGGER.info("Obj={}",myObj.toString());
W3c. LOGGER.info("Obj={}",myObj, anotherObj);
W3d. LOGGER.info("Obj={} another={}",myObj);

Correct way:

C3a. LOGGER.info("Obj={}",myObj);
C3b. LOGGER.info("Obj={}",myObj.log());

General rule: the placeholder argument should be an object (C3a), not a method return value (W3a), in order to postpone its evaluation until after the logging level check (see the previous paragraph). In the W3a example, the method getSomeBigField() will always be called, regardless of the logging level. For the same reason, we should avoid W3b, which is semantically equivalent to C3a but always incurs the toString() method invocation.

Solutions W3c and W3d are wrong because the number of placeholders in the format string does not match the number of placeholder arguments.

Solution C3b could be somewhat misleading because it includes a method invocation, but it is useful whenever myObj contains several fields (for example, a big JPA entity) and we do not want to log all its contents.

For example, let’s consider the following class:

public class Person {
    private String id;
    private String name;
    private String fullName;
    private Date birthDate;
    private Object address;
    private Map<String, String> attributes;
    private List<String> phoneNumbers;
    // getters, setters, toString()...
}

Its toString() method will most probably include all the fields. Using solution C3a, all their values will be printed in the log file.

If you do not need all this data, it is useful to define a helper method like the following:

public String log() {
    return String.format("Person: id=%s name=%s", this.id, this.name);
}

which prints relevant information only. This solution is also lighter on CPU and memory than toString().
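
Used in a logging statement (person being a hypothetical Person instance), the helper follows the C3b pattern:

    // Logs only the id and name fields, not the whole entity
    LOGGER.info("Saving {}", person.log());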

What is relevant? It depends on the application and on the object type. For a JPA entity, I usually include in the log() method the ID field (so I can find the record in the DB if I need all the column data) and, maybe, one or two important fields.

Password fields and/or sensitive info (phone numbers, ...) should never be logged. This is an additional reason not to log using toString().

Related FindBugs (SLF4J plugin) checks:

  • SLF4J_PLACE_HOLDER_MISMATCH

 

4. Debug messages

IMPORTANT: rule #4 (see the 5 rules article below) guides us to use guarded debug logging:

    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug("Obj={}", myObj);
    }

Using SLF4J, if the placeholder argument is an object reference (see solutions C3a/C3b), we can avoid the if and keep the code cleaner.

So it is safe to use the following:

    LOGGER.debug("Obj={}", myObj);
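
The guard is still worthwhile when building an argument is itself expensive. In a hypothetical sketch (buildExpensiveReport() is a placeholder for a costly call):

    // Without the guard, buildExpensiveReport() would run even when DEBUG is disabled,
    // because arguments are evaluated before debug() is invoked
    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug("Report={}", buildExpensiveReport());
    }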

 

5. Exceptions

Proper exception logging is an important aid to problem analysis, but it is easy to neglect its usefulness.

Wrong way:

W5a. catch (SomeException ex) { LOGGER.error(ex.toString()); }
W5b. catch (SomeException ex) { LOGGER.error("Error:" + ex.getMessage()); }

Correct way:

C5. catch (SomeException ex) { LOGGER.error("Read operation failed: id={}", idRecord, ex); }

General rules:

  1. Do not remove the stack trace information by logging only getMessage() (see W5b) instead of the complete exception. The stack trace often includes the real cause of the problem, which is frequently another exception raised by the underlying code. Logging only the message will prevent us from discovering the real cause of the problem.
  2. Do show significant information (for the human who will analyze the log file) in the logging message: a text explaining what we wanted to perform when the exception was raised (not the exception kind or messages like “error”: we already know something bad happened). What we need to know is what we were doing and on which data.

The C5 example tells us we were trying to read the record with a specific ID, whose value is written in the log together with the message.

Please note that C5 uses one placeholder in the format string but passes two additional arguments. This is not an error but a special pattern which SLF4J recognizes as the exception logging case: the last argument (ex in the C5 example) is treated by SLF4J as a Throwable, so it should not have a placeholder in the format string.
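
As a sketch of the whole pattern (Record, readRecord() and SomeException are hypothetical placeholders):

    try {
        Record record = readRecord(idRecord);
        // ...
    } catch (SomeException ex) {
        // One placeholder for idRecord; ex is the last argument, so SLF4J treats it
        // as a Throwable and prints the complete stack trace
        LOGGER.error("Read operation failed: id={}", idRecord, ex);
    }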

Related FindBugs (SLF4J plugin) checks:

  • SLF4J_MANUALLY_PROVIDED_MESSAGE: the message should not be based on Exception getMessage()

The 5 Java logging rules

Logging is a critical factor that should always be taken into account during software development.

When something bad happens in production, the log files are usually the starting point of our fault analysis. And, often, they are the only information in our hands to understand what happened and what the root cause of the problem is.

It is therefore very important to have the required information logged properly.

The following five logging rules are a way to check and, possibly, improve how we handle the logging in our code.

Please note that we will not discuss how to configure a logging engine, nor will we compare them to each other.

Rule 1. Logging is for readers

The logging messages should be meaningful to whoever will read the log files, not only to whoever wrote the (logging) code.

It seems a very obvious rule, but it is often violated.

For example, let’s consider a log message like the following

ERROR: Save failure - SQLException .....

Saving what? This message could mean something to the developer, but it is completely useless for the poor guy who is looking into the production problem.

A much better message is

ERROR: Save failure- Entity=Person, Data=[id=123 surname="Mario"] - SQLException....

which explains what we wanted to save (here a Person, a JPA entity) and the relevant contents of the Person instance. Please note the word relevant, instead of the word all: we should not clutter log files with useless info like a complete dump of all entity fields. The entity name and its logical keys are usually enough to identify a record in a table.
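
In SLF4J terms, a hedged sketch of such a message (person and ex being hypothetical local variables) could be:

    // Log the logical key fields of the entity, not every column
    LOGGER.error("Save failure - Entity=Person, id={} surname={}", person.getId(), person.getSurname(), ex);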

Rule 2. Match the logging levels with the execution environment

All logging facades and engines available in the Java ecosystem have the concept of logging level (ERROR, INFO...), with the possibility to filter out messages whose level is too low.

For example, java.util.logging uses the following levels: SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST (plus ALL and OFF).

In contrast, the two most popular logging facades, Apache Commons Logging and SLF4J, prefer the following levels: FATAL, ERROR, WARN, INFO, DEBUG, TRACE.

Logging level filtering should depend on the stage of development your code is in: the logging level in production should not be the same as in test/integration environments.

Moreover, the logging level should also depend on the code owner. In general, our own application code should have more detailed logging than any third-party library we are using. There is not much point in seeing, for example, Apache Commons debug messages in our log files.

I usually configure the logging as follows:

  • Production: INFO level for my code and WARN for third party libraries.
  • Test / Integration: DEBUG level for my code and WARN (or INFO if needed) for third party libraries.
  • Development: whatever makes sense

Note: I personally discourage the use of the TRACE/FINEST levels (and I’m not alone, see for example here). I don’t see a big difference between DEBUG and TRACE, and it is usually difficult for young team members to decide which one, DEBUG or TRACE, to use. Following the KISS principle, I suggest using the ERROR, WARN, INFO and DEBUG levels only.

Rule 3. Remove coding help logging before the commit.

While coding, we usually add logging messages, using the logger or System.out, to better understand what is happening in our application during execution/debugging sessions.

Something like:

    void aMethod(String aParam) {
        LOGGER.debug("Enter in aMethod");
        if ("no".equals(aParam)) {
            LOGGER.debug("User says no");
            // ...

The main purpose of these messages is to trace application behaviour by showing which method is invoked and by dumping internal variables and method parameter values. Quite popular among non-TDD devotees.

Unfortunately, these messages usually do not have much meaning once the code has been released (to test and then to production).

So this rule simply says: once you have finished developing, remove all temporary and unnecessary logging messages just before committing the code to the SCM system (Git, SVN...) in use.

This rule does not require removing all DEBUG messages, only the ones that have no meaning once the application is completed and released; in other words, once we are reasonably sure that the application is working properly.

Rule 4: Check log level before logging DEBUG messages

According to Rule 2, in production logs we will show ERROR, WARN and INFO messages only, but our code can contain many DEBUG messages that should not affect production execution.

Every time you want to log a DEBUG message (all the ones which remain after Rule 3), add in front of it a check whether DEBUG logging is enabled:

    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug(...);
    }

This will prevent your code from building the log messages and calling the logger. It is for efficiency of program execution in production.

Rule 5: Know your logger

Using the logger methods can have a significant cost:

  • building the message string
  • collecting the data to be included in the message string

We should review the javadoc of the selected logging facade/engine and understand the most efficient way to use its logger.

For example, we could create a message like this:

    LOGGER.info("Person name is " + person.getName());

which creates unnecessary String instances.

Using SLF4J, the correct usage is:

    LOGGER.info("Person name is {}", person.getName());

where the format string is constant and the final message is built only if logging is enabled. See here for more details.

Tutorial: logging during tests

Logging is a popular solution to show what the software is doing while it is running.

But what happens to the logging when we are unit testing our application with JUnit/TestNG?

During automated test execution, we are usually not interested in seeing the logging messages because our main interest is the test results.

It would be nice to be able to disable logging messages during standard automated tests.

On the other hand, there are some cases where logging messages can be useful at test time. A typical example is when we are writing missing tests for some legacy code we do not want to touch before having good test coverage in place. In this case, logging messages on the console can help us understand the code and how it works.

So we can identify three use cases:

  1. Running mode, when the application is executed: the logging is enabled and configured as required by the application
  2. Test execution mode, when the automated tests are executed all together: the logging messages should be disabled

  3. Test creation mode, when we are creating new tests: the logging messages are useful and it would be nice to have them on the console

Let's see an example based on Maven and SLF4J, the popular logging facade.

Complete project can be found here.

Typical SLF4J configuration in the project pom.xml is the following:

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>${slf4j.version}</version>
            <scope>runtime</scope>
        </dependency>

The slf4j-api dependency is the main SLF4J library, while the second one, slf4j-log4j12, references one of the possible logging engines (LOG4J) that can work below SLF4J.

This is the running mode configuration. In this example, the project resources will contain a LOG4J properties file which dictates what, how and where LOG4J should log.

The same situation applies when we use a different logging engine like java.util.logging (JDK) or Logback. See the SLF4J manual for more details.

In Test execution mode we do not want logging, so we can simply add the following test-scoped dependency:

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-nop</artifactId>
            <version>${slf4j.version}</version>
            <scope>test</scope>
        </dependency>

The NOP Logger (slf4j-nop) simply discards all logging.

Important: the dependency order in the pom.xml is significant. Put the slf4j-nop dependency just after the slf4j-api dependency, so it will be used during the tests even if there is another logging engine in the dependencies.

When there are multiple logging engine dependencies in the pom.xml, SLF4J will in any case show a message like the following:

    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:.m2/repository/org/slf4j/slf4j-nop/1.7.12/slf4j-nop-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:.m2/repository/org/slf4j/slf4j-log4j12/1.7.12/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]

To summarize, a complete configuration example is the following:

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-nop</artifactId>
            <version>${slf4j.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>${slf4j.version}</version>
            <scope>runtime</scope>
        </dependency>

Again, the last dependency should be adjusted to match the desired logging engine.

As I mentioned before, logging can be useful when creating new tests. In this case (Test creation mode), we can temporarily replace the slf4j-nop dependency with slf4j-simple, which enables the SLF4J Simple logger, as shown below.
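
The temporary test dependency mirrors the slf4j-nop one:

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>${slf4j.version}</version>
            <scope>test</scope>
        </dependency>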

The logging messages will now be shown in the console during test execution as System.err messages. No configuration file is needed.

By default, the Simple logger does not log DEBUG messages. Standard logging level is INFO.

You can customize the Simple logger behaviour using the system properties documented here.

A smart way to define the Simple logger configuration is to use the Surefire plugin configuration section (http://maven.apache.org/surefire/maven-surefire-plugin/examples/system-properties.html):

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.18.1</version>
                <configuration>
                    <systemPropertyVariables>
                        <org.slf4j.simpleLogger.defaultLogLevel>DEBUG</org.slf4j.simpleLogger.defaultLogLevel>
                        <org.slf4j.simpleLogger.showDateTime>true</org.slf4j.simpleLogger.showDateTime>
                    </systemPropertyVariables>
                </configuration>
            </plugin>
        </plugins>
    </build>

In the systemPropertyVariables section we can create tags named after the Simple logger properties. In the above example, DEBUG level and timestamp logging are enabled.

October 2015 Java releases

Java 9 (next)

A new Java version is under development. Preliminary JDK builds and information can be found at https://jdk9.java.net/

Java 9 general availability is planned for September 2016.

Java 8 (current)

Java 8 is the current Java version.

Java 8 JDK/JRE updates 65/66 (8u65/8u66) have been released. In addition to several fixes, this release includes support for additional currencies (fund codes). Release 66 has a bigger set of bug fixes.

Complete release notes can be found here.

JDK 8 can be downloaded here.

Java 7 (end-of-life)

Java 7 update 79/80 is the last public release. Updates are available to Oracle commercial customers only.

JDK 7u80 can be downloaded from Java 7 Archive downloads.

Java 6 (end-of-life)

Java 6 update 45 is the last public release. Updates are available to Oracle commercial customers only.

JDK 6u45 can be downloaded from Java 6 Archive downloads.

NetBeans project groups

I’m involved in several projects, and during my working day I may need to switch from one project to another.

In NetBeans I usually create a project group for each project, to keep them well separated.

From the Projects window, select the Project Groups... menu option. The Manage Groups window allows you to select the project group you want to switch to, or to create a new one.

But what is a project group?

It can be a single project (any NetBeans-recognized project type like Ant, Maven, ...). If the project is hierarchical, NetBeans will also include all submodules in the project group.

It can also be a directory with many different projects. Nice if you like to pack similar projects together (by customer, by type…).

Finally, a project group can also be free: every project you open after the project group creation will be automatically included in it.

If you do nothing, you are in a special free project group named None.

Project groups are a nice feature but there is a weak point if we compare NetBeans with the other most popular IDEs: IDEA and Eclipse.

When NetBeans starts, the last used project group is opened again, with the consequent project loading and scanning. This is annoying if you want to work on a different project, because you have to wait until the loading completes before switching to the desired project.

In contrast, IDEA has a nice startup GUI which shows the list of past projects in addition to the possibility to create or import a new one. Eclipse has a less flexible but still efficient solution based on workspace selection. Both appear before loading the last opened projects, so they are very fast.

The NetBeans solution is based on the following command line parameters:

  • --open-group NAME
    open a project group by name
  • --close-group
    close any open project group
  • --list-groups
    list available project groups

Invoking NetBeans with --open-group PROJ1 will force NetBeans to immediately close the last opened project group and open the specified one (PROJ1 in this example).

For each project, on Linux and OS X you can create a dedicated invocation script, while on Windows you can create a dedicated shortcut with a dedicated command line. Tricky, but it works.

Another alternative (actually the one I prefer) is to add the --close-group parameter to the netbeans_default_options entry in the netbeans_home/etc/netbeans.conf file.

The `–close-group’ parameter force NetBeans to switch to the None project group. Once inside NetBeans you can open the project group you like.

For more insight into this feature, have a look at Enhancement 168556.

NetBeans startup commands are documented here.

Java: how to check jar version at runtime

Class loading issues can happen in complex execution environments. The situation gets worse when different versions of the same library are “loaded” and we are not sure which one our application is picking up.

To help debug this issue, we can check at runtime which version of the library has been loaded by our application.

Code like the following can help:

import java.net.URL;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

import org.apache.commons.mail.HtmlEmail;

// Example using HtmlEmail from Apache Commons Email
// (run inside a method that declares or handles IOException)
Class<?> theClass = HtmlEmail.class;

// Find the path of the compiled class
String classPath = theClass.getResource(theClass.getSimpleName() + ".class").toString();
System.out.println("Class: " + classPath);

// Find the path of the lib which includes the class
String libPath = classPath.substring(0, classPath.lastIndexOf("!"));
System.out.println("Lib:   " + libPath);

// Find the path of the file inside the lib jar
String filePath = libPath + "!/META-INF/MANIFEST.MF";
System.out.println("File:  " + filePath);

// We look at the manifest file, getting two attributes out of it
Manifest manifest = new Manifest(new URL(filePath).openStream());
Attributes attr = manifest.getMainAttributes();
System.out.println("Manifest-Version: " + attr.getValue("Manifest-Version"));
System.out.println("Implementation-Version: " + attr.getValue("Implementation-Version"));

The above code shows how to find the jar file path and how to read the contents of a file inside it (usually the manifest) to extract some information.
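
As a complementary shortcut (not part of the snippet above), when the jar manifest defines Implementation-Version you can also query the java.lang.Package API; note that it may return null if the attribute is missing:

    // May return null when the manifest does not define Implementation-Version
    String version = HtmlEmail.class.getPackage().getImplementationVersion();
    System.out.println("Implementation-Version: " + version);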