Unit testing Java data class immutability with the Mutability Detector

In all our projects, we use data classes which, by definition, contain data (fields) but no (business) logic.

According to the best coding practices, a data class should preferably be immutable, because immutability means thread safety. The main reference here is Joshua Bloch's Effective Java; this post by Yegor Bugayenko is also a very interesting read.

An immutable class has several interesting properties:

  • it should not be sub-classable (i.e. it should be final, or it should have a static factory method and a private constructor)
  • all fields should be private (to prevent direct access)
  • all fields should be written only once, at instance creation time (i.e. they should be final and have no setters)
  • all fields of mutable types (like java.util.Date) should be protected from write access by reference from clients, typically with defensive copies

An example of an immutable class is the following:

    public final class ImmutableBean {

      private final String aStr;
      private final int anInt;

      public ImmutableBean(String aStr, int anInt) {
        this.aStr = aStr;
        this.anInt = anInt;
      }

      public String getAStr() {
        return aStr;
      }

      public int getAnInt() {
        return anInt;
      }
    }
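
The example above has only fields of immutable types. For a field of a mutable type such as java.util.Date (the fourth point in the list above), the usual technique is defensive copying, both in the constructor and in the getter. A minimal sketch (the class is illustrative, not part of the sample project):

    import java.util.Date;

    public final class ImmutableEvent {

      private final String name;
      private final Date when;

      public ImmutableEvent(String name, Date when) {
        this.name = name;
        this.when = new Date(when.getTime()); // defensive copy on the way in
      }

      public String getName() {
        return name;
      }

      public Date getWhen() {
        return new Date(when.getTime()); // defensive copy on the way out
      }
    }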

Note: as is frequent in Java, there is a lot of boilerplate code which hides the immutability definition.

Libraries like Project Lombok make our lives easier: we can use the @Value annotation to define an immutable class as follows:

    @Value
    public class LombokImmutableBean {
        String aStr;
        int anInt;
    }

which is a lot more readable.

Should we (unit) test a class to check its immutability?

In a perfect world, the answer is no.

With the help of our preferred IDE's automatic code generation features, or with libraries like Lombok, it is not difficult to make a class immutable.

But in the real world, human errors can happen when we create the class, or when we (or maybe a junior member of the team) modify the class later on. What happens if a new field is added without final and a setter is generated with the IDE code generator? The class is no longer immutable.

It is important to guarantee that the class is and remains immutable throughout the whole project lifetime.

And with the help of the Mutability Detector we can easily create a test to check the immutability status of a class.

As usual, Maven/Gradle dependencies can be found on Maven Central.
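
For reference, the Maven dependency looks roughly like this (the coordinates are from memory and the version is a placeholder property, so double-check both on Maven Central):

        <dependency>
            <groupId>org.mutabilitydetector</groupId>
            <artifactId>MutabilityDetector</artifactId>
            <version>${mutability.detector.version}</version>
            <scope>test</scope>
        </dependency>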

To test our ImmutableBean we can create the following jUnit test class:

    import org.junit.Test;
    import static org.mutabilitydetector.unittesting.MutabilityAssert.assertImmutable;

    public class ImmutableBeanTest {

      @Test
      public void testClassIsImmutable() {
        assertImmutable(ImmutableBean.class);
      }
    }

the test will fail if the class is not immutable.

For example, if a field is not final and it has a setter method, the test fails and the error message is very descriptive:

org.mutabilitydetector.unittesting.MutabilityAssertionError: 
Expected: it.gualtierotesta.testsolutions.general.beans.ImmutableBean to be IMMUTABLE
 but: it.gualtierotesta.testsolutions.general.beans.ImmutableBean is actually NOT_IMMUTABLE
Reasons:
    Field is not final, if shared across threads the Java Memory Model will not guarantee it is initialised before it is read. 
        [Field: aStr, Class: it.gualtierotesta.testsolutions.general.beans.ImmutableBean]
    Field [aStr] can be reassigned within method [setaStr] 
        [Field: aStr, Class: it.gualtierotesta.testsolutions.general.beans.ImmutableBean]

The complete project can be found on my Test Solutions gallery project on GitHub. See module general.

The approach I suggest is to use Lombok without any immutability test. If Lombok cannot be used (for example in a legacy project), use the Mutability Detector to assert that the class is really immutable.

The 5 Java logging rules

Logging is a critical factor that should always be taken into account during software development.

When something bad happens in production, the log files are usually the starting point of our fault analysis. And, often, they are the only information in our hands to understand what has happened and what the root cause of the problem is.

It is therefore very important to have the required information logged properly.

The following five logging rules are a way to check and, possibly, improve how we handle logging in our code.

Please note that we will not discuss how to configure a logging engine, nor will we compare logging engines with each other.

Rule 1. Logging is for readers

The logging messages should be meaningful to whoever will read the log files, not only to whoever wrote the (logging) code.

It seems a very obvious rule but it is often violated.

For example, let's consider a log message like the following:

ERROR: Save failure - SQLException .....

Saving what? This message could mean something to the developer who wrote it, but it is completely useless for the poor guy who is looking into the production problem.

A much better message is:

ERROR: Save failure - Entity=Person, Data=[id=123 surname="Mario"] - SQLException....

which explains what we wanted to save (here a Person, a JPA entity) and the relevant contents of the Person instance. Please note the word relevant, as opposed to the word all: we should not clutter log files with useless info like a complete dump of all entity fields. The entity name and its logical keys are usually enough to identify a record in a table.
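
As a sketch of how such a message could be produced with SLF4J (the class and method below are illustrative; in real code the id and surname would come from the entity being saved):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class SaveLoggingExample {

        private static final Logger LOGGER = LoggerFactory.getLogger(SaveLoggingExample.class);

        // Entity name + logical key: enough for the reader to find the record,
        // without dumping every field of the entity.
        void logSaveFailure(long personId, String surname, Exception cause) {
            LOGGER.error("Save failure - Entity=Person, Data=[id={} surname={}]",
                    personId, surname, cause);
        }
    }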

Rule 2. Match the logging levels with the execution environment

All logging facades and engines available in the Java ecosystem have the concept of logging level (ERROR, INFO…), with the possibility to filter out messages whose level is too low.

For example, java.util.logging uses the following levels: SEVERE, WARNING, INFO, FINE, FINER, FINEST (plus CONFIG and OFF).

On the contrary, the two most popular logging facades, Apache Commons Logging and SLF4J, prefer the following levels: FATAL (Commons Logging only), ERROR, WARN, INFO, DEBUG, TRACE.

The logging level filtering should depend on which stage of development your code is in: the logging level in production should not be the same as in the test/integration environments.

Moreover, the logging level should also depend on the code owner. In general, our own application code should have more detailed logging than any third party library we are using. There is not much point in seeing, for example, Apache Commons debug messages in our log files.

I usually configure the logging as follows:

  • Production: INFO level for my code and WARN for third party libraries.
  • Test / Integration: DEBUG level for my code and WARN (or INFO if needed) for third party libraries.
  • Development: whatever makes sense
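
Engine configuration is outside the scope of this post, but purely as an illustration of the production setup above, a LOG4J properties sketch could look like the following (the package name is a placeholder):

    # Third party libraries inherit the root level (WARN)
    log4j.rootLogger=WARN, CONSOLE
    # Our own application code is more verbose (INFO)
    log4j.logger.com.mycompany.myapp=INFO

    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d %-5p %c - %m%n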

Note: I personally discourage the use of the TRACE/FINEST level (and I'm not alone; see for example here). I don't see a big difference between DEBUG and TRACE, and it is usually difficult for young team members to decide which one, DEBUG or TRACE, to use. Following the KISS principle, I suggest using the ERROR, WARN, INFO and DEBUG levels only.

Rule 3. Remove coding help logging before the commit.

While coding, we usually add logging messages, using the logger or System.out, to better understand what is happening in our application during execution/debugging sessions.

Something like:

    void aMethod(String aParam) {
        LOGGER.debug("Enter in aMethod");
        if ("no".equals(aParam)) {
            LOGGER.debug("User says no");
            // ...
        }
    }

The main purpose of these messages is to trace the application behaviour by showing which methods are invoked and by dumping internal variables and method parameter values. Quite popular among non-TDD devotees.

Unfortunately, these messages usually do not have much meaning once the code has been released (to test and then to production).

So this rule simply says: once you have finished developing, remove all temporary and unnecessary logging messages just before committing the code to the SCM system (Git, SVN…) in use.

This rule does not require removing all DEBUG messages, but only the ones that have no meaning once the application is completed and released; in other words, once we are reasonably sure that the application is working properly.

Rule 4: Check log level before logging DEBUG messages

According to Rule 2, in production logs we will show ERROR, WARN and INFO messages only, but our code can contain many DEBUG messages which should not affect the production execution.

Every time you want to log a DEBUG message (all the ones which remain after Rule 3), add in front of it a check that DEBUG logging is enabled:

    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug(...);
    }

This prevents your code from building the log message and calling the logger when DEBUG is disabled. It is all about efficiency of the program execution in production.

Rule 5: Know your logger

How we use the logger methods can have a significant cost:

  • to build the message string
  • to collect the data to be included in the message string

We should review the javadoc of the selected logging facade/engine and understand the most efficient way to use its logger.

For example, we could create a message like this:

    LOGGER.info("Person name is " + person.getName());

which creates unnecessary String instances.

Using SLF4J, the correct use is:

    LOGGER.info("Person name is {}", person.getName());

where the format string is constant and the final message is built only if logging is enabled. See here for more details.

Tutorial: logging during tests

Logging is a popular solution to show what the software is doing while it is running.

But what happens to the logging when we are unit testing our application with jUnit/TestNG?

During automated test execution, we are usually not interested in seeing the logging messages because our main interest is the test results.

It would be nice to be able to disable logging messages during standard automated tests.

On the contrary, there are some cases where logging messages can be useful at test time. A typical example is when we are writing missing tests for some legacy code we do not want to touch before having good test coverage in place. In this case, logging messages on the console can help us understand the code and how it works.

So we can identify three use cases:

  1. Running mode, when the application is executed: the logging is enabled and configured as required by the application
  2. Test execution mode, when the automated tests are executed all together: the logging messages should be disabled
  3. Test creation mode, when we are creating new tests: the logging messages are useful and it would be nice to have them on the console

Let's see an example based on Maven and SLF4J, the popular logging facade.

Complete project can be found here.

A typical SLF4J configuration in the project pom.xml is the following:

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>${slf4j.version}</version>
            <scope>runtime</scope>
        </dependency>

The slf4j-api dependency is the main SLF4J library, while the second one, slf4j-log4j12, references one of the possible logging engines (LOG4J) that can work below SLF4J.

This is the running mode configuration. In this example, the project resources will contain a LOG4J properties file which dictates what, how and where LOG4J should log.

The same situation applies when we are using a different logging engine like java.util.logging (JDK) or Logback. See the SLF4J manual for more details.

In Test execution mode we do not want logging, so we can simply add the following test-scoped dependency:

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-nop</artifactId>
            <version>${slf4j.version}</version>
            <scope>test</scope>
        </dependency>

The NOP Logger (slf4j-nop) simply discards all logging.

Important: the dependencies order in the pom.xml is significant. Put the slf4j-nop dependency just after the slf4j-api dependency so it will be used during the tests even if there is another logging engine in the dependencies.

When there are multiple logging engine dependencies in the pom.xml, SLF4J will anyway show a message like the following:

    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:.m2/repository/org/slf4j/slf4j-nop/1.7.12/slf4j-nop-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:.m2/repository/org/slf4j/slf4j-log4j12/1.7.12/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]

To summarize, a complete configuration example is the following:

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-nop</artifactId>
            <version>${slf4j.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>${slf4j.version}</version>
            <scope>runtime</scope>
        </dependency>

Again, the last dependency should be adjusted to match the desired logging engine.

As I mentioned before, logging can be useful when creating new tests. In this case (Test creation mode), we can temporarily replace slf4j-nop with the slf4j-simple dependency, which enables the SLF4J Simple logger.
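
In practice, the test-scope dependency becomes the following (same pattern as the snippets above):

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>${slf4j.version}</version>
            <scope>test</scope>
        </dependency>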

The logging messages will now be shown in the console window during test execution, as System.err messages. No configuration file is needed.

By default, the Simple logger does not log DEBUG messages. Standard logging level is INFO.

You can customize the Simple logger behaviour using the system properties documented here.
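
For example, the default level can be raised to DEBUG for a single run directly from the Maven command line (Surefire normally propagates such system properties to the test JVM):

    mvn test -Dorg.slf4j.simpleLogger.defaultLogLevel=DEBUG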

A smart way to define the Simple logger configuration is to use the Surefire plugin configuration section (http://maven.apache.org/surefire/maven-surefire-plugin/examples/system-properties.html):

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.18.1</version>
                <configuration>
                    <systemPropertyVariables>
                        <org.slf4j.simpleLogger.defaultLogLevel>DEBUG</org.slf4j.simpleLogger.defaultLogLevel>
                        <org.slf4j.simpleLogger.showDateTime>true</org.slf4j.simpleLogger.showDateTime>
                    </systemPropertyVariables>
                </configuration>
            </plugin>
        </plugins>
    </build>

In the systemPropertyVariables section we can create tags named after the Simple logger properties. In the above example, the DEBUG level and timestamp logging are enabled.

Tutorial: Java abstract classes testing

In Java, abstract classes are used to define the interface of a class, with a list of (abstract) methods to be implemented by the extending classes; basically what is also possible with “real” interfaces, the ones defined with the interface keyword.

One significant difference between abstract classes and interfaces is that abstract classes can also have fields and non abstract methods which contain logic.

Abstract classes with code (in non abstract methods) are common in some design patterns implementations and in class hierarchy based frameworks. This is a typical solution to share code inside a class family.

Of course this code should be tested but abstract classes cannot be instantiated.

In this tutorial, we will analyze several cases and the possible alternative solutions to test abstract classes with non abstract methods. Let's start with the libraries we will use in our tests (all Java 6 source level).

A few important remarks:

  1. Some solutions are based on Mockito version 1.10.12 or above, but these versions are not compatible with the latest PowerMock version (1.6.1, see here).
    So in the project we will use two different library sets:

    1. Classic: jUnit 4.12, Mockito 1.10.8, PowerMock 1.6.1
    2. Latest: jUnit 4.12, Mockito 1.10.19 (no PowerMock)
  2. All examples are based on the “classic” combination except a few, marked with a special comment. In the project pom.xml, both dependency sets are defined but one is commented out. Please note that, due to the differences between the combinations, some tests cannot be compiled with both combinations.
  3. Complete project can be found at https://github.com/gualtierotesta/test-solutions
  4. The examples are not real-world code. The code has been kept to a minimum in order to better show the adopted testing strategy.
  5. A basic knowledge of Java unit testing and of the jUnit framework is needed for a more complete understanding of the examples. Mockito and PowerMock knowledge is not required.
  6. Also have a look at the Test Glossary for the meaning of several test terms used in this post.
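
For reference, the “classic” combination roughly corresponds to the following Maven dependencies (a sketch; the project pom.xml remains the authoritative source for the exact artifacts and versions):

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-core</artifactId>
            <version>1.10.8</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-module-junit4</artifactId>
            <version>1.6.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-api-mockito</artifactId>
            <version>1.6.1</version>
            <scope>test</scope>
        </dependency>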

CASE 1: independent method

This is the simplest case: the abstract class contains, in addition to abstract methods, a public non abstract method (named “methodToBeTested” in our examples) with no internal dependencies: it does not call any other method in its class.

    public abstract class AbstractCase1 {

        public abstract int myAbstractMethod();

        public String methodToBeTested() {
            return "CASE-1";
        }
    }

We want to test the logic of the method “methodToBeTested()”. We have three possible testing solutions:

Solution 1.A Using a concrete class

We create a concrete class which extends the abstract class and test the method from this class.

Our test class body:

     private class ConcreteCase1 extends AbstractCase1 {
        @Override
        public int myAbstractMethod() {
            return 0;  //Dummy implementation
        }
    }

    @Test
    public void testUsingConcreteClass() {
        // given
        ConcreteCase1 sut = new ConcreteCase1();
        // when
        String result = sut.methodToBeTested();
        // then
        Assert.assertEquals("CASE-1", result);
    }

Explanation:

  • We define a concrete implementation of the abstract class where all abstract methods have a dummy implementation; they are not used, so we don't care about their behavior.
  • In the test, we create an instance named “sut” (system under test) of the concrete class. Through this instance we have free access to the method we want to test.

The only drawback of this solution is the need to create the concrete class and to provide a dummy implementation for all abstract methods.

The given, when and then comments are used to separate the different test phases: preparation, execution and results analysis. See also http://martinfowler.com/bliki/GivenWhenThen.html

Solution 1.B Using a Mockito spy (Mockito 1.10.12+ required)

We can use Mockito to create a spy instance of the abstract class. A spy is a class instance with standard behavior which can nevertheless be controlled by Mockito; for example, we can ask Mockito to change a method's normal return value, bypassing the value calculated by the method body. See also the Testing Glossary.

In our case, we use a Mockito spy only because it lets us create an abstract class instance, which is normally not possible. This is not a typical Mockito spy use case.

    @Test
    public void testUsingMockitoSpy() {
        // given
        AbstractCase1 sut = Mockito.spy(AbstractCase1.class);
        // when
        String result = sut.methodToBeTested();
        // then
        Assert.assertEquals("CASE-1", result);
    }

Explanation:

  • We create the spy instance using the Mockito.spy(class) method.
  • We then invoke the method to be tested on the spy.

The main drawback of this solution is that Mockito.spy() will invoke the abstract class constructor during the spy instance creation. In some cases the constructor uses external dependencies which can be an obstacle to our unit test execution. These kinds of dependencies are usually called “test impediments”.

Solution 1.C Using Mockito mock

If we cannot use the latest Mockito version, or if the abstract class constructor contains test impediments, we can use Mockito to create a mock.

In general, mocking is used to “empty” a class of its logic: a mocked instance does not have any code inside its methods. In our case, we create a mock and then ask Mockito to restore the code inside the method we want to test.

This is not a very clean solution but it works.

    @Test
    public void testUsingMockitoMock() {
        // given
        AbstractCase1 sut = Mockito.mock(AbstractCase1.class);
        Mockito.doCallRealMethod().when(sut).methodToBeTested();
        // when
        String result = sut.methodToBeTested();
        // then
        Assert.assertEquals("CASE-1", result);
    }

Explanation:

  • We create the mocked instance using the Mockito.mock(class) method.
  • We then ask Mockito to use the real (= original) code when the method is invoked.

CASE 2: independent method with test impediment

In this second use case, the method we want to test calls a private method which contains a test impediment. We cannot simply execute the method to be tested because first we need to disable the test impediment method.

  public abstract class AbstractCase2 {

    public abstract int myAbstractMethod();

    public String methodToBeTested() {
        return prefix() + "2";
    }

    // Test impediment
    private String prefix() {
        return "CASE-";
    }
}

In our example, the method to be tested calls the private method “prefix()”. Note: the method body in the example is NOT a test impediment as it just returns a string. In the real world, such a method performs some time and/or resource consuming tasks which should be avoided in our unit tests.

We want to “remove” the prefix() standard behavior, but to do this we cannot create a concrete class, because private methods cannot be overridden. We cannot even use Mockito, because it cannot control private (and, by the way, also static) methods, so we will use its stronger companion: PowerMock.

Solution 2.A Using PowerMock
The test class:

    @RunWith(PowerMockRunner.class)
    @PrepareForTest(AbstractCase2.class)
    public class AbstractCase2Test {

        @Test
        public void testWithPowerMock() throws Exception {
            // given
            AbstractCase2 sut = PowerMockito.mock(AbstractCase2.class);
            PowerMockito.doCallRealMethod().when(sut).methodToBeTested();
            PowerMockito.doReturn("TESTCASE-").when(sut, "prefix");
            // when
            String result = sut.methodToBeTested();
            // then
            Assert.assertEquals("TESTCASE-2", result);
        }
    }

Explanation:

  • The @RunWith annotation defines PowerMock as the runner for the test.
  • The @PrepareForTest(class) annotation asks PowerMock to prepare the class for later processing.
  • We create a mock using PowerMockito.mock() which, in turn, calls the Mockito mock() method.
  • As we did in the first use case, we ask PowerMock to re-enable the body of the method to be tested.
  • The key element of this test is the stubbing of the private method: we ask PowerMock to stub its return value so we can test methodToBeTested() without test impediments. Mockito alone cannot do this stubbing. Please note that the name of the private method, prefix, is passed to PowerMock as a string because the method is not accessible from outside. PowerMock will use reflection to find it.

CASE 3: method which accesses instance fields

Abstract classes can have an internal state implemented with class fields. The value of the fields can be significant for the test, so we need a way to control them in order to define the initial test context (the “test fixture”).

If the field is public (uh, not a very nice OO solution) or protected, it can be freely accessed (and modified) from our test class. But if it is private, we have few options to control it.

Let's start with the example class:

  public abstract class AbstractCase3 {

    protected boolean type = false;
    private int count;

    public abstract int myAbstractMethod();

    public String methodToBeTested() {
        String res;
        count += 1;
        if (count > 10) {
            res = "Too many times";
        } else {
            if (type) {
                res = "Type is true";
            } else {
                res = "Type is false";
            }
        }
        return res;
    }
}

Two class fields, type and count, control the methodToBeTested() behavior. Note: this is just an example so forgive the meaningless logic.

Solution 3.A Using Mockito mock

This solution is identical to solution 1.C: we create a mock and then ask Mockito to re-enable the method body.

    @Test
    public void testTrueTypeWithCountLessThen10() {
        // given
        AbstractCase3 sut = Mockito.mock(AbstractCase3.class);
        Mockito.doCallRealMethod().when(sut).methodToBeTested();
        sut.type = true;
        // when
        String result = sut.methodToBeTested();
        // then
        Assert.assertEquals("Type is true", result);
    } 

Explanation:

  • We create the mocked instance using the Mockito.mock(class) method.
  • We ask Mockito to use the real (= original) code when the method is invoked.
  • We set the type field to true so we can test the method in this specific condition.

As shown, with this kind of solution we cannot control the value of the private count field.

Solution 3.B Using Mockito mock and a setter

Like solution 3.A, but we add a setter for the count field to the abstract class (not to the test class!) so we can control its value from the test class.

In the abstract class code we add the following lines:

    // Only for test
    void setCount(int pAmount) {
        this.count = pAmount;
    }

As in the previous solution, we create a mock and ask Mockito to re-enable the method body; this time we also re-enable the setter:

    @Test
    public void testCountGreaterThen10WithSetter() {
        // given
        AbstractCase3 sut = Mockito.mock(AbstractCase3.class);
        Mockito.doCallRealMethod().when(sut).methodToBeTested();
        Mockito.doCallRealMethod().when(sut).setCount(Mockito.anyInt());
        sut.setCount(15);
        // when
        String result = sut.methodToBeTested();
        // then
        Assert.assertEquals("Too many times", result);
    } 

Explanation:

  • We create the mocked instance using the Mockito.mock(class) method.
  • We ask Mockito to use the real (= original) code when the method is invoked, and also when the setter is invoked. Note the Mockito.anyInt() call, which defines for which argument values (all of them, in this specific case) the real method body should be used.
  • We call the count setter with a value, 15, above the threshold defined inside the method (10), so we can test this code path.

Of course we could also change type field value like we did in the previous solution.

Solution 3.C Using PowerMockito

If we cannot add the setter, for example because the abstract class source code cannot be touched, we can use PowerMock to change the class fields.

   
    @Test
    public void testWithPowerMock() {
        // given
        AbstractCase3 sut = PowerMockito.mock(AbstractCase3.class);
        PowerMockito.doCallRealMethod().when(sut).methodToBeTested();
        Whitebox.setInternalState(sut, "count", 15);
        // when
        String result = sut.methodToBeTested();
        // then
        Assert.assertEquals("Too many times", result);
    }

Explanation:

  • The PowerMock Whitebox class uses reflection to control internal class contents. In our case we set the value of the count field to 15; note that the field name (the second argument) is passed as a string.

Using Whitebox is not very common and it should be considered for very untestable code only.

CASE 4: method which calls abstract methods

This is the typical case, for example, of the Template Method design pattern (http://en.wikipedia.org/wiki/Template_method_pattern), but other design patterns and frameworks use this solution as well. The non abstract method defines the global flow, while the abstract method can be implemented in different ways to customize the behavior.

   public abstract class AbstractCase4 {

    public abstract int myAbstractMethod();

    public String methodToBeTested() {
        int val = myAbstractMethod();
        return "CASE-" + val;
    }
}

Inside methodToBeTested(), the abstract method is invoked.

To test this code, we have three different solutions: extending the abstract class, using Mockito to create a mock or a spy.

Solution 4.A Using a concrete class
We can extend the abstract class, giving a convenient (for testing purposes) behavior to the abstract method:

   
    private class ConcreteCase4 extends AbstractCase4 {
        @Override
        public int myAbstractMethod() {
            return 4;  //Dummy implementation
        }
    }

    @Test
    public void testUsingConcreteClass() {
        // given
        AbstractCase4Test.ConcreteCase4 sut = new AbstractCase4Test.ConcreteCase4();
        // when
        String result = sut.methodToBeTested();
        // then
        Assert.assertEquals("CASE-4", result);
    }

Explanation:

  • ConcreteCase4 extends AbstractCase4 and gives a convenient implementation of the abstract method.

Solution 4.B Using a mock with Mockito
We create a mock and we stub the abstract method:

    @Test
    public void testUsingMockitoMock() {
        // given
        AbstractCase4 sut = Mockito.mock(AbstractCase4.class);
        Mockito.doCallRealMethod().when(sut).methodToBeTested();
        Mockito.doReturn(4).when(sut).myAbstractMethod();
        // when
        String result = sut.methodToBeTested();
        // then
        Assert.assertEquals("CASE-4", result);
    }

Explanation:

  • The abstract method is stubbed with the return value we prefer for the test. Please note that, thanks to Mockito, we are stubbing a method which does not have any implementation (because it is abstract).

Solution 4.C Using a spy with Mockito (Mockito 1.10.12+ required)
Very similar to the previous use case:

   
    @Test
    public void testUsingMockitoSpy() {
        // given
        AbstractCase4 sut = Mockito.spy(AbstractCase4.class);
        Mockito.doReturn(4).when(sut).myAbstractMethod();
        // when
        String result = sut.methodToBeTested();
        // then
        Assert.assertEquals("CASE-4", result);
    }

Explanation:

  • The abstract method is stubbed with the return value we prefer for the test. Please note that, thanks to Mockito, we are stubbing a method which does not have any implementation (because it is abstract).

CONCLUSIONS

Code in abstract classes can present a not-so-short list of use cases. Each of them can be tested using different techniques and different test support libraries.

In this tutorial we went through many examples which hopefully cover many real-world cases. If not, just let me know so I can extend this post.

 

Mockito is back!

Mockito is back with a new release (1.10.x), almost two years after the last official release (the well-known and loved 1.9.5).

Several 1.10 updates have come out after the 1.10.0 release at the end of September.

Mockito 1.10.x includes several improvements and cleanups. Particularly interesting are the possibility to create light-weight mocks and the BDD “then” support.

Release notes here.

Thanks to Szczepan Faber and the rest of the team for their wonderful job.

Review: Mockito Cookbook

Mockito Cookbook
Mockito Cookbook by Marcin Grzejszczak
My rating: 5 of 5 stars

Very good. This is a well written and very interesting read for everybody involved in Java testing. This book is not intended to be a primer on testing in general or on Mockito in particular: beginners should look for an introduction to automated testing before starting with Mockito Cookbook.

The author covers everything related to the Mockito library, and also several additional libraries and tools connected to Mockito, like, for example, PowerMock. The book starts with an introductory chapter on Mockito installation and basic usage with both JUnit and TestNG. The core of the book is chapters 2 to 7, where the author explains, as recipes, several techniques to test both easy and difficult to test code. The final chapters cover legacy code testing, testing with frameworks like Spring, and Mockito compared with other mocking libraries.

In all examples, sometimes a bit repetitive due to the recipe approach, the author also explains general testing and object oriented programming approaches and methodologies. This is of course limited in space, but there are plenty of references.

View all my reviews

My 5 favorite NetBeans IDE features (vs. Eclipse)

In my current project, Eclipse Kepler has been selected by the customer as the preferred IDE for all main tasks. Of course, I'm also using NetBeans 8.0 for some activities like, for example, writing tests for legacy code.

I had already used Eclipse some years ago and, at that time (2009), Eclipse Galileo was quite superior to NetBeans 6.x: faster, rich in code hints and warnings, full of useful plugins. So I was not really happy when the project moved to NetBeans (chosen because of its Swing support).

Now let me compare them again, Eclipse Kepler 4.3 vs NetBeans 8.0, on the same project (a big Ant-based web project) and on the same PC (Win7 64bit, 4 GB RAM).

Speed

My first big surprise. Five years ago, when I moved from Galileo to NetBeans, I really missed Eclipse's speed: coding, windowing, compilation, everything was significantly faster in Eclipse.

Now this speed advantage is gone. I don't have benchmarks to prove my feeling, but now my coding is visibly faster with NetBeans 8.0 than with Eclipse Kepler. If you install the FindBugs and PMD plugins, Eclipse becomes really slow.

Native testing support

Second surprise. Why, in 2014, does an IDE for Java EE developers not include full testing support?

I cannot imagine any real EE developer who does not need some kind of test support from the IDE.

In NetBeans there is complete native jUnit and TestNG support (templates, test runs, a jump-to-test option in the editor, support for both *Test and *IT files) which I find very useful and efficient.

Eclipse's native support is poor, but of course it can be extended via plugins. I'm using the MoreUnit plugin, which is quite nice and adds many missing features to Eclipse. See here for a small tutorial.

But even if you install a test support plugin like MoreUnit, an important Eclipse limitation remains: the possibility to have a directory for test files separated from the main src directory, which is an essential feature to let us handle deployments. If you don't believe me, have a look at this 2008 bug, still open: https://bugs.eclipse.org/bugs/show_bug.cgi?id=224708 and its related bugs.
Simply unbelievable.

Java Hints and FindBugs integration

While coding, it is important to have feedback from the IDE about the quality of the code and possible errors. The NetBeans Java hints are a very rich set of code checks which help me avoid stupid errors and keep the quality of my work high. In addition, NetBeans includes first class support for FindBugs which, again, helps me during coding sessions.

I've found the NetBeans code quality checks better than those available in Eclipse Kepler + the FindBugs plugin. There are more checks, and better control over how to, for example, disable a FindBugs warning on a specific line (because it is ok) using an annotation: just a click in NetBeans versus doing it by hand in Eclipse.

Coding support: templating, completion, interfaces…

I like the NetBeans coding support: code templates (a few letters + Tab to get the most used keywords and structures; see here for some examples), auto-completion (now also using sub-words), the ability to easily jump from an interface to its implementations, and the new indent guide lines.

Similar features are available in Eclipse using, for example, the Code Recommenders plugin, but I've found it quite slow on my machine so I had to uninstall it.

User Interface

The NetBeans user interface has improved a lot while keeping its strong point: it is easy to use. Windows and menus are easy to find, and the full screen mode is very useful on small screens. I really like the color coding setup (Norway Today), which makes code reading a pleasure, and the dark skins for working late in the evening.

On the contrary, the Eclipse interface remains somehow confusing and, in my personal opinion, perspectives and workspaces are simply a waste of time.