
How to deploy Spring Boot applications to OpenShift


Spring Boot is a framework which helps to get applications up and running as quickly as possible. OpenShift is a Platform as a Service (PaaS) product which makes it easy for developers to deploy applications. Putting both together feels like the natural thing to do. However, OpenShift (being a Red Hat product) focuses on JBoss applications. Since we know that application servers are dead, I’m going to show how to deploy Spring Boot applications directly to OpenShift without the need to deploy into an application server container. So this blog post is for all developers who struggle with deploying Spring Boot applications to OpenShift.

Spring Boot S2I image

Since there is no official Spring Boot S2I image, we have created our own. The codecentric/springboot-maven3-centos image is capable of building Maven-based projects. The resulting jar is simply started using the java -jar command. So this image isn’t really specific to Spring Boot, but can run any Maven build that produces a fat jar in the end. We’re currently working on an S2I image for Gradle-based builds. Furthermore, we’re planning to add detection of the Maven wrapper to the builder image, since more and more projects are starting to use it.

Example

Let’s get our hands dirty and deploy a Spring Boot application to our OpenShift installation hosted at https://your-openshift-installation.com. First of all, we create a new project using the OpenShift CLI:

$ oc new-project springboot-sample-app
Now using project "springboot-sample-app" on server "https://your-openshift-installation.com".

You can add applications to this project with the 'new-app' command. For example, try:

    $ oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git

to build a new hello-world application in Ruby.

Next we use the codecentric/springboot-maven3-centos builder image to create a new application. In this example we’re using a very small sample application that doesn’t do much.

$ oc new-app codecentric/springboot-maven3-centos~https://github.com/codecentric/springboot-sample-app.git
--> Found Docker image a118da0 (11 hours old) from Docker Hub for "codecentric/springboot-maven3-centos"
    * An image stream will be created as "springboot-maven3-centos:latest" that will track the source image
    * A source build using source code from https://github.com/codecentric/springboot-sample-app.git will be created
      * The resulting image will be pushed to image stream "springboot-sample-app:latest"
      * Every time "springboot-maven3-centos:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "springboot-sample-app"
    * Port 8080/tcp will be load balanced by service "springboot-sample-app"
--> Creating resources with label app=springboot-sample-app ...
    ImageStream "springboot-maven3-centos" created
    ImageStream "springboot-sample-app" created
    BuildConfig "springboot-sample-app" created
    DeploymentConfig "springboot-sample-app" created
    Service "springboot-sample-app" created
--> Success
    Build scheduled for "springboot-sample-app" - use the logs command to track its progress.
    Run 'oc status' to view your app.

Using the oc status command we can convince ourselves that everything is set up as we expect:

$ oc status
In project springboot-sample-app on server https://your-openshift-installation.com

svc/springboot-sample-app - 172.17.240.24:8080
  dc/springboot-sample-app deploys imagestreamtag/springboot-sample-app:latest <-
    bc/springboot-sample-app builds https://github.com/codecentric/springboot-sample-app.git with springboot-sample-app/springboot-maven3-centos:latest
    #1 deployment running for 12 seconds - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

With the new-app command we have created a Service for our sample app. This service is accessible only inside the OpenShift installation. To make it accessible to the outside world, we need to expose it via a new route:

$ oc expose service springboot-sample-app --hostname=springboot-sample-app.your-openshift-installation.com
route "springboot-sample-app" exposed

This is all you have to do to get a Spring Boot application deployed and exposed in an OpenShift environment. Note that you have to modify the hostname to fit your environment. The resulting deployment should look like the following in your OpenShift dashboard:

[Screenshot: OpenShift dashboard – the springboot-sample-app deployed to our OpenShift installation]

Conclusion

Deploying Spring Boot applications to OpenShift is a good solution for rapid application development. However, OpenShift does not provide the capability for building and running Spring Boot applications out of the box. In this blog post I showed how to use the codecentric/springboot-maven3-centos S2I builder image for deploying Spring Boot applications to OpenShift. With this you can get your next Spring Boot application up and running in no time.



Event Driven Microservices with Spring Cloud Stream


Lately I’ve been much into event driven architectures because I believe it’s the best approach for microservices, allowing for much more decoupled services than point-to-point communication. There are two main approaches for event driven communication:

  • Feed: Each application has a (synchronous) endpoint anybody may pull domain events from in a feed fashion.
  • Broker: There is a dedicated broker responsible for distributing the events, like Kafka.

Each approach has its up- and downsides. With a broker you have more infrastructure to handle, but you also have a central place where your events are stored. Feeds are not accessible when the producing application is down. Scaling is easier with a broker – what happens if you suddenly need to double your consuming applications because of load? Who subscribes to the feed? If both subscribe, events are processed twice. With a broker like Kafka you can easily create consumer groups, and each event is only processed by one application of this group. So we preferred the broker way, and we decided to use Kafka.
So far so good – but we were impatient. We wanted to learn about event driven architectures, we didn’t want to spend weeks fighting with Kafka. And there came Spring Cloud Stream to the rescue.

Yes, we spent a little time setting up our own little playground with docker-compose, including Kafka and Zookeeper of course, but also Spring Cloud Config, Spring Boot Admin and an integrated Continuous Delivery setup with Jenkins, Nexus and Sonar. You can find it here: https://github.com/codecentric/event-driven-microservices-platform. Then we thought that the tough part would come – connecting to and using Kafka. We stumbled over Spring Cloud Stream – and using Kafka was a matter of minutes.

Dependencies

You need to add one dependency to your pom:

	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-starter-stream-kafka</artifactId>
	</dependency>

As parent I use the spring-cloud-starter-parent in the most current version (at time of writing Brixton.RC1). It solves all the version management for me.

	<parent>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-starter-parent</artifactId>
		<version>Brixton.RC1</version>
	</parent>

When using Actuator, Spring Cloud Stream automatically adds a HealthIndicator for the Kafka binder, and a new actuator endpoint /channels with all the channels used in the application.

Producing events

In our sample application we produce one event every 10 seconds with a Poller.

@SpringBootApplication
@EnableBinding(Source.class)
public class EdmpSampleStreamApplication {
 
	public static void main(String[] args) {
		SpringApplication.run(EdmpSampleStreamApplication.class, args);
	}
 
	@Bean
	@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "10000", maxMessagesPerPoll = "1"))
	public MessageSource<TimeInfo> timerMessageSource() {
		return () -> MessageBuilder.withPayload(new TimeInfo(new Date().getTime()+"","Label")).build();
	}
 
	public static class TimeInfo{
 
		private String time;
		private String label;
 
		public TimeInfo(String time, String label) {
			super();
			this.time = time;
			this.label = label;
		}
 
		public String getTime() {
			return time;
		}
 
		public String getLabel() {
			return label;
		}
 
	}
 
}

When using @EnableBinding(Source.class) Spring Cloud Stream automatically creates a message channel with the name output which is used by the @InboundChannelAdapter. You may also autowire this message channel and write messages to it manually. Our application.properties looks like this:

spring.cloud.stream.bindings.output.destination=timerTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka

It basically says that we want to bind the output message channel to the Kafka timerTopic, and that we want to serialize the payload into JSON. Then we need to tell Spring Cloud Stream the host name where Kafka and Zookeeper are running – the defaults are localhost; in our case both run in one Docker container named kafka.
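As mentioned above, you can also autowire the bound channel and write messages to it yourself. Here is a quick sketch of what that could look like – the controller class and its /send mapping are made up for illustration, Source is org.springframework.cloud.stream.messaging.Source, MessageBuilder is the one already used above, and TimeInfo is the payload class defined above:

@RestController
public class ManualTimeInfoController {
 
	private final MessageChannel output;
 
	@Autowired
	public ManualTimeInfoController(Source source) {
		// the same "output" channel that is bound to timerTopic above
		this.output = source.output();
	}
 
	@RequestMapping(value = "/send", method = RequestMethod.POST)
	public void send() {
		// serialized to JSON by the content-type binding and published to Kafka
		output.send(MessageBuilder
				.withPayload(new TimeInfo(new Date().getTime() + "", "manual"))
				.build());
	}
}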

Consuming events

Our sample application for consuming events looks like this:

@SpringBootApplication
@EnableBinding(Sink.class)
public class EdmpSampleStreamSinkApplication {
 
	private static Logger logger = LoggerFactory.getLogger(EdmpSampleStreamSinkApplication.class);
 
	public static void main(String[] args) {
		SpringApplication.run(EdmpSampleStreamSinkApplication.class, args);
	}
 
	@StreamListener(Sink.INPUT)
	public void loggerSink(SinkTimeInfo sinkTimeInfo) {
		logger.info("Received: " + sinkTimeInfo.toString());
	}
 
	public static class SinkTimeInfo{
 
		private String time;
		private String label;
 
		public String getTime() {
			return time;
		}
 
		public void setTime(String time) {
			this.time = time;
		}
 
	public void setLabel(String label) {
			this.label = label;
		}
 
		public String getLabel() {
			return label;
		}
 
		@Override
		public String toString() {
			return "SinkTimeInfo [time=" + time + ", label=" + label + "]";
		}
 
	}
 
}

When using @EnableBinding(Sink.class) Spring Cloud Stream automatically creates a message channel with the name input which is used by the @StreamListener above. Our application.properties looks like this:

spring.cloud.stream.bindings.input.destination=timerTopic
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.group=timerGroup
spring.cloud.stream.kafka.bindings.input.consumer.resetOffsets=true
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka

We see the binding of input to timerTopic, then we see the content-type we expect. Note that we don’t share the class with the producing application – we just deserialize the content in a class of our own.
Then we specify the consumer group this application belongs to – so if another instance of this application is deployed, events are distributed among all instances.
For development purposes we set resetOffsets of the channel input to true, which means that on a new deployment all events are processed again because the Kafka offset is reset. It could also be a strategy to do that on every startup – having all the state just in memory – and in Kafka. Then, of course, consumer groups don’t make sense, and processing the events should not create other events – consuming the events is just used to build up an internal state.
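To illustrate that last idea, here is a hedged sketch of such a purely in-memory projection – the class and its map are made up, it would replace the logging listener shown above, and with resetOffsets=true its state would be rebuilt from the topic on every startup:

@Component
public class TimeInfoProjection {
 
	// in-memory state only – rebuilt from Kafka on every deployment/startup
	private final Map<String, SinkTimeInfo> latestByLabel = new ConcurrentHashMap<>();
 
	@StreamListener(Sink.INPUT)
	public void on(SinkTimeInfo event) {
		// consuming only updates internal state, no new events are emitted
		latestByLabel.put(event.getLabel(), event);
	}
 
	public SinkTimeInfo latestFor(String label) {
		return latestByLabel.get(label);
	}
}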

Conclusion

What can I say? Spring Cloud Stream was really easy to use, and I will certainly do that in the future. If you want to try it out for yourself with a real Kafka, I can point you again to https://github.com/codecentric/event-driven-microservices-platform.
Install Docker Toolbox, then do this:

$ docker-machine create -d virtualbox --virtualbox-memory "6000" --virtualbox-disk-size "40000" default
$ eval "$(docker-machine env default)"
$ git clone git@github.com:codecentric/event-driven-microservices-platform.git
$ cd event-driven-microservices-platform
$ docker-compose up

Now get a coffee, have a chat with the colleagues, or surf around the internet while Docker is downloading the images. Then go to http://$(docker-machine ip default):18080/ and you should see something like this:
[Screenshot: Jenkins on the event-driven microservices platform]
Then go to Spring Boot Admin at http://$(docker-machine ip default):10001/ and you should see something like this:
[Screenshot: Spring Boot Admin on the event-driven microservices platform]
And if you take a look at the logs of edmp-sample-stream-sink you’ll see the events coming in.


Binding Configuration to JavaBeans in Spring Boot


It is quite easy to assign external configuration values to variables in Spring. The @Value annotation has been available in the Spring Framework for a long time now. With the introduction of @ConfigurationProperties in Spring Boot, a new way for binding configuration values to JavaBeans has been introduced. Depending on how configuration data is passed to an application, there are slight differences in how configuration is bound to JavaBeans. I have been very confused about this recently, so I’m going to explain said differences in this blog post.

Reading Configuration from different sources

There are many ways to specify configuration data in Spring Boot, and the framework defines an order for overriding configuration values from different sources. This way we can, for example, use profiles to define different environments for an application (like local, dev, qa and prod). Another option is to put local configuration into the application.properties file and provide configuration for the different environments via environment variables. Specifying configuration via environment variables is the recommended way for scalable applications according to the twelve-factor methodology. For this reason I’m going to show how environment variables override configuration data from the application.properties file.
In the remainder of this blog post, I’m going to use a small example project to analyse the binding of configuration data. I’m assuming that you’re familiar with Spring Boot in general and the @Value and @ConfigurationProperties annotations.

The spring-boot-configuration-example project

There are differences in the way Spring binds configuration from environment variables and from property files to JavaBeans. Furthermore, the result of the binding depends on whether you use the @Value annotation or @ConfigurationProperties. To demonstrate these differences, I have created the example project codecentric/spring-boot-configuration-example. The project allows you to try this for yourself and to play around with different settings.

The project provides two configuration classes for reading configuration data: AnnotationConfiguration, which uses the @Value annotation, and TypeSafeConfiguration, which uses the @ConfigurationProperties annotation. AnnotationConfiguration is defined with the following Spring Expression Language (SpEL) expressions to read configuration data:

@Component
public class AnnotationConfiguration {
    @Value("${example.property-name:not set}")
    public String propertyDashName;
 
    @Value("${example.property.name:not set}")
    public String propertyPointName;
 
    @Value("${example.propertyName:not set}")
    public String propertyName;
 
    @Value("${example.propertyname:not set}")
    public String propertyname;
 
    @Value("${example.property_name:not set}")
    public String property_name;
}

TypeSafeConfiguration on the other hand uses the configuration prefix “example” and has the following fields:

@Component
@ConfigurationProperties(prefix = "example")
public class TypeSafeConfiguration {
    private String propertyName;
 
    private String propertyname;
 
    private String property_name;
}

Note that we cannot implement an equivalent for example.property-name and example.property.name in TypeSafeConfiguration, since neither is a valid field name in Java. If you want to inspect how Spring binds configuration values to these classes, all you need to do is execute the run.sh script (hope you’re using Unix/Linux). It will print out what it is doing and how the configuration affects the state of the application. The following steps are executed in order:

  1. Run the application without any configuration.
  2. Run the application with environment variables set.
  3. Run the application with application.properties.
  4. Run the application with both, environment variables and application.properties.

When running with environment variables (Step 2 and 4), the script will set the following environment variables:

export EXAMPLE_PROPERTY_NAME=EXAMPLE_PROPERTY_NAME
export EXAMPLE_PROPERTYNAME=EXAMPLE_PROPERTYNAME

Note the missing underscore in the second case. It will be interesting to see how Spring maps these to our configuration classes. For example, I cannot tell which of the SpEL expressions in AnnotationConfiguration will get the value of EXAMPLE_PROPERTYNAME. When the run script executes the application with a properties file, the script will write the following to src/main/resources/application.properties:

example.property.name=example.property.name
example.property-name=example.property-name
example.property_name=example.property_name
example.propertyname=example.propertyname
example.propertyName=example.propertyName

The mapping between properties and SpEL expressions is pretty obvious, since there is a one-to-one mapping for each SpEL expression. However, I have no idea which values will get bound to TypeSafeConfiguration. So let’s run this and have a look at it! You can run it yourself like this:

git clone https://github.com/codecentric/spring-boot-configuration-example
cd spring-boot-configuration-example
./run.sh

In the remainder of this blog post, I’m going through the output of run.sh. If you are interested in the details, I have created this gist containing the output. Not surprisingly, all fields will be unset when running the application without configuration.

Reading Configuration from environment variables

Running the example with just environment variables set is a bit more interesting. The log shows the following bindings for TypeSafeConfiguration:

Field name       Configuration value
propertyName     EXAMPLE_PROPERTY_NAME
propertyname     EXAMPLE_PROPERTYNAME
property_name    null

… and for the SpEL expressions in AnnotationConfiguration:

SpEL expression          Configuration value
example.property-name    not set
example.property.name    EXAMPLE_PROPERTY_NAME
example.propertyName     EXAMPLE_PROPERTYNAME
example.propertyname     EXAMPLE_PROPERTYNAME
example.property_name    EXAMPLE_PROPERTY_NAME

Looking at the first table we can see that TypeSafeConfiguration.propertyName is set to the value of EXAMPLE_PROPERTY_NAME, while TypeSafeConfiguration.propertyname is set to the value of EXAMPLE_PROPERTYNAME. TypeSafeConfiguration.property_name cannot be set by environment variables. The results for AnnotationConfiguration can be seen in the second table: example.property.name and example.property_name both get the value of EXAMPLE_PROPERTY_NAME, while example.propertyName and example.propertyname get the value of EXAMPLE_PROPERTYNAME. The SpEL expression example.property-name cannot be set by environment variables.

What is interesting about this? Looking only at the results for TypeSafeConfiguration, I would expect that the environment variable EXAMPLE_PROPERTY_NAME gets mapped to TypeSafeConfiguration.property_name. But instead the value of EXAMPLE_PROPERTYNAME is used. This feels especially confusing to me, when comparing it with the second table. Here the SpEL expression example.property_name will get the value of EXAMPLE_PROPERTY_NAME!

Another inconsistency is the handling of TypeSafeConfiguration.propertyName and TypeSafeConfiguration.propertyname compared to the handling of the SpEL expressions example.propertyName and example.propertyname. TypeSafeConfiguration.propertyName gets the value of EXAMPLE_PROPERTY_NAME while TypeSafeConfiguration.propertyname gets the value of EXAMPLE_PROPERTYNAME, but both SpEL expressions get the value of EXAMPLE_PROPERTYNAME.

The last observation we can make is that it is not possible to set SpEL expressions containing dashes via environment variables. This is particularly cumbersome since the recommended way for specifying keys in property files, according to the Spring Boot documentation, is to use dashes (see Table 24.1 in the Spring Boot documentation). Imagine a team building all of its configuration on dash-separated keys in properties files, only to notice that they cannot set these values using environment variables when deploying to production.

Reading Configuration from a properties file

The next part of the run.sh output shows how configuration from an application.properties file will be mapped to our configuration beans. Here’s a summary of the output for TypeSafeConfiguration:

Field name       Configuration value
propertyName     example.propertyName
propertyname     example.propertyname
property_name    example.property-name

… and for AnnotationConfiguration:

SpEL expression          Configuration value
example.property-name    example.property-name
example.property.name    example.property.name
example.propertyName     example.propertyName
example.propertyname     example.propertyname
example.property_name    example.property_name

The data in AnnotationConfiguration is exactly what we expected, so no need to talk about that. What’s really weird is that TypeSafeConfiguration.property_name is set to the value of example.property-name and not example.property_name. I have no idea why it behaves this way. Furthermore, we can see that it is possible to set all values, which was not possible when using only environment variables.

Mixing configuration from environment variables with configuration from properties files

The last thing to have a look at is how configuration is overridden when providing both application.properties and environment variables. Again, here is the result for TypeSafeConfiguration:

Field name       Configuration value
propertyName     EXAMPLE_PROPERTY_NAME
propertyname     EXAMPLE_PROPERTYNAME
property_name    example.property-name

… and for AnnotationConfiguration:

SpEL expression          Configuration value
example.property-name    example.property-name
example.property.name    EXAMPLE_PROPERTY_NAME
example.propertyName     EXAMPLE_PROPERTYNAME
example.propertyname     EXAMPLE_PROPERTYNAME
example.property_name    EXAMPLE_PROPERTY_NAME

Since environment variables have precedence over configuration from application.properties, everything that can be initialized from environment variables will be. Only TypeSafeConfiguration.property_name and the SpEL expression example.property-name will be set to the respective value from application.properties.

Conclusion

There are two ways to bind configuration data to JavaBeans in Spring Boot: type-safe binding via @ConfigurationProperties, and SpEL expressions via @Value. Convenient as it may be, the results can be confusing depending on whether configuration is set from environment variables or properties files. As a takeaway, my recommendations are:

  • Be consistent: don’t mix camel case, snake case, and the like when defining properties.
  • Don’t use dashes in property keys.
  • Don’t use underscores in field names.

This will save you a lot of headaches when working with configuration values in Spring Boot.


Reducing boilerplate code with Project Lombok


It’s not a secret that Java is quite verbose and will often require a developer to write significantly more code for the same task than other languages. To address this problem, we’ve mentioned a library called Lombok on the codecentric blog in the past – see here and here. In short, it’s a code generation library that provides a set of annotations you can use to drastically reduce boilerplate code in your applications. I’ve personally used it with great success on a number of occasions, and since the topic came up in my current project I wanted to elaborate on it a bit more and address a few problems I was confronted with. As we’ve covered the basics before, let me get right to a few specific features and topics that I find noteworthy on top of that.

Using @Builder

For some time now, Lombok provides an annotation for implementing the Builder pattern on your classes. Doing this manually is a good example of Java’s verbosity:

@Getter
@EqualsAndHashCode
@AllArgsConstructor
public class Person {
  private String firstname;
  private String lastname;
  private String email;
 
  public static Builder builder() {
    return new Builder();
  }
 
  public static class Builder {
 
    private String firstname;
    private String lastname;
    private String email;
 
    public Builder firstname(String firstname) {
      this.firstname = firstname;
      return this;
    }
 
    public Builder lastname(String lastname) {
      this.lastname = lastname;
      return this;
    }
 
    public Builder email(String email) {
      this.email = email;
      return this;
    }
 
    public Person build() {
      return new Person(firstname, lastname, email);
    }
  }
}

With every additional property this code will grow significantly. There are more sophisticated builder implementations that will, for example, guarantee that mandatory values are set during the construction of an object, but in my experience most implementations of the builder pattern look like my example above. Let’s see how Lombok helps:

@Getter
@EqualsAndHashCode
@AllArgsConstructor
@Builder
public class Person {
  private final String firstname;
  private final String lastname;
  private final String email;
}

That’s it! One line and you have the same implementation as shown before. There are some parameters that you can use to customize the generated builder. For example, @Builder(toBuilder=true) will generate a toBuilder() method that will copy the contents of an existing Person instance to a builder. That’s useful if you want to copy and change an object.
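A quick usage sketch (assuming @Builder(toBuilder = true) on the Person class above):

Person original = Person.builder()
    .firstname("Jane").lastname("Doe").email("jane@doe.org").build();
 
// copy all fields, then change only the email
Person changed = original.toBuilder().email("jane.doe@example.com").build();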

Other libraries have been doing builder generation before Lombok, but I know of none that integrate as smoothly. PojoBuilder – for example – will create separate class files in a folder that you have to add to your project’s classpath. In contrast, Lombok hooks into the compile phase and will change the abstract syntax tree of the target class itself.

As with anything, the example case above looks intriguing but once you start working seriously, you often encounter edge cases and all kinds of problems. Generally, my experience has been very positive, but when working with the @Builder pattern I actually had a few problems to solve.

@Builder and generics

When I first put @Builder on a generic class I was confronted with a compiler error.

@Builder
public class Response<T> {
  private T body;
}
 
Response<String> response = Response.builder().body("body").build();

The compiler complains about an incompatible assignment, as the result of the build process is Response<Object>. What’s required is a hint for the compiler when creating the builder: you’ll have to specify the requested type explicitly:

Response<String> response = Response.<String>builder().body("body").build();

@Builder and inheritance

Sometimes you use @Builder on a class that inherits from a parent class. Lombok will not consider fields from the superclass in the generated builder class. There’s a workaround, though. Normally, you use @Builder as a type annotation, but you can also use it on constructors and methods. What you can do in this case is create a constructor that takes all the arguments that are required for your class (including the ones for the superclass) and then place @Builder on the constructor.

@AllArgsConstructor
public class Parent {
  private String a;
}
 
public class Child extends Parent {
 
  private String b;
 
  @Builder
  public Child(String a, String b){
    super(a);
    this.b = b;
  }
}

You’ll get a complete builder and can use it like this:

Child.builder().a("testA").b("testB").build();

Lombok and constructor injection

In the context of dependency injection I like to use constructors to pass dependencies into objects: I find it unreasonable to create incomplete objects and to have dependencies set afterwards. In order to use constructor injection, you often have to be able to annotate a constructor. How do you do this if you have Lombok generate your constructors? It turns out, there is an experimental feature that can help you with this:

@AllArgsConstructor(onConstructor = @__(@Autowired) )
public class HelloLombok {
 
  public Dependency dependency;
}

Lombok will then add the provided annotation to the generated constructor. You’re right, the syntax looks a bit funny (see the small print at the bottom of the feature documentation for details). And because of the way it is implemented, Lombok makes it clear that this is experimental and might change or disappear in the future. If you can live with that, then it will enable you to combine Lombok and constructor injection (as well as a few other things). If not, you can always choose not to use Lombok for these constructors, of course.
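The same trick works with @RequiredArgsConstructor, which pairs nicely with final fields – a small sketch (the class name is made up):

@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class HelloLombokFinal {
 
  private final Dependency dependency;
}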

Integrating Lombok

Integrating Lombok into your project is quite easy: for one thing, you need to have Lombok on the project’s classpath in order to get a build working. But equally important is integration with your IDE. I’ve been using both Eclipse and IntelliJ when working with Lombok, but there are other integrations as well. Again, the Lombok website gives a good overview of what has to be done: for Eclipse, you run the Lombok jar as a Java application and tell it about the location of your Eclipse installation; for IntelliJ there’s a plugin that you can install via the plugin repository.

The best code that you can write is the code that you don’t write. Lombok is tremendously useful, it will help you to trim your codebase and focus on the important parts of your applications. I’ve been using it for a few years now and I have not experienced any real problems, so far. I recommend you try it yourself!


Spring Boot & Apache CXF – Testing SOAP Web Services


I promised to tackle further and more advanced topics relating to the interaction of Spring Boot and Apache CXF in my upcoming blog posts. So in the following we will take a look at testing SOAP web services. How do we test a web service from within a unit test? How do we build integration tests? And isn’t there something in between? OK, let’s get started!

Spring Boot & Apache CXF – Tutorial

Part 1: Spring Boot & Apache CXF – How to SOAP in 2016
Part 2: Spring Boot & Apache CXF – Testing SOAP web services

We started our journey in the first blog post of this series, in which we looked at the best way to wire Spring Boot and Apache CXF. We found out how to generate all necessary Java class files based on the WSDL and its bunch of imported XSDs, elegantly utilizing the JAX-WS Maven plugin. This way we don’t have to check generated artifacts into our version control system, and we’re always up-to-date regarding our API definition (“contract first”). We also learned how to configure Apache CXF through 100% XML-free Spring Java configuration and can now easily fire up a running SOAP endpoint.

And there’s a small update: the JAX-WS Maven plugin is back under MojoHaus governance. You can track the development progress on GitHub. Because of this we’ll use the more recent groupId org.codehaus.mojo instead of org.jvnet.jax-ws-commons in our Maven poms from now on.

But let’s finally start by giving the source folder src/test/ – which we have ignored so far – the attention it deserves and creating some tests. So far we haven’t written any of them, although we should, especially in this case. The first refactoring will come, and SOAP web services can become really complex, so having some good tests in place is essential.

Unit tests (aka yxzTest.class)

The following steps are, as usual, fully reproducible from the GitHub repository tutorial-soap-spring-boot-cxf. The corresponding project step4_test resides there as well.

Our Endpoint class, which we derived from the generated Service Endpoint Interface (SEI), is just a normal POJO, more precisely a Spring component. So there’s nothing new here. Just instantiate it with the new operator and write your unit tests at a whim.

Since the Endpoint itself shouldn’t contain functional business logic (it is, after all, somewhat “polluted” with infrastructure code), these things are delegated to another component, something called e.g. MyFancyServiceController. Now there’s no real point in testing our WebServiceEndpoint in a completely isolated way, i.e. according to pure testing principles. In most cases you definitely want to add a small piece of Spring and test some rather complex sequence.

To this end, we enhance our example from step 3 with a rudimentary “WeatherServiceController” and configure it as a Spring Bean in a separate ApplicationConfiguration. Through its only implemented method getCityForecastByZIP(ForecastRequest forecastRequest) our WeatherServiceController answers with a valid Weather service XSD compliant response – assisted by the GetCityForecastByZIPOutMapper, which is also new to our project. From our WeatherServiceEndpoint we access the injected WeatherServiceController, so that we finally have some running code we’ll be able to test. Keep in mind that this is only a very simple example implementation. We leave out many things you have to implement in real world projects like complete inbound and outbound transformation, functional plausibility checks, various backend calls, just to mention a few.
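In essence, the endpoint then boils down to pure delegation – roughly like the following sketch (the real classes in the step4_test project may look slightly different, and other SEI methods are omitted):

@Service
public class WeatherServiceEndpoint implements WeatherService {
 
    @Autowired
    private WeatherServiceController weatherServiceController;
 
    @Override
    public ForecastReturn getCityForecastByZIP(ForecastRequest forecastRequest) throws WeatherException {
        // infrastructure only: delegate straight to the functional controller
        return weatherServiceController.getCityForecastByZIP(forecastRequest);
    }
}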

Looking at our test class WeatherServiceTest.java, it seems to be implemented in a rather straightforward manner. We only need the two annotations @RunWith(SpringJUnit4ClassRunner.class) and @ContextConfiguration(classes=ApplicationTestConfiguration.class) to successfully initialize our Spring application context, which itself instantiates the two Spring beans WeatherServiceEndpoint & WeatherServiceController necessary for the test (configured in ApplicationTestConfiguration.java).

Inside our @Test annotated method, we create an appropriate request and call the corresponding method of our injected (via @Autowired) endpoint:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes=ApplicationTestConfiguration.class)
public class WeatherServiceTest {
 
    @Autowired
    private WeatherServiceEndpoint weatherServiceEndpoint;
 
    @Test
    public void getCityForecastByZIP() throws WeatherException {
        // Given
        ForecastRequest forecastRequest = generateDummyRequest();
 
        // When
        ForecastReturn forecastReturn = weatherServiceEndpoint.getCityForecastByZIP(forecastRequest);
 
        // Then
        assertNotNull(forecastReturn);
        // many asserts here
    	assertEquals("22%", forecastReturn.getForecastResult().getForecast().get(0).getProbabilityOfPrecipiation().getDaytime());
    }
}

If this test finishes in “green”, we know that our endpoint POJO is doing what it is supposed to do. And that’s everything we need to create our unit test.

Integration tests (aka yxzIntegrationTest.class)

Up to this point, there was nothing new regarding testing with Spring. But now it’s getting more interesting, I hope: how do we test the SOAP web services themselves?

Integration tests should really involve as many components as possible inside their execution phase. But because we call many backends inside those tests, the time to execute them quickly adds up – not to mention the execution of more than one integration test. Running those inside our normal build process would really slow down our development process. Therefore we should exclude them from being executed every time someone or something triggers a build – e.g. with the help of the Maven Surefire plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <excludes>
            <exclude>**/*IntegrationTest.java</exclude>
        </excludes>
    </configuration>
</plugin>

Having this plugin in place, our integration tests won’t run while something like mvn install or mvn package is executed. We are still able to call them manually inside our IDE (or as a background process triggered by something like infinitest) or automatically, but decoupled from the normal build job on our CI server. You could create a Maven profile for that, which includes the integration tests again and is executed by a separate integration test CI job.

So let’s now look at how to write the integration test itself. The configuration of the necessary SOAP service in client mode is done using the org.apache.cxf.jaxws.JaxWsProxyFactoryBean, to which we forward our Service Endpoint Interface (SEI) via the method setServiceClass(). Additionally we configure the URL where our service can be reached, e.g. when calling it via SoapUI. It can be helpful to provide the base URL which we used to configure the CXFServlet as an accessible constant, along with the trailing part, which represents the concrete web service, in our WebServiceConfiguration.

As a final step we call the create() method that our configured instance of the JaxWsProxyFactoryBean provides. Cast to our service endpoint interface, this will create our web service client, which provides every method defined inside our WSDL file. Sadly, the CXF API doesn’t use the power of generics, so this cast is necessary here. The configuration class WebServiceIntegrationTestConfiguration.java for all our integration tests looks like this:

@Configuration
public class WebServiceIntegrationTestConfiguration {
 
    @Bean
    public WeatherService weatherServiceIntegrationTestClient() {
        JaxWsProxyFactoryBean jaxWsProxyFactory = new JaxWsProxyFactoryBean();
        jaxWsProxyFactory.setServiceClass(WeatherService.class);
        jaxWsProxyFactory.setAddress("http://localhost:8080" + WebServiceConfiguration.BASE_URL + WebServiceConfiguration.SERVICE_URL);
        return (WeatherService) jaxWsProxyFactory.create();
    }
}

Compared to our unit test, the new class for integration testing, WeatherServiceIntegrationTest, looks quite similar. But there are some differences: we configure our WebServiceIntegrationTestConfiguration and inject the service client instead of the endpoint POJO. Everything else remains the same:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes=WebServiceIntegrationTestConfiguration.class)
public class WeatherServiceIntegrationTest {
 
    @Autowired
    private WeatherService weatherServiceIntegrationTestClient;
 
    @Test
    public void getCityForecastByZIP() throws WeatherException {
        // Given
        ForecastRequest forecastRequest = generateDummyRequest();
 
        // When
        ForecastReturn forecastReturn = weatherServiceIntegrationTestClient.getCityForecastByZIP(forecastRequest);
 
        // Then
        assertNotNull(forecastReturn);
        // many asserts here
        assertEquals("22%", forecastReturn.getForecastResult().getForecast().get(0).getProbabilityOfPrecipiation().getDaytime());
    }
}

If we now run our new integration test, it will fail in most cases, giving us a javax.xml.ws.WebServiceException: Could not send Message […] Caused by: java.net.ConnectException: Connection refused. This is because we didn’t start our SOAP server, which can be done easily with a “Run as…” on SimpleBootCxfApplication.java. As described earlier, the integration test should involve the complete SOAP communication including the XML-to-Java marshalling and all the backend logic. After starting our SOAP server, the execution of our integration test should give us some green output. :)

And please don’t get confused by this manual step of starting our SOAP server: if we set up our continuous integration and deployment pipeline correctly, including working stages running our SOAP server, the integration tests will run automatically at the end of the deployment process.

Single system integration tests (aka yxzSystemTest.class)

But this cannot be everything! In our current project it soon became obvious that the well-known separation into unit and integration tests is not enough. If you look at the development process timeline, you’ll notice that your unit tests check the core functionality (your POJOs) at the very beginning of this process. The integration tests are executed automatically as the last step in your process – e.g. in the last Jenkins job in your pipeline, when everything else has been developed, checked into your version control system, built, provisioned, and deployed. But our gut feeling tells us that we should do something in between, checking as many as possible of the components that are necessary to provide our SOAP endpoint later. Late errors that occur in our integration tests are much more expensive than earlier ones.

Based on this observation and using the power of Spring (Boot), we developed the idea of another variant of tests. These should be executable completely on one system (e.g. your dev machine or CI server), firing up all necessary components at runtime if possible – or at least mocking them out. One can discuss endlessly about names, but we just called them single system integration tests (Java classes have a trailing SystemTest). They are by far the most fascinating technical test variant. We’ll soon see why.

As a preliminary remark, these tests should not be excluded from our normal build process, as they can be executed way faster than integration tests while also being way more stable and independent of other systems. Because they don’t include “IntegrationTest” in their naming, the Surefire configuration suggested above doesn’t exclude them – they simply run with the normal build.

Configuring a single system integration test is mostly identical to the configuration of a usual integration test, but they will typically differ in host and port. Because when your CI pipeline and the corresponding stages are up and running, your single system integration tests will run locally, while your integration tests will call remote SOAP endpoints. So although it’s a bit exaggerated to give our example nearly the same configuration class WebServiceSystemTestConfiguration.java as the one configuring the integration tests, we will do it anyway – and in real-world projects you will for sure need this separation. For our example, we change the port to 8090. In order to give Spring the possibility to inject correctly, we also rename our bean to weatherServiceSystemTestClient() instead of weatherServiceIntegrationTestClient():

jaxWsProxyFactory.setAddress("http://localhost:8090" + WebServiceConfiguration.BASE_URL + WebServiceConfiguration.SERVICE_URL);

In contrast to our integration tests, we want to fire up our SOAP server before the tests’ execution, run all test methods against that server, and finally tear it down when all methods are executed. Therefore we need a class that is annotated with @SpringBootApplication. But in contrast to what we’ve done with our SimpleBootCxfApplication in production code under src/main/java, the imports are different. Our new SimpleBootCxfSystemTestApplication.java imports the configuration class WebServiceSystemTestConfiguration:

@SpringBootApplication
@Import(WebServiceSystemTestConfiguration.class)
public class SimpleBootCxfSystemTestApplication {
 
    public static void main(String[] args) {
        SpringApplication.run(SimpleBootCxfSystemTestApplication.class, args);
    }
}

Finally we’ll have a look at our actual test class WeatherServiceSystemTest. It makes use of our well-known @RunWith annotation, but instead of using @ContextConfiguration, we use @SpringApplicationConfiguration, forwarding our aforementioned SimpleBootCxfSystemTestApplication.class. Additionally we use the @WebIntegrationTest annotation, which does all the magic for us: it pulls up our SOAP server, so all the methods can use it within their execution. As you can see, we forward our “SystemTest port” 8090 to it – because we configured our single system integration test configuration to use that one.
As a final step, we rename our injected WeatherService bean to “weatherServiceSystemTestClient”, so Spring knows how to autowire correctly. Again, our test case is only slightly different from our other test variants:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes=SimpleBootCxfSystemTestApplication.class)
@WebIntegrationTest("server.port:8090")
public class WeatherServiceSystemTest {
 
    @Autowired
    private WeatherService weatherServiceSystemTestClient;
 
    @Test
    public void getCityForecastByZIP() throws WeatherException {
        // Given
        ForecastRequest forecastRequest = generateDummyRequest();
 
        // When
        ForecastReturn forecastReturn = weatherServiceSystemTestClient.getCityForecastByZIP(forecastRequest);
 
        // Then
        assertNotNull(forecastReturn);
        // many asserts here
        assertEquals("22%", forecastReturn.getForecastResult().getForecast().get(0).getProbabilityOfPrecipiation().getDaytime());
    }
}

Looking at our simple example, the power of those tests is not always obvious. Some of my current project’s teammates initially thought that these couldn’t be that difficult. But they were surprised when they realized what’s behind the scenes. Seeing an entire “enterprisey SOAP endpoint” (like a BiPro web service) including all its components get started inside a test case and thoroughly checked makes everyone enthusiastic. Even the smallest change for the worse inside your complex SOAP endpoint project will make your IDE or CI server show some red light (assuming that you wrote good and meaningful tests, as described by my colleagues in many blog posts, e.g. in this recent one: Writing Better Tests With JUnit).

How to deal with your test cases

Having looked at all these different kinds of test cases, we should briefly discuss another aspect: no matter what technologies we use to bring our SOAP services to life – in the end there are those XML requests that our endpoints have to be able to handle correctly. So for me it’s really reassuring to know that my services are able to handle the XML requests that somebody fires against my web services (which I can easily reconstruct with a client like SoapUI). Here we come to realize that running automated tests involving these XML requests is inevitable, and we want to be able to do so all the time.

This raises the question: Where should we store our XML test files and how can we distribute them to all the test users, versioning them safely? Additionally all XML test files should be marked for update when something inside the API or the WSDL or XML schema changes. Also there shouldn’t be too many copies around that have to be taken care of. Based on those requirements, many tools worth (several) millions, but nevertheless useless, have been sold. This was a painful experience I had when I wrote my diploma thesis many years ago.

So why shouldn’t we put all those heavy tools aside and think about a more radical approach? Maybe one that doesn’t cover all of our requirements 100%. But hey! If this means up-to-date test cases, where all the project developers raise the alarm because their IDEs run into red test case execution results, or where Jenkins jobs break because of incorrect XML test files – why not?

The idea is simple: we just put all our test files, called e.g. “someFancyTest.xml”, into our version control system inside our project’s folder for test resources – let’s say somewhere beneath src/test/resources/requests – and load them into our ever-growing number of unit, integration, and system tests. Inside those tests we use the power of the JAX-B XML-to-Java unmarshalling to load these files into our test cases. This also gives us the opportunity to throw every single XML file manually against our web service endpoints – e.g. just to get a good gut feeling or to reproduce some errors. An example test case, put somewhere in src/test/resources/requests as XYZ-Testcase.xml, could look like this:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:gen="http://www.codecentric.de/namespace/weatherservice/general">
   <soapenv:Header/>
   <soapenv:Body>
      <gen:GetCityForecastByZIP>
         <gen:ForecastRequest>
            <gen:ZIP>99425</gen:ZIP>
            <gen:flagcolor>bluewhite</gen:flagcolor>
            <gen:productName>ForecastBasic</gen:productName>
            <gen:ForecastCustomer>
		<gen:Age>30</gen:Age>
		<gen:Contribution>5000</gen:Contribution>
		<gen:MethodOfPayment>Paypal</gen:MethodOfPayment>
            </gen:ForecastCustomer>
         </gen:ForecastRequest>
      </gen:GetCityForecastByZIP>
   </soapenv:Body>
</soapenv:Envelope>

But there’s a catch: we cannot use the extremely simplified configuration of the XML-to-Java marshalling that a web service framework like Spring Boot with Apache CXF provides. We have to make JAX-B work somehow. But this is not overly difficult. We could develop our own helper class that takes over those recurring tasks – or we take a closer look at the class XmlUtils inside our example project. Particularly the method with the – admittedly bulky – name readSoapMessageFromStreamAndUnmarshallBody2Object(InputStream fileStream, Class jaxbClass) provides us with everything that’s needed to do the job.

With the help of the XML parsers shipped with the standard JDK, it parses our XML file’s InputStream and builds an org.w3c.dom.Document. Therein it searches for the desired contents of the SOAP body needed to unmarshal it into the forwarded JAX-B POJO – which was of course generated via the JAX-WS Maven plugin (see part 1 of this tutorial).
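
A condensed sketch of what such a helper can look like (the actual XmlUtils in the example project is more elaborate; the class and method names here are made up):

import java.io.InputStream;
 
import javax.xml.bind.JAXBContext;
import javax.xml.parsers.DocumentBuilderFactory;
 
import org.w3c.dom.Document;
import org.w3c.dom.Node;
 
public final class SimpleXmlUtils {
 
    public static <T> T readSoapBodyPayload(InputStream soapMessage, Class<T> jaxbClass) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // needed to resolve the soapenv/gen prefixes
        Document document = factory.newDocumentBuilder().parse(soapMessage);
 
        // the payload is the first element child of <soapenv:Body>
        Node body = document.getElementsByTagNameNS(
                "http://schemas.xmlsoap.org/soap/envelope/", "Body").item(0);
        Node payload = body.getFirstChild();
        while (payload != null && payload.getNodeType() != Node.ELEMENT_NODE) {
            payload = payload.getNextSibling();
        }
 
        // unmarshal the extracted body element into the generated JAX-B class
        return JAXBContext.newInstance(jaxbClass)
                .createUnmarshaller()
                .unmarshal(payload, jaxbClass)
                .getValue();
    }
}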

With the resulting object we have our XML test file exactly as we need it inside our test cases. Using these is shown inside the class WeatherServiceXmlFileSystemTest.java, which again displays only a few differences compared to the other test cases:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes=SimpleBootCxfSystemTestApplication.class)
@WebIntegrationTest("server.port:8090")
public class WeatherServiceXmlFileSystemTest {
 
    @Autowired
    private WeatherService weatherServiceSystemTestClient;
 
    @Value(value="classpath:requests/GetCityForecastByZIPTest.xml")
    private Resource getCityForecastByZIPTestXml;
 
    @Test
    public void getCityForecastByZIP() throws WeatherException, XmlUtilsException, IOException {
        // Given
        GetCityForecastByZIP getCityForecastByZIP = XmlUtils.readSoapMessageFromStreamAndUnmarshallBody2Object(getCityForecastByZIPTestXml.getInputStream(), GetCityForecastByZIP.class);
 
        // When
        ForecastReturn forecastReturn = weatherServiceSystemTestClient.getCityForecastByZIP(getCityForecastByZIP.getForecastRequest());
 
        // Then
        assertNotNull(forecastReturn);
        // many asserts here
        assertEquals("22%", forecastReturn.getForecastResult().getForecast().get(0).getProbabilityOfPrecipiation().getDaytime());
    }
}

By the way: we don’t have to load the XML test files ourselves. This is handled in a much easier way by Spring’s org.springframework.core.io.Resource. Via the @Value annotation we just have to point it to the right directory where our test file is located – as said above, somewhere beneath src/test/resources/requests. And be sure not to forget the preceding keyword “classpath:”. Then everything should run fine.

Now we’ve saved our developer’s soul: we are able to test our SOAP web services sensibly and in an automated way, covering several development process steps. Maintenance, finding errors, and refactoring will be much easier, just to mention a few benefits. In addition, we can completely refrain from using expensive and heavyweight tools. And my favorite point: we document the correct usage of our SOAP web services! Because after all, having those mighty standards for validating data such as WSDL and XSD in place doesn’t mean that there’s no room for interpretation.

But we still haven’t covered everything! Our SOAP responses’ namespace prefixes look terrifying (“ns1”, “ns2”, …), and our big book entitled “Customer’s custom web service specification” requires that we always respond with an XML schema compliant response, even if somebody throws completely nonsensical requests against our web service endpoint. Additionally, our Ops people always want to know whether our web service is still working, and we want to know in detail what requests our endpoint has to face. We’ll see how to tackle these points in one of the next parts of this blog tutorial series.

The post Spring Boot & Apache CXF – Testing SOAP Web Services appeared first on codecentric Blog.

Spring Boot & Apache CXF – XML validation and custom SOAP faults


What about XML? Can’t we validate our data with XML easily? Just take the XML schema and … erm. What about the reaction to the validation’s outcome? Most of the time, we have to tailor this response and build a custom SOAP fault. But how does this work with Spring Boot and Apache CXF?

Spring Boot & Apache CXF – Tutorial

Part 1: Spring Boot & Apache CXF – How to SOAP in 2016
Part 2: Spring Boot & Apache CXF – Testing SOAP web services
Part 3: Spring Boot & Apache CXF – XML validation and custom SOAP faults

In the preceding parts we learned how a SOAP web service is configured and tested in detail with Spring Boot and Apache CXF. Now we want to look at a more particular case: there are some big web service specifications out there (look e.g. at the BiPro specs) that require our SOAP endpoint to react with a 100% XML schema compliant response in every situation – even when somebody sends bad XML requests which generate errors inside Apache CXF’s XML processing.

Given a request that can be processed successfully, our response will always be 100% XML schema compliant. We only have to follow the guidelines from this blog series’ first part and generate the Java classes from our WSDL and XSDs. As usual, there is a new GitHub project waiting in our tutorial repository – if you want to give it a try. :)

As a starting point we’ll use the preceding part’s project and fire up the SimpleBootCxfApplication with the help of a “Run as…”. Once our SOAP endpoint is up and running (check http://localhost:8080/soap-api/WeatherSoapService_1.0), we send a valid request against it with SoapUI:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:gen="http://www.codecentric.de/namespace/weatherservice/general">
   <soapenv:Header/>
   <soapenv:Body>
      <gen:GetCityForecastByZIP>
         <gen:ForecastRequest>
            <gen:ZIP>99998</gen:ZIP>
            <gen:flagcolor>bluewhite</gen:flagcolor>
            <gen:productName>ForecastProfessional</gen:productName>
            <gen:ForecastCustomer>
            <gen:Age>30</gen:Age>
            <gen:Contribution>5000</gen:Contribution>
            <gen:MethodOfPayment>Paypal</gen:MethodOfPayment>
            </gen:ForecastCustomer>
         </gen:ForecastRequest>
      </gen:GetCityForecastByZIP>
   </soapenv:Body>
</soapenv:Envelope>

Also, the reply should look like a valid SOAP response:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <soap:Body>
      <GetCityForecastByZIPResponse xmlns="http://www.codecentric.de/namespace/weatherservice/general" xmlns:ns2="http://www.codecentric.de/namespace/weatherservice/datatypes" xmlns:xmime="http://www.w3.org/2005/05/xmlmime" xmlns:ns4="http://www.codecentric.de/namespace/weatherservice/exception">
         <GetCityForecastByZIPResult>
            <Success>true</Success>
            <State>Deutschland</State>
            <City>Weimar</City>
            <WeatherStationCity>Weimar</WeatherStationCity>
            <ForecastResult>
               <ns2:Forecast>
                  <ns2:Date>2016-06-06T17:17:06.903+02:00</ns2:Date>
                  <ns2:WeatherID>0</ns2:WeatherID>
                  <ns2:Desciption>weather forecast Weimar</ns2:Desciption>
                  <ns2:Temperatures>
                     <ns2:MorningLow></ns2:MorningLow>
                     <ns2:DaytimeHigh>90°</ns2:DaytimeHigh>
                  </ns2:Temperatures>
                  <ns2:ProbabilityOfPrecipiation>
                     <ns2:Nighttime>5000%</ns2:Nighttime>
                     <ns2:Daytime>22%</ns2:Daytime>
                  </ns2:ProbabilityOfPrecipiation>
               </ns2:Forecast>
            </ForecastResult>
         </GetCityForecastByZIPResult>
      </GetCityForecastByZIPResponse>
   </soap:Body>
</soap:Envelope>

Standard SOAP faults

Approaching the topic for the first time, one could google something like “configure XML schema validation Apache CXF”. The results are somewhat misleading. Looking at the Apache CXF FAQ, for example, you’ll find thousands of different variants for activating XML schema validation in Apache CXF. Even worse, nearly all of the provided examples use Spring XML configuration, which we left behind already. But the configuration of the XML schema validation isn’t our real problem – funnily enough, it is already activated in our setup with Spring Boot and CXF. Just fire a non-valid XML request against our endpoint (which we’ll do in a minute). Instead we should shift our focus to the reaction to a validation error. The web service specifications don’t define whether we react, but exactly how we have to react.

In case of an error, Apache CXF reacts with a standard SOAP fault. We’ll try that out now. This time we’ll send a request against our endpoint that doesn’t comply with our XML schema. According to the weather-general.xsd, which is imported into our WSDL, our SOAP request’s root element has to be spelled GetCityForecastByZIP. But because we want to provoke an error, we’ll change the root tag to GetCityForecastByZIPfoo and send it against our endpoint:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:gen="http://www.codecentric.de/namespace/weatherservice/general">
   <soapenv:Header/>
   <soapenv:Body>
      <gen:GetCityForecastByZIPfoo>
         <gen:ZIP>99425</gen:ZIP>
      </gen:GetCityForecastByZIPfoo>
   </soapenv:Body>
</soapenv:Envelope>

Our (still running) endpoint should react with a SOAP response like this:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <soap:Body>
      <soap:Fault>
         <faultcode>soap:Client</faultcode>
         <faultstring>Unexpected wrapper element {http://www.codecentric.de/namespace/weatherservice/general}GetCityForecastByZIPfoo found.   Expected {http://www.codecentric.de/namespace/weatherservice/general}GetCityForecastByZIP.</faultstring>
      </soap:Fault>
   </soap:Body>
</soap:Envelope>

Now, our web service specification defines its own type of exception for the error case. It’s called WeatherException and is defined inside the mentioned weather-exception.xsd. This exception is attached to the SOAP operations with the wsdl:fault tag inside our WSDL. It defines the following structure:

<s:element name="WeatherException">
    <s:complexType>
        <s:sequence>
            <s:element name="Uuid" type="s:string"/>
            <s:element name="timestamp" type="s:dateTime"/>
            <s:element name="businessErrorId" type="s:string"/>
            <s:element name="bigBusinessErrorCausingMoneyLoss" type="s:boolean"/>
            <s:element name="exceptionDetails" type="s:string"/>
        </s:sequence>
    </s:complexType>
</s:element>

Our specification states that the element WeatherException and its children should be put beneath the soap:Fault element, inside the detail tag. Such requirements are well known from “enterprise WSDLs”. So we’ll have to implement this requirement to provide a specification-compliant SOAP endpoint.
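To make this requirement more tangible, a specification-compliant fault could look roughly like the following sketch (all concrete values are purely illustrative – we’ll build the real thing in the remainder of this article):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <soap:Body>
      <soap:Fault>
         <faultcode>soap:Client</faultcode>
         <faultstring>business error</faultstring>
         <detail>
            <WeatherException xmlns="http://www.codecentric.de/namespace/weatherservice/exception">
               <Uuid>ExtremeRandomNumber</Uuid>
               <timestamp>2016-06-06T17:17:06</timestamp>
               <businessErrorId>SOME_BUSINESS_ERROR_ID</businessErrorId>
               <bigBusinessErrorCausingMoneyLoss>true</bigBusinessErrorCausingMoneyLoss>
               <exceptionDetails>details about the original error</exceptionDetails>
            </WeatherException>
         </detail>
      </soap:Fault>
   </soap:Body>
</soap:Envelope>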

Not XML schema compliant vs. invalid XML

In case of an error, our WeatherException should be returned inside soap:Fault/detail – whatever the error might be. So our implementation should be able to handle not only errors caused by non-schema-compliant XML, but also requests whose XML is completely broken in itself. Here are some example requests, starting with an XML header that is missing its right angle bracket:

<?xml version="1.0" encoding="UTF-8"?

…a broken tag somewhere inside the document:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:gen="http://www.codecentric.de/namespace/weatherservice/general">
   <soapenv:Header/>
   <soapenv:Body>
      notRelevantHere />
   </soapenv:Body>
</soapenv:Envelope>

…a broken SOAP header:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:gen="http://www.codecentric.de/namespace/weatherservice/general">
   <soapenv:Header/
   <soapenv:Body>
   ...

and many more.

Looking at these examples, the requirement to handle such broken requests makes a lot of sense – especially if we follow the ideas from part 1 of this tutorial. There we rely on 100% XML schema compliant messages, which also implies that those messages contain well-formed XML. Without special handling, our framework would react with cryptic fault messages whenever the requests aren’t valid.

Apache CXF interceptor chains

So how do we approach this problem? As always, many roads lead to Rome. In the following I’ll show an approach that has worked quite well and proven itself in my current projects. Let’s first have a look at Apache CXF’s architecture. The docs show that CXF’s processing relies on interceptors, which are invoked sequentially, one at a time. They are arranged like pearls on a string and organized in phases. There’s an incoming interceptor chain, at whose end our web service implementation is invoked, as well as an outgoing interceptor chain, which handles the response processing. An image says more than a thousand words, so let’s look at this one:

apache_cxf_interceptor

Incoming web service calls always run through these chains. If an error occurs inside an interceptor of the incoming chain, the outgoing interceptor chain is provided with the error information and run through in the opposite direction. So if, for example, an error occurs in the phase READ, the remaining phases of the incoming interceptor chain aren’t invoked any more. By the way: Apache CXF provides an extra outgoing fault interceptor chain specifically for handling errors. And luckily enough, we can hook our own custom interceptors into these chains, so that we are able to react to every error that might appear.

This knowledge is quite useful in our case. If a request contains broken XML, the incoming chain stops in the UNMARSHAL phase at the latest and the outgoing fault interceptor chain is called. So we just have to implement an interceptor that witnesses as many errors as possible and enables us to react to them. The eye-catching phase here is org.apache.cxf.phase.Phase.PRE_STREAM: with that phase we are as far ahead in the chain as possible, so we don’t miss any error. We derive our interceptor from org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor and override the method void handleMessage(T message) throws Fault. Additionally we provide the phase inside the constructor while calling the super() method:

public class CustomSoapFaultInterceptor extends AbstractSoapInterceptor {
 
    private static final SoapFrameworkLogger LOG = SoapFrameworkLogger.getLogger(CustomSoapFaultInterceptor.class);
 
    public CustomSoapFaultInterceptor() {
        super(Phase.PRE_STREAM);
    }
 
    @Override
    public void handleMessage(SoapMessage soapMessage) throws Fault {
        Fault fault = (Fault) soapMessage.getContent(Exception.class);
        Throwable faultCause = fault.getCause();
        String faultMessage = fault.getMessage();
 
        if (containsFaultIndicatingNotSchemeCompliantXml(faultCause, faultMessage)) {
            WeatherSoapFaultHelper.buildWeatherFaultAndSet2SoapMessage(soapMessage, FaultConst.SCHEME_VALIDATION_ERROR);
        }
        else if (containsFaultIndicatingSyntacticallyIncorrectXml(faultCause)) {
            WeatherSoapFaultHelper.buildWeatherFaultAndSet2SoapMessage(soapMessage, FaultConst.SYNTACTICALLY_INCORRECT_XML_ERROR);
        }
    }
 
    ...

Detecting XML validation errors

First of all, we extract the faultCause and faultMessage inside the overridden method handleMessage(SoapMessage soapMessage). The latter is exactly what you would find inside the faultstring tag of a standard SOAP fault. Based on those two variables, we are able to detect which error occurred.

Sadly the CXF API does not provide any help here, and we have to implement the methods containsFaultIndicatingNotSchemeCompliantXml() and containsFaultIndicatingSyntacticallyIncorrectXml() ourselves. To figure out how Apache CXF reacts to non-schema-compliant or broken XML, one could think up many test cases and send each of them against our SOAP endpoint. This could get quite elaborate and cumbersome. Luckily there is already a bunch of test cases inside our example project that we can use. As we try every one of them out, some patterns emerge that we can use to build our detection methods:

1. Non-schema-compliant XML

The faultCause contains a javax.xml.bind.UnmarshalException if the request message contains non-schema-compliant XML. Besides that, there are cases like a missing closing tag where we have to check whether the faultMessage contains an “Unexpected wrapper element”:

private boolean containsFaultIndicatingNotSchemeCompliantXml(Throwable faultCause, String faultMessage) {
    if(faultCause instanceof UnmarshalException
        || isNotNull(faultMessage) && faultMessage.contains("Unexpected wrapper element")) {
        return true;
    }
    return false;
}

2. Generally incorrect XML

There are three kinds of errors indicating that the request message itself contains broken XML: either the faultCause contains a com.ctc.wstx.exc.WstxException, the wrapped cause is a com.ctc.wstx.exc.WstxUnexpectedCharException, or the faultCause contains an IllegalArgumentException:

private boolean containsFaultIndicatingSyntacticallyIncorrectXml(Throwable faultCause) {
    if(faultCause instanceof WstxException
        // If Xml-Header is invalid, there is a wrapped Cause in the original Cause we have to check
        || isNotNull(faultCause) && faultCause.getCause() instanceof WstxUnexpectedCharException
        || faultCause instanceof IllegalArgumentException) {
        return true;
    }
    return false;
}

Building custom SOAP faults

OK, now we “know” that some invalid or broken XML request tried to frighten our endpoint. Let’s now tailor our custom SOAP fault. The class WeatherSoapFaultHelper is able to change the SOAP fault to our needs. The method buildWeatherFaultAndSet2SoapMessage(SoapMessage message, FaultConst faultContent) extracts the org.apache.cxf.interceptor.Fault out of the org.apache.cxf.binding.soap.SoapMessage. Now we have the fault on which we can set our desired message and detail:

public static void buildWeatherFaultAndSet2SoapMessage(SoapMessage message, FaultConst faultContent) {
	Fault exceptionFault = (Fault) message.getContent(Exception.class);
	String originalFaultMessage = exceptionFault.getMessage();
	exceptionFault.setMessage(faultContent.getMessage());
	exceptionFault.setDetail(createFaultDetailWithWeatherException(originalFaultMessage, faultContent));
	message.setContent(Exception.class, exceptionFault);
}

It uses the class WeatherOutError inside the package transformation to assemble the actual WeatherException. As you surely remember, our specification states that the WeatherException has to be put into the detail tag of our soap:Fault:

private static final de.codecentric.namespace.weatherservice.exception.ObjectFactory objectFactoryDatatypes = new de.codecentric.namespace.weatherservice.exception.ObjectFactory();
 
public static WeatherException createWeatherException(FaultConst faultContent, String originalFaultMessage) {
    // Build SOAP-Fault detail <datatypes:WeatherException>
    WeatherException weatherException = objectFactoryDatatypes.createWeatherException();
    weatherException.setBigBusinessErrorCausingMoneyLoss(true);
    weatherException.setBusinessErrorId(faultContent.getId());
    weatherException.setExceptionDetails(originalFaultMessage);
    weatherException.setUuid("ExtremeRandomNumber");
    return weatherException;
}

There’s one interesting aspect about this: Apache CXF strips the root element off the piece of XML that one tries to set into soap:Fault/detail. Therefore we take a short look into the code of the method createFaultDetailWithWeatherException(String originalFaultMessage, FaultConst faultContent) of our WeatherSoapFaultHelper (exception handling is left out here for readability reasons):

private static Element createFaultDetailWithWeatherException(String originalFaultMessage, FaultConst faultContent) {
    Document weatherException = XmlUtils.marhallJaxbElementIntoDocument(WeatherOutError.createWeatherException(faultContent, originalFaultMessage));
    return XmlUtils.appendAsChildElement2NewElement(weatherException);
}

With a little help from our XmlUtils we marshal the WeatherException into an org.w3c.dom.Document. Because the method Fault.setDetail() expects an org.w3c.dom.Element and discards the root element, we prepend our WeatherException Document with a mock root element which Apache CXF can throw away later on.
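The helpers themselves are part of the example project. For orientation, here is a simplified sketch of what they might do internally – the method names are taken from the project, while the JAXB and DOM plumbing is condensed and assumes that the generated WeatherException class carries @XmlRootElement:

import javax.xml.bind.JAXBContext;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
 
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
 
public final class XmlUtilsSketch {
 
    // Marshal a JAXB-generated object into a fresh DOM Document
    public static Document marhallJaxbElementIntoDocument(Object jaxbElement) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document document = factory.newDocumentBuilder().newDocument();
        JAXBContext.newInstance(jaxbElement.getClass()).createMarshaller().marshal(jaxbElement, document);
        return document;
    }
 
    // Wrap the Document's root element into a throwaway mock root,
    // because Fault.setDetail() discards the root element it is given
    public static Element appendAsChildElement2NewElement(Document document) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document wrapper = builder.newDocument();
        Element mockRoot = wrapper.createElement("mockRootElement");
        wrapper.appendChild(mockRoot);
        Node importedWeatherException = wrapper.importNode(document.getDocumentElement(), true);
        mockRoot.appendChild(importedWeatherException);
        return wrapper.getDocumentElement();
    }
}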

Is there a way to build test cases using invalid XML requests?

Now we have an implementation that is said to detect all the cryptic errors Apache CXF produces when requests containing invalid or broken XML are sent. Additionally we have a bunch of test cases that we could manually send to our SOAP endpoint (e.g. with SoapUI). But do we have to believe the author? Fortunately not. :) Just think of all the things that could break with a small version bump of one of the used libraries or of Apache CXF itself.

Here our knowledge from the preceding part, Testing SOAP web services, comes in handy. We just have to write some automatically executable tests – ideally single system integration tests that fire up our SOAP server endpoint for the duration of the test’s execution.

And as we saw in the paragraph “How to deal with your test cases”, we could also load our test files and unmarshal them directly into the appropriate objects. Or could we? No, sadly not. I guess you already know why: because we want to send requests containing invalid XML, we’re not able to use the power of our JAXB unmarshallers. If we tried, we would get errors similar to those Apache CXF throws in its outbound chains when invalid XML triggers an error.

But we still want to be able to write tests that we can automate. And there’s a way to do it. The core problem is just to send text messages containing our invalid XML via HTTP POST against our endpoint. All we need is a mature HTTP client, and we are able to use our invalid XML test files. OK, let’s go! First of all we extend our pom by adding two new dependencies: org.apache.httpcomponents.httpclient and org.apache.httpcomponents.fluent-hc (we’ll see a sketch of the corresponding pom entries further below). But before starting to use our HTTP client, we’ll have a short look at a SOAP 1.1 compliant message including its HTTP headers (in SoapUI just click on “Raw”):

POST http://localhost:8080/soap-api/WeatherSoapService_1.0 HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: text/xml;charset=UTF-8
SOAPAction: "http://www.codecentric.de/namespace/weatherservice/GetCityForecastByZIP"
Content-Length: 289
Host: localhost:8080
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)
 
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:gen="http://www.codecentric.de/namespace/weatherservice/general">
   <soapenv:Header/>
   <soapenv:Body>
      notRelevantHere />
   </soapenv:Body>
</soapenv:Envelope>

Besides the content type, what’s really important here is the HTTP header field SOAPAction. According to the SOAP specification, it should contain the SOAP operation; if it’s not present, our endpoint will complain. But what’s the exact value we have to put in there? It is defined inside the WSDL, in the soapAction attribute of the soap:operation tag, which belongs to the wsdl:operation definition. So if we want to use our HTTP client – e.g. via the fashionable fluent API – we have to set the SOAPAction HTTP header correctly. And don’t forget the quotes (which you have to escape)!

Response httpResponseContainer = Request
            .Post("http://localhost:8090/soap-api/WeatherSoapService_1.0")
            .bodyStream(xmlFile, ContentType.create(ContentType.TEXT_XML.getMimeType(), Consts.UTF_8))
            .addHeader("SOAPAction", "\"http://www.codecentric.de/namespace/weatherservice/GetCityForecastByZIP\"")
            .execute();
 
HttpResponse httpResponse = httpResponseContainer.returnResponse();

These few lines of code suffice so that we can torture our endpoint with weird XML requests. As a somewhat enhanced version, the class SoapRawClient inside our example project does exactly that. We just configure it as a Spring bean in the WebServiceSystemTestConfiguration and provide it with our generated Service Endpoint Interface (SEI). It then dynamically derives the correct SOAPAction header from the SEI. Additionally, the method callSoapService(InputStream xmlFile) gives us a SoapRawClientResponse, which is really helpful while crafting our test cases.
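For completeness, the two HttpComponents dependencies mentioned above could be declared in the pom.xml roughly like this (the version number is only an example and may differ in your setup):

<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>fluent-hc</artifactId>
    <version>4.5.2</version>
</dependency>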

Single system integration tests with invalid XML

Now we have all the tools in place to finally write our desired test cases. Again we use our knowledge from the last part of this tutorial – especially regarding single system integration tests, because they automatically fire up our SOAP endpoint for the duration of the test’s execution. Besides, we know how to load our XML test cases and provide them as InputStream via Spring’s org.springframework.core.io.Resource in a really straightforward manner, without having to grapple with the file handling ourselves.

Our test case WeatherServiceXmlErrorSystemTest is based on the same principles as the WeatherServiceSystemTest we know from the last part. So let’s look into the details. We inject our SoapRawClient and configure the test files to load:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes=SimpleBootCxfSystemTestApplication.class)
@WebIntegrationTest("server.port:8090")
public class WeatherServiceXmlErrorSystemTest {
 
    @Autowired private SoapRawClient soapRawClient;
 
    @Value(value="classpath:requests/xmlerrors/xmlErrorNotXmlSchemeCompliantUnderRootElementTest.xml")
    private Resource xmlErrorNotXmlSchemeCompliantUnderRootElementTestXml;
 
    @Value(value="classpath:requests/xmlerrors/xmlErrorSoapBodyTagMissingBracketTest.xml")
    private Resource xmlErrorSoapBodyTagMissingBracketTestXml;
 
    // ... and many more

After that we provide every test file with its own test method. All these methods delegate to the generalized method checkXmlError(), to which we pass the appropriate test file and the expected kind of error. The latter is defined in FaultConst, which we already used in the paragraph “Building custom SOAP faults” to build our SOAP fault:

@Test
public void xmlErrorNotXmlSchemeCompliantUnderRootElementTest() throws InternalBusinessException, IOException {
    checkXmlError(xmlErrorNotXmlSchemeCompliantUnderRootElementTestXml, FaultConst.SCHEME_VALIDATION_ERROR);
}
 
@Test
public void xmlErrorSoapBodyTagMissingBracketTest() throws InternalBusinessException, IOException {
    checkXmlError(xmlErrorSoapBodyTagMissingBracketTestXml, FaultConst.SYNTACTICALLY_INCORRECT_XML_ERROR);
}
 
// ... and many more

And finally we’ll see some asserts in action. :) Inside the checkXmlError() method we check the resulting SOAP faults thoroughly. Among other things we look for an HTTP status code 500, and the faultstring tag should contain the message from our FaultConst. For simplicity reasons we use the SoapRawClientResponse’s method getFaultstringValue(), which extracts the faultstring from the HTTP message. With the help of the convenient getUnmarshalledObjectFromSoapMessage(Class jaxbClass) we also get our WeatherException out of the HTTP message. After that we can run our assert statements:

private void checkXmlError(Resource testFile, FaultConst faultContent) throws InternalBusinessException, IOException {
    // When
    SoapRawClientResponse soapRawResponse = soapRawClient.callSoapService(testFile.getInputStream());
 
    // Then
    assertNotNull(soapRawResponse);
    assertEquals("500 Internal Server Error expected", 500, soapRawResponse.getHttpStatusCode());
    assertEquals(faultContent.getMessage(), soapRawResponse.getFaultstringValue());
 
    de.codecentric.namespace.weatherservice.exception.WeatherException weatherException = soapRawResponse.getUnmarshalledObjectFromSoapMessage(de.codecentric.namespace.weatherservice.exception.WeatherException.class);
    assertNotNull("<soap:Fault><detail> has to contain a de.codecentric.namespace.weatherservice.exception.WeatherException",  weatherException);
 
    assertEquals("ExtremeRandomNumber", weatherException.getUuid());
    assertEquals("The correct BusinessId is missing in WeatherException according to XML-scheme.", faultContent.getId(), weatherException.getBusinessErrorId());
}

One important note here: by using the fully qualified name of your exception (e.g. de.codecentric.namespace.weatherservice.exception.WeatherException) you avoid confusion with identically named classes (de.codecentric.namespace.weatherservice.WeatherException). One could argue that this should be handled by renaming the exceptions. But sadly that is something you won’t be able to do in real-world projects, where the WSDL is a given and immutable artifact. Just have a look at big enterprisey web services like BiPro.

Now we have achieved everything we wanted: our framework validates the XML requests against the XML schemas, and we decide what the SOAP faults will look like. At the same time we can test all this automatically while being able to send arbitrarily strange XML requests against our SOAP endpoint. Admittedly, this solution is rather complex from an Apache CXF user’s point of view – configuring custom SOAP faults as a reaction to XML validation errors could be far easier to do. But for now it works very well for us.

As always, there are things left to look at. Just think about the weird namespaces or the monitoring of SOAP messages with the Elastic Stack. We’ll have a look at these topics in the upcoming blog posts.

The post Spring Boot & Apache CXF – XML validation and custom SOAP faults appeared first on codecentric Blog.

How I Caused Confusion about Spring Boot


A few weeks ago, I published a blog post about how Spring Boot binds configuration values to JavaBeans. Shortly after it was published, Stéphane Nicoll reached out to me. We discussed my findings and came to the conclusion that my blog post was built on false assumptions about how configurations should be defined in Spring Boot.

What was I thinking?!

But let me start from the beginning. I started to think about configuration binding for the first time when I experienced problems with setting spring.datasource.driver-class-name using OS environment variables. This got me thinking about how property names relate to environment variable definitions. Furthermore, I was keen to find a best practice for defining property keys so that they can be set using environment variables. I looked through the docs for information about the logic involved when Spring Boot loads configuration from various sources. When I could not find it, I started to reverse engineer configuration binding. But this just made the confusion worse. And it all resulted in a blog post that doesn’t really help anybody.

What’s next?

So the question is: how should configuration loading be approached in Spring Boot applications? Stéphane told me that the canonical way of defining property names is to use hyphens. But since Spring Boot cannot enforce one format for every source (OS environment variables on certain operating systems, for instance, don’t allow dots), there is relaxed binding to support those property sources. Moreover, he introduced me to configuration metadata, which is a way to tell your IDE about the structure of your configuration. Tools like IntelliJ can then assist you when writing configurations. What I learned is that you should define your configuration classes first and then think about how they can be mapped to your property sources, instead of doing it the other way around. For this reason I’m planning to write a blog post about best practices for defining and loading configuration in Spring Boot soon.
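To illustrate the idea of “configuration classes first”, here is a minimal sketch (all names are made up) of a @ConfigurationProperties class whose canonical, hyphenated property key acme.connection-timeout can also be set e.g. as the environment variable ACME_CONNECTION_TIMEOUT thanks to relaxed binding:

import org.springframework.boot.context.properties.ConfigurationProperties;
 
@ConfigurationProperties(prefix = "acme")
public class AcmeProperties {
 
    // Bound from acme.connection-timeout (or a relaxed variant of it)
    private int connectionTimeout = 5000;
 
    public int getConnectionTimeout() {
        return connectionTimeout;
    }
 
    public void setConnectionTimeout(int connectionTimeout) {
        this.connectionTimeout = connectionTimeout;
    }
}

Such a class still has to be registered, for example via @EnableConfigurationProperties(AcmeProperties.class) on one of your configuration classes.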

Credits

I’d like to thank Stéphane Nicoll for being such a pleasant person to discuss this matter with.

The post How I Caused Confusion about Spring Boot appeared first on codecentric Blog.

Spring Boot & Apache CXF – Logging & Monitoring with Logback, Elasticsearch, Logstash & Kibana


Cool! SOAP endpoints based on microservice technologies. But how do we find an error inside one of our many “micro servers”? What about the content of our SOAP messages, and how do we log in general? And last but not least: how many products did we sell over the last period? Sounds like we’ll need another blog article dealing with logging and monitoring with Spring Boot and Apache CXF!

Spring Boot & Apache CXF – Tutorial

Part 1: Spring Boot & Apache CXF – How to SOAP in 2016
Part 2: Spring Boot & Apache CXF – Testing SOAP web services
Part 3: Spring Boot & Apache CXF – XML validation and custom SOAP faults
Part 4: Spring Boot & Apache CXF – Logging & Monitoring with Logback, Elasticsearch, Logstash & Kibana

After reading through this blog series’ previous three articles, working with Spring Boot & Apache CXF should feel like a more and more common task. We set up new SOAP endpoints, test them with unit & integration tests and react with XML schema compliant messages – even when the requests are broken XML fragments. But before we set up our first SOAP service in a production environment, we want to know in detail what our SOAP messages contain when they travel over the wire. Not only to achieve reasonable collaboration while testing – we need to know what the inbound and outbound SOAP messages are comprised of.

And at the latest while making our way to production, the stakeholders from the non-IT departments will ask for real numbers explaining how often our services are being called – maybe also asking how many products we sold over the last period, and so forth. Additionally, our smart architecture evolves into something like the standard thing to do when a SOAP endpoint is needed in a corporate project. The number of servers is therefore growing rapidly, and we can’t manage to look into each and every machine any more just to know what messages it processes right at that moment.

We need transparency over all our SOAP messages. But how do we log with Apache CXF? What framework should we use? And how do we answer the questions from the non-IT departments that flood us, without being forced to dig into every single server’s log files? We’ll try to answer all of those questions, step by step. Let’s just catch a breath for the moment – and then start!

A consistent logging framework: slf4j and Logback

As usual, you can reproduce every step on your own – the GitHub repository tutorial-soap-spring-boot-cxf is waiting for you. The entire following step can be found in the project step6_soap_message_logging and is based on the second-to-last step from the second part of this blog series, Testing SOAP Web Services. This is mainly because you won’t necessarily need custom SOAP faults, so we start from a common basis.

Initially one could ask which logging framework we should use in our architecture. The usage of the Simple Logging Facade for Java (slf4j) is something we surely don’t need to discuss, and Logback is a really good implementation of it. Both frameworks’ leading position convinced the Spring Boot team to set them as the standard inside the Spring project. Sadly this is not the case with Apache CXF for now, which makes use of Java SE logging from java.util.logging. But there’s a remedy that helps us find one logging framework as a common ground for Spring Boot, Apache CXF and our own implementation: from version 2.2.8 onwards, Apache CXF is completely configurable as to what logging framework the whole stack uses. Knowing this, we configure slf4j right now. In order to do so, we create a folder META-INF with another one named cxf inside src/main/resources. In there, we create a file org.apache.cxf.Logger containing only one line:

org.apache.cxf.common.logging.Slf4jLogger

And we are done. From the next startup onwards, our whole implementation will use slf4j and our Logback configuration. So now we are in the comfortable position to be able to configure every log statement with our logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <logger name="org.springframework" level="WARN"/>
    <logger name="org.apache.cxf" level="INFO"/>
    <logger name="de.jonashackt.tutorial" level="DEBUG"/>
 
    <appender name="file" class="ch.qos.logback.core.FileAppender">
        <file>weather-service.log</file>
        <append>true</append>
        <encoder>
            <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
        </encoder>
    </appender>
 
    <root level="INFO">
        <appender-ref ref="file" />
    </root>
</configuration>

The documentation contains a detailed introduction on how to configure Logback. For our purpose this quite simple configuration is sufficient. At first we include the Logback base configuration from Spring Boot, so we have a good foundation as a starting point. Second, we configure our main frameworks and implementations with appropriate logging levels – Apache CXF, for example, will run on “INFO” level. This configuration is really just an example, and you can play around with it to match your exact needs. After that we configure a simple log file appender, containing a file name and a simple pattern.

Just a quick note: To produce logging events inside our code, we just use an org.slf4j.Logger and instantiate a concrete logger with the help of the method getLogger(Class clazz) from the class org.slf4j.LoggerFactory:

private static final Logger LOG = LoggerFactory.getLogger(ClassWhereWeUseThisLoggerInside.class);

After that we are free to use the whole bandwidth of logging methods like .info(), .debug() and so on.
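For instance, a simple and a parameterized log statement could look like this (the message texts and the zipCode variable are of course made up):

LOG.info("WeatherService endpoint is up and running.");
LOG.debug("Processing forecast request for ZIP {}", zipCode);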

How to configure SOAP message logging on Apache CXF endpoints

To make Apache CXF log our SOAP messages, the configuration of a LoggingFeature inside our WebServiceConfiguration will be sufficient. This can be done globally on the SpringBus:

@Bean(name = Bus.DEFAULT_BUS_ID)
public SpringBus springBus() {
    SpringBus springBus = new SpringBus();
    LoggingFeature logFeature = new LoggingFeature();
    logFeature.setPrettyLogging(true);
    logFeature.initialize(springBus);
    springBus.getFeatures().add(logFeature);
    return springBus;
}

Now every endpoint of our SOAP server will log all incoming and outgoing messages. A second option is to configure the logging directly on the Apache CXF endpoint. The configuration looks quite similar:

@Bean
public Endpoint endpoint() {
    EndpointImpl endpoint = new EndpointImpl(springBus(), weatherService());
    endpoint.setServiceName(weather().getServiceName());
    endpoint.setWsdlLocation(weather().getWSDLDocumentLocation().toString());
    endpoint.publish(SERVICE_URL);
 
    LoggingFeature logFeature = new LoggingFeature();
    logFeature.setPrettyLogging(true);
    logFeature.initialize(springBus());
    endpoint.getFeatures().add(logFeature);
 
    return endpoint;
}

Having chosen one of the two options to configure SOAP message logging, we can fire up one of our (single system) integration tests, like the WeatherServiceSystemTest, which brings everything necessary with it. Looking at our console, among other things we should be able to find the incoming SOAP message which Apache CXF logged, containing some header information like Address, Encoding and the HTTP headers:

2016-07-14 17:52:50.988  INFO 42064 --- [nio-8090-exec-1] o.a.c.s.W.WeatherService.WeatherService  : Inbound Message
----------------------------
ID: 1
Address: http://localhost:8090/soap-api/WeatherSoapService_1.0
Encoding: UTF-8
Http-Method: POST
Content-Type: text/xml; charset=UTF-8
Headers: {Accept=[*/*], cache-control=[no-cache], connection=[keep-alive], Content-Length=[662], content-type=[text/xml; charset=UTF-8], host=[localhost:8090], pragma=[no-cache], SOAPAction=["http://www.codecentric.de/namespace/weatherservice/GetCityForecastByZIP"], user-agent=[Apache CXF 3.1.6]}
Payload: <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCityForecastByZIP xmlns="http://www.codecentric.de/namespace/weatherservice/general" xmlns:ns2="http://www.codecentric.de/namespace/weatherservice/exception" xmlns:ns3="http://www.codecentric.de/namespace/weatherservice/datatypes" xmlns:xmime="http://www.w3.org/2005/05/xmlmime">
      <ForecastRequest>
        <ZIP>99425</ZIP>
        <flagcolor>blackblue</flagcolor>
        <productName>ForecastBasic</productName>
        <ForecastCustomer>
          <Age>67</Age>
          <Contribution>500</Contribution>
          <MethodOfPayment>Bitcoin</MethodOfPayment>
        </ForecastCustomer>
      </ForecastRequest>
    </GetCityForecastByZIP>
  </soap:Body>
</soap:Envelope>

This is only a first step, but it should really be sufficient to know what’s going on on the wire.

Tailor Apache CXF’s SOAP message log statements

Up to this point everything has been running very smoothly. But we actually want to decide for ourselves what the log statements should look like. For example, there could be the need to log only the payload’s content, which represents our SOAP message. This should be no problem given the knowledge about the Apache CXF interceptor chains from the last part of this article series. Let’s dig into the CXF class org.apache.cxf.interceptor.LoggingInInterceptor. The method protected void logging(Logger logger, Message message) populates an org.apache.cxf.interceptor.LoggingMessage object with all the necessary information for the standard log output – as we just saw in the preceding paragraph. Apart from the encoding, HTTP method and so forth, our payload is stored there. The method’s last statement calls another method, formatLoggingMessage(LoggingMessage loggingMessage), which is really simple: it only calls toString() on the populated LoggingMessage object. That’s exactly our starting point. We just derive our own class from org.apache.cxf.interceptor.LoggingInInterceptor and override the method formatLoggingMessage(LoggingMessage loggingMessage). And now we are in charge of how Apache CXF will log our SOAP messages.

All the following steps are again prepared inside the GitHub example project step7_soap_message_logging_payload_only. And off we go! Let’s create a class LoggingInInterceptorXmlOnly.java and override the mentioned method like this:

import org.apache.cxf.interceptor.LoggingInInterceptor;
import org.apache.cxf.interceptor.LoggingMessage;
 
public class LoggingInInterceptorXmlOnly extends LoggingInInterceptor {
 
    @Override
    protected String formatLoggingMessage(LoggingMessage loggingMessage) {
        StringBuilder buffer = new StringBuilder();
        buffer.append("Inbound Message:\n");
 
        // Only write the payload (the SOAP XML) to the logger
        if (loggingMessage.getPayload().length() > 0) {
            buffer.append(loggingMessage.getPayload());
        }
        return buffer.toString();
    }
}

To let Apache CXF use our own LoggingInInterceptor, we have to configure it as a Spring bean and hook it into the incoming interceptor chain. Therefore we extend our WebServiceConfiguration:

@Bean
public AbstractLoggingInterceptor logInInterceptor() {
    LoggingInInterceptor logInInterceptor = new LoggingInInterceptorXmlOnly();
    // Do not call setPrettyLogging(true) here: the inbound messages are already pretty,
    // and setting it makes Apache CXF throw empty lines into them
    return logInInterceptor;
}

What’s important here: the bean has to be of the type org.apache.cxf.interceptor.AbstractLoggingInterceptor, and we should refrain from using the obvious method setPrettyLogging(true), because it will do the exact opposite and deform our SOAP message by adding unattractive empty lines (with one exception: inside a test in IntelliJ, that log message still looks nice).

And we didn’t overlook the “In” contained in LoggingInInterceptor – we have to do the same for our responses. For that we create a class LoggingOutInterceptorXmlOnly.java and derive it from org.apache.cxf.interceptor.LoggingOutInterceptor. Apart from the log message containing “Outbound”, it’s nearly identical to our inbound interceptor implementation (see the sketch after the following bean definition). The corresponding Spring bean in our WebServiceConfiguration will also deliver an AbstractLoggingInterceptor, but in this case we can go ahead and use the method setPrettyLogging(true) – at this point, the Apache CXF implementation surprisingly differs completely from the incoming message logging:

@Bean
public AbstractLoggingInterceptor logOutInterceptor() {
    LoggingOutInterceptor logOutInterceptor = new LoggingOutInterceptorXmlOnly();
    logOutInterceptor.setPrettyLogging(true);
    return logOutInterceptor;
}
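For completeness, a sketch of what LoggingOutInterceptorXmlOnly could look like – apart from the “Outbound” label, it mirrors the inbound variant shown above:

import org.apache.cxf.interceptor.LoggingMessage;
import org.apache.cxf.interceptor.LoggingOutInterceptor;
 
public class LoggingOutInterceptorXmlOnly extends LoggingOutInterceptor {
 
    @Override
    protected String formatLoggingMessage(LoggingMessage loggingMessage) {
        StringBuilder buffer = new StringBuilder();
        buffer.append("Outbound Message:\n");
 
        // Only write the payload (the SOAP XML) to the logger
        if (loggingMessage.getPayload().length() > 0) {
            buffer.append(loggingMessage.getPayload());
        }
        return buffer.toString();
    }
}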

Finally we hook our own logging interceptors into the Apache CXF interceptor chains. And since we don’t want to lose a single message, we also configure them into the fault chains that are executed in case of an error. All this is done directly on the SpringBus inside our WebServiceConfiguration:

@Bean(name = Bus.DEFAULT_BUS_ID)
public SpringBus springBus() {
    SpringBus springBus = new SpringBus();
    springBus.getInInterceptors().add(logInInterceptor());
    springBus.getInFaultInterceptors().add(logInInterceptor());
    springBus.getOutInterceptors().add(logOutInterceptor());
    springBus.getOutFaultInterceptors().add(logOutInterceptor());
    return springBus;
}

As we fire up our WeatherServiceSystemTest, the SOAP messages inside our log statements only contain what we intended:

2016-07-15 08:35:05.522  INFO 45895 --- [nio-8090-exec-1] o.a.c.s.W.WeatherService.WeatherService  : Inbound Message:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCityForecastByZIP xmlns="http://www.codecentric.de/namespace/weatherservice/general" xmlns:ns2="http://www.codecentric.de/namespace/weatherservice/exception" xmlns:ns3="http://www.codecentric.de/namespace/weatherservice/datatypes" xmlns:xmime="http://www.w3.org/2005/05/xmlmime">
      <ForecastRequest>
        <ZIP>99425</ZIP>
        <flagcolor>blackblue</flagcolor>
        <productName>ForecastBasic</productName>
        <ForecastCustomer>
          <Age>67</Age>
          <Contribution>500</Contribution>
          <MethodOfPayment>Bitcoin</MethodOfPayment>
        </ForecastCustomer>
      </ForecastRequest>
    </GetCityForecastByZIP>
  </soap:Body>
</soap:Envelope>

Elasticsearch, Logstash, Kibana – How to log SOAP messages in 2016

When I learned about the possibilities of the ELK stack (or, more recently, the “Elastic Stack”), I was really enthusiastic. And I am not the only one, considering the countless articles on our codecentric blog concerning the topic.

An idea of a colleague of mine made me think that it might be worth a try to map the requirements of monitoring SOAP messages onto the capabilities of an ELK stack. At the beginning we already mentioned that solely logging to log files on one server out of many inside a big cluster of servers is probably not a good idea – especially if we set up a whole lot of servers and think about the need for staging environments that multiply the resulting number again. Not to mention the need to analyze our log data to derive key performance indicators – ideally presented in nice-looking dashboards that not only IT nerds get something out of. Not to mention the need for mechanisms to drill into a specific case of error and look at the very specific SOAP request, so that we can analyze in detail what happened.

And last but not least, there was a driver behind my approach that really only came to my mind while writing this blog post: many solutions in the area of enterprise application integration try to sell their products by promising great analysis features and dashboards. But in my experience these solutions often promised things they couldn’t really deliver. For example, we want transparency over all the incoming and outgoing SOAP messages, which often has a massive performance impact on our integration applications – something we really want to avoid. And as if that weren’t enough, the cost of activating those nice-looking dashboards often exceeds our integration project’s budget. As if we didn’t already have enough problems inside those projects… Using an open source solution helps us to proceed really fast in our project. If we gain the desired success, we can add some nice enterprise features into our monitoring system, like alerting and security features or enterprise support, just to mention a few.

And the final point: I promised to use the Elastic Stack in the first article of this blog series. So now I can deliver on my own promise. OK. Let’s go!

Initial ELK architecture

There are, for sure, countless possibilities to set up an Elastic Stack. The search engine Elasticsearch is completely dedicated to scalability: if you fire up several instances inside the same network, they’ll find each other automatically and connect into one joint cluster. In the same way, the process of shipping the logs – which is quite costly in some cases – could be buffered with some kind of queueing mechanism or the like. We have these options ready if our ELK installation gets really big, so we are prepared for huge amounts of data.

In our use case we want to start with a preferably simple approach for now. It should give us everything we need while remaining extensible. Colleagues of mine have described a whole bunch of solutions for bringing a Spring Boot application together with an ELK server. Just have a look at our blog or the Softwerker special regarding Elasticsearch (German only at the moment). From all of them we just pick the most suitable alternative and expand it to cope with the requirements of SOAP message monitoring. And that’s the point where an architectural picture comes in handy:

spring-boot-cxf-elk-architecture

Our SOAP endpoints, which we made available through the usage of Spring Boot and Apache CXF, log everything through slf4j and Logback after the configuration work from the opening paragraphs of this article. A really easy but nevertheless powerful approach is to utilize the logstash-logback-encoder, which does all the heavy lifting of shipping our log events to the ELK server. And that comes with the additional benefit that we don’t have to install or manage any separate log shipper or agent on our SOAP servers.

The logstash-logback-encoder’s appenders deliver the log events to Logstash – all of them already JSON encoded. Logstash will then index those log events and stuff them into Elasticsearch. Once every log event is pushed to Elasticsearch, we are able to search and visualize the results with the help of the web application Kibana. Optionally we can put an enterprise-firewall-friendly reverse proxy like Nginx in front to provide Kibana on port 80.

Right. That sounds like rocket science? But don’t worry, we’ll see it in a moment with the aid of our example project. Therefore – and you’re already familiar with that – we have a new project step8_logging_into_elasticstack inside our GitHub repository.

Configuring the logstash-logback-encoder

Let’s begin with the configuration of the logstash-logback-encoder. It comprises some encoders that preprocess our log events and put their contents into JSON-style fields (key and value). These standard fields contain a good starting package for our later analysis of logs inside the ELK server.
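For orientation, a single encoded log event takes roughly the following shape (abbreviated; the exact field set depends on the encoder version and configuration):

{
  "@timestamp": "2016-07-28T13:12:45.026+02:00",
  "@version": 1,
  "message": "Inbound Message:\n<soap:Envelope ...",
  "logger_name": "org.apache.cxf.services.WeatherService",
  "thread_name": "http-nio-8090-exec-1",
  "level": "INFO",
  "level_value": 20000
}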

But before we dig into the configuration of the logstash-logback-encoder, we have to add the appropriate dependency inside our pom.xml:

<!-- Logging with ELK -->
<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>4.6</version>
  <!-- Exclude logback-core to avoid conflicts with the newer version shipped with Spring Boot,
  see https://github.com/logstash/logstash-logback-encoder/issues/153 -->
  <exclusions>
    <exclusion>
      <artifactId>logback-core</artifactId>
      <groupId>ch.qos.logback</groupId>
    </exclusion>
  </exclusions>
</dependency>

As you can see, it’s recommended to exclude the transitive dependency to Logback itself, because Spring Boot already brings its own version into the mix. And sadly, at this point, besides issue 153 there are some more headaches concerning the interaction of Spring Boot, logstash-logback-encoder and Logback. For your wellbeing, I would recommend for now that you stick with Spring Boot version 1.3.3.RELEASE. Otherwise you’ll run into issue 160, which is actually a problem inside Logback 1.1.7. This error is fixed in 1.1.8, which isn’t released yet and therefore isn’t a Spring Boot ingredient for now. Hopefully the release schedule of Logback 1.1.8 will match the one of Spring Boot 1.4.0 – then the whole problem will be gone. If you can’t wait to use a newer Spring Boot version, you could try overriding the Logback version inside the Maven properties tag (but I can’t really recommend that):

<logback.version>1.1.6</logback.version>

But now back to the essence. To configure the encoder, we expand the logback-spring.xml we know from the project step7_soap_message_logging_payload_only. We remove our FileAppender and replace it with the appropriate appender from the logstash-logback-encoder:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <logger name="org.springframework" level="WARN"/>
    <logger name="de.jonashackt.tutorial" level="DEBUG"/>
    <logger name="org.apache.cxf" level="INFO"/>
 
    <!-- Logstash-Configuration -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.99.100:5000</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"service_name":"WeatherService 1.0"}</customFields>
            <fieldNames>
                <message>log_msg</message>
            </fieldNames>
        </encoder>
        <keepAliveDuration>5 minutes</keepAliveDuration>
    </appender>
 
  <root level="INFO">
        <appender-ref ref="logstash" />
  </root>
</configuration>

Inside our example project we rely on the LogstashTcpSocketAppender, which is one of the many available appender variants. As a next step, the alternative usage of an async appender would be imaginable, resulting in another dimension of decoupling the generation of logs from our productive web service calls. You could achieve a similar effect by leveraging an intermediary Redis, for example.

But for now, and for most cases, the LogstashTcpSocketAppender will suffice, because it will never block a logging thread either. Internally all TcpAppenders work asynchronously with the help of the LMAX Disruptor RingBuffer. In the unlikely event of the ring buffer being overrun, log events will be dropped to prevent our system from crashing. But for those extreme situations the mentioned alternative solutions could be worth a look. In each and every case we avoid any effects on the performance of our SOAP endpoints while at the same time gaining complete transparency over every incoming and outgoing message. Hence we are one step ahead of the expensive enterprise application integration suites.

Inside our logback-spring.xml we forward the destination, consisting of our Elastic server’s IP and the Logstash port, to the LogstashTcpSocketAppender. Additionally we add two field definitions inside the LogstashEncoder: first we configure a custom field containing the name of our SOAP web service, which we can evaluate later on in Kibana. Second, we optionally rename the standard field message to log_msg, just to provide better readability and identification in Kibana.

In the end we configure a keepAlive every 5 minutes in the connection between our SOAP server and the Elastic server – just to provide a bit more reliability.

Set up our ELK server and configure Logstash

As we already learned, there are manifold options to set up an Elastic server. Again we’ll use a simple but powerful setup: cloning a GitHub repository and calling docker-compose up inside the root folder – pretty simple, right?! The only prerequisites are a running Docker installation along with Docker Compose; Homebrew on the Mac has both available for you. Many thanks at this point to deviantony for providing this easy ELK setup. 🙂

Just one moment before we give in to the temptation of firing up our Elastic server: we should configure Logstash first. An example configuration is again provided inside the mentioned GitHub repository, in the folder logstash/config/logstash.conf. You can ignore the mutate filter that’s used there for now:

input {
  tcp {
    port => 5000
  }
}
 
filter {
  ### get all fields that were extracted by the logstash-logback-encoder
  json {
    source => "message"
  }
  ### filter out keep-alive messages, which have no valid JSON format and produce _jsonparsefailure in tags
  if "_jsonparsefailure" in [tags] {
      drop { }
  }
}
 
output {
  elasticsearch {
      hosts => "elasticsearch:9200"
  }
}

Every Logstash configuration is comprised of three building blocks: input, filter and output. Inside the input block we configure our entry channel. In our concrete use case we utilize the input plugin tcp – one of the many possible input plugins.

Inside the filter block we harness one of the likewise many filter plugins. Based on the fields filled by the logstash-logback-encoder, the json filter plugin expands the contained JSON into actual Logstash data structures, which will then be pushed to Elasticsearch. As we configured a keepAlive in our logstash-logback-encoder, we have to filter out the keep-alive messages again, because we don’t want ugly “_jsonparsefailure” log statements inside our Kibana Discovery perspective.

In the last section we configure the goal of all of this inside our output block: the port of our Elasticsearch instance. Now we can finally bring our Elastic server to life by submitting a docker-compose up on the command line.

Starting the log analysis with Kibana

It will take some time, so feel free to grab a coffee. Eventually our Elastic server will have started, with all the components like Logstash, Elasticsearch and Kibana running. Depending on the IP of your Docker host and on your setup, the URL to Kibana can differ slightly. If your Docker host has the IP 192.168.99.100, the URL to Kibana should look like this: http://192.168.99.100:5601/app/kibana. The port is defined inside the docker-compose.yml of our ELK setup. If you open up your browser, Kibana should look like this:

kibana_first_screen

At this point our Elastic server seems to be running. As a next step, we should create an index pattern under Settings/Indices. We just accept the preallocated logstash-* and click on Create. After that, Kibana should show a list of fields:

kibana-create-index

Now we finally want to produce some log statements and evaluate whether our “initial ELK architecture” works in the field. Therefore we start a single system integration test (see part 2 of this blog series) like the WeatherServiceSystemTest. This should be a no-brainer, so that we can focus on working with Kibana. Going back to the Discover page, we see our first log statements:

kibana-first-logs

With the help of the add buttons we can configure our Discovery perspective to use the indexed fields as column headings. For example, the custom field service_name and the output of the actual log event inside log_msg are interesting. The time heading is always presented as the first column:

kibana-fields-added

And that’s it! Our SOAP server based on Spring Boot and Apache CXF is now logging into our Elastic server. From now on, the play instinct of some readers will come to life, because now we are able to leverage all the power of the analysis and visualization tooling provided in Kibana. And we can produce some of the shiny dashboards our marketing staff will envy us for.

If we really want to exhaust all the possibilities, we can optimize the data that is flowing into our Elastic server. For instance, we can stuff our SOAP messages into their own Elasticsearch fields, so that we can evaluate them far better. Additionally we want to know which log statements belong to a specific SOAP request. Erm… Let’s actually build that right now. The play instinct of the author is also coming through. 🙂

Logging SOAP messages into their own Elasticsearch fields

To answer all the questions of the non-IT departments, it will pay off to have custom fields especially for the inbound and outbound SOAP messages. That’s because an evaluation of a specific Elasticsearch field is done far more easily later on in Kibana, and sometimes it’s just impossible otherwise. Therefore we need a concept to push the SOAP messages logged by Apache CXF into their own fields in Elasticsearch.

Again there’s a whole bunch of possible solutions. But one of them is easy to use and at the same time really powerful when it comes to its features. The elegance of the concept is quite thrilling: we just use the Mapped Diagnostic Context – in short, MDC. As part of the slf4j API, it is implemented by Logback and based on the book Patterns for Logging Diagnostic Messages in Pattern Languages of Program Design, written by R. Martin, D. Riehle and F. Buschmann. But fear not – you don’t have to read the whole book now. From a user’s perspective, the Logback MDC is just some kind of map into which we can put our log messages at the time of their generation, accompanied by an appropriate key. The logstash-logback-encoder in the end just transfers every MDC record into a field inside a log event, and these fields travel through all the intermediate stations, one by one, into an Elasticsearch field. This is also suitable for many SOAP requests in parallel, which are processed inside their respective threads; in addition, the logstash-logback-encoder offers Markers, which attach such a custom field to one single log event instead of to the whole thread. Because we already know how to hook into the Apache CXF logging mechanism, our only remaining concern is how to write our SOAP messages into such a field.
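Reduced to a minimal sketch (the keys and messages are made up, and LOG is an org.slf4j.Logger as instantiated earlier), the two mechanisms boil down to this:

import org.slf4j.MDC;
import static net.logstash.logback.marker.Markers.append;
 
// Variant 1: MDC - the field sticks to the current thread until it is
// removed (remember that threads and therefore their MDC get reused!)
MDC.put("soap-request-id", "42");
LOG.info("This log event carries the soap-request-id field.");
MDC.remove("soap-request-id");
 
// Variant 2: Marker - the field is attached to this single log event only
LOG.info(append("soap-message-inbound", "<soap:Envelope>...</soap:Envelope>"),
        "This log event carries the soap-message-inbound field.");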

OK, let’s roll up our sleeves! As usual, we’ll find everything inside a separate project in our GitHub repository, called step9_soap_message_logging_into_custom_elasticsearch_field. We start with the adjustment of the interceptors we built in step7_soap_message_logging_payload_only and rename them appropriately: SoapMsgToMdcExtractionLoggingInInterceptor.java and SoapMsgToMdcExtractionLoggingOutInterceptor.java. Instead of logging the SOAP messages directly, we push them into their own custom field with the help of the logstash-logback-encoder’s method net.logstash.logback.marker.Markers.append. For that we have to initialize an org.slf4j.Logger first:

import org.apache.cxf.interceptor.LoggingInInterceptor;
import org.apache.cxf.interceptor.LoggingMessage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static net.logstash.logback.marker.Markers.append;
 
public class SoapMsgToMdcExtractionLoggingInInterceptor extends LoggingInInterceptor {
 
    private static final Logger LOG = LoggerFactory.getLogger(SoapMsgToMdcExtractionLoggingInInterceptor.class);
    private static final String SOAP_MESSAGE_INBOUND = "soap-message-inbound";
 
    @Override
    protected void log(java.util.logging.Logger logger, String message) {
        // Deliberately do nothing here, because we don't want the CXF implementation to log;
        // we just want to push the SOAP message to Logback -> Logstash -> Elasticsearch -> Kibana
    }
 
    @Override
    protected String formatLoggingMessage(LoggingMessage loggingMessage) {
        if (loggingMessage.getPayload().length() > 0) {
            LOG.info(append(SOAP_MESSAGE_INBOUND, loggingMessage.getPayload().toString()), "Log Inbound-SoapMessage to Elasticsearch");
        }
        return "";
    }
}

Furthermore, we override the method log(Logger logger, String message) of the class org.apache.cxf.interceptor.AbstractLoggingInterceptor to suppress Apache CXF's own SOAP message logging. We use the same approach for the outbound logging interceptor. After that we just start our WeatherServiceSystemTest and add the new Elasticsearch fields in our Kibana Discover perspective. Now the SOAP messages should look like this:

[Image: kibana-soap-messages-customfields]

Now we could fire up any custom query onto our new Elasticsearch fields. For example:

soap-message-inbound: "*<MethodOfPayment>Bitcoin</MethodOfPayment>*"

The results would show all incoming SOAP messages that contain Bitcoin as the method of payment. And building on top of that result set, we could create further evaluations, such as counts or correlations with the time of occurrence… and so forth. At that point there are almost no boundaries. You should just watch out for SOAP messages that are really big – for example if they contain more than one Base64-encoded PDF file. In this case it is best to shorten the messages with an appropriate Logstash filter before pushing them to Elasticsearch.
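One possible way to do that – sketched here with Logstash's truncate filter rather than the grok filter; the field names are the custom fields from above, while the length limit is just an assumed value – would be a filter section like this in the Logstash pipeline configuration:

filter {
  # cap very large SOAP messages before indexing (the limit is an assumed value)
  truncate {
    fields => ["soap-message-inbound", "soap-message-outbound"]
    length_bytes => 32768
  }
}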

Correlating all log events relating to a specific SOAP request

Secondly, we wanted to know which log statements belong to a specific SOAP request. Inside the scope of our example project we are often the only user of our implementation, because we just run one test that issues one request at a time. In this case, the log entries inside Kibana are mostly in chronological order – but this is not guaranteed. The situation changes as soon as our SOAP endpoint goes into production and is used by many users: they will send many parallel requests to our servers, and we can no longer tell which log entry belongs to which SOAP request. And if we run more than one SOAP server, the situation gets even worse.

So the need for some kind of request ID arises, for which we can use the filter mechanisms of Kibana. Again the concept of the MDC helps us tremendously. Besides the many benefits mentioned, it holds all entries thread-wise – meaning per request thread. Putting a UUID into the game is therefore the perfect fit. There's just one thing you have to know about the usage of the MDC: the specification allows the reuse of field contents when a thread dies. So we have to make sure to generate a fresh UUID at the beginning of every request and to remove it again at the end.

At the same time we want to capture every single log event that our system produces – no matter whether this event occurred in our own functional implementation or in Apache CXF. So we have to populate the MDC at the earliest possible stage. At this point the Servlet specification, which the Apache CXF servlet implements, comes in handy: it allows the usage of a servlet filter, which can hook in before and after every request that a servlet processes.

This seems to be a perfect fit: a servlet filter notices every single SOAP request, and the correct usage of the MDC guarantees the uniqueness of every ID that is pushed into Elasticsearch. Sounds like a dream team – and we are going to implement it now. Our project step9_soap_message_logging_into_custom_elasticsearch_field already contains an implementation:

import org.slf4j.MDC;
import javax.servlet.*;
import java.io.IOException;
import java.util.UUID;
 
public class LogCorrelationFilter implements Filter {
    private static final String SERVICE_CALL_ID_KEY = "service-call-id";
 
    @Override
    public void init(FilterConfig filterConfig) throws ServletException {}
 
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        MDC.put(SERVICE_CALL_ID_KEY, UUID.randomUUID().toString());
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove(SERVICE_CALL_ID_KEY);
        }
    }
 
    @Override
    public void destroy() {}
}

We implement the javax.servlet.Filter interface and override all necessary methods. We are only interested in doFilter(ServletRequest request, ServletResponse response, FilterChain chain). Inside of it, we generate a UUID and put it into the org.slf4j.MDC – together with the key service-call-id. After that, a chain.doFilter(request, response) call ensures that the Apache CXF servlet proceeds with its processing. At the end, inside the finally block, we remove the UUID from the MDC to prevent reuse of this concrete ID. And that's all for the servlet filter. We only have to configure it as a Spring bean, so we add it to the WebServiceConfiguration.java:

@Bean
public Filter filter() {
    return new LogCorrelationFilter();
}

Now we can start our WeatherServiceSystemTest and should recognize the new Elasticsearch field popping up inside our Kibana Discover perspective. To test our feature, we repeatedly run the system test, or we fire up our SimpleBootCxfApplication.java and use SoapUI's load test feature to treat our SOAP endpoint with many parallel threads. With this approach we generate valid test data and are then able to filter for a specific service-call-id inside our log statements. To do so, just open a log entry in the Discover perspective and click the small magnifier icon with the plus symbol behind the field service-call-id:

[Image: kibana-filter-for-service-call-id]

The result is quite striking. Kibana shows us all log entries that belong to a specific SOAP request, even if our endpoint handles many parallel calls:

[Image: kibana-filtered-for-one-service-call-id]

By now we have accomplished nearly every requirement initially stated for a logging and monitoring solution for SOAP endpoints. Every framework used now logs through slf4j and Logback. We also know for sure which SOAP messages travel over the wire and are able to help our service consumers quite extensively in case of questions or errors during the implementation or test phase. Additionally, we are able to spot errors inside our own implementation substantially faster. And all of that holds for an almost arbitrary number of SOAP servers based on Spring Boot and Apache CXF. In other words, we no longer have to dig into every single server's log file of our microservice-style SOAP endpoints, which massively shortens the time needed to find an error.

Even in the case of massive parallel user activity, we know which log entry belongs to which SOAP request. And we are prepared for nearly every question that arises from a non-IT department, as we have our dedicated custom fields containing the SOAP messages. We can create the appropriate visualization to answer every question and configure nice-looking dashboards – there are many good tutorials out there, for instance on elastic.co. And last but not least, we are one step ahead of the big enterprise application integration suites: we are able to log without affecting performance in production. It's finally a really nice monitoring solution based on recent technologies.

This article answered many questions that arose in the first part of this blog post series. Nevertheless there will definitely be a follow-up. At least one topic is ready to get off the starting block: We could package all the ideas of the previous articles and make them available for all our SOAP endpoints… But I don´t want to give away too much for now.

The post Spring Boot & Apache CXF – Logging & Monitoring with Logback, Elasticsearch, Logstash & Kibana appeared first on codecentric Blog.


Spring Boot & Apache CXF – SOAP on steroids fueled by cxf-spring-boot-starter


You haven't read any of this blog series' articles yet? Perfectly fine – because the best is yet to come. We'll now combine Spring Boot and Apache CXF in their own spring-boot-starter. By doing so, we'll have our SOAP endpoints up and running even faster, while at the same time leveraging the power of all the features introduced so far!

Spring Boot & Apache CXF – Tutorial

Part 1: Spring Boot & Apache CXF – How to SOAP in 2016
Part 2: Spring Boot & Apache CXF – Testing SOAP web services
Part 3: Spring Boot & Apache CXF – XML validation and custom SOAP faults
Part 4: Spring Boot & Apache CXF – Logging & Monitoring with Logback, Elasticsearch, Logstash & Kibana
Part 5: Spring Boot & Apache CXF – SOAP on steroids fueled by cxf-spring-boot-starter

In the preceding parts we learned a lot about how to work with Spring Boot and Apache CXF. There's only one problem left: with every new SOAP endpoint we have to walk through all the steps from the first blog post again. We now have all those current technologies, but this doesn't feel right, and we somehow contravene DRY. Luckily the Spring Boot guys also thought about this problem, because it is possible to build a custom spring-boot-starter which brings in everything necessary for a specific use case.

But wait! Isn´t there an official Apache CXF spring-boot-starter?

In the meantime the Apache CXF developers also became aware of the absence of their own spring-boot-starter and just released one (thanks again for pointing that out, Stéphane Nicoll 🙂 ). If one takes a deeper look at this spring-boot-starter, it soon becomes clear that it's only focused on Apache CXF. Which is absolutely fine! But sadly for us this means that this spring-boot-starter just initializes Apache CXF – and that's it. As described in the first article of this blog post series, it merely saves us the following code:

@Bean
public ServletRegistrationBean dispatcherServlet() {
    return new ServletRegistrationBean(new CXFServlet(), "/soap-api/*");
}
@Bean(name=Bus.DEFAULT_BUS_ID)
public SpringBus springBus() {
    return new SpringBus();
}

As mentioned, this is perfectly fine from the viewpoint of Apache CXF! But for us – embedded inside our enterprisey environments – it is not enough, because with everything else we are left on our own.

Introducing our own spring-boot-starter

Because of all this we decided to develop our own spring-boot-starter for Apache CXF that also brings in all the necessary surrounding technologies. We open-sourced it on GitHub.

But wait! Isn't a spring-boot-starter a Spring developer's exclusive way to make functionality available through Spring Boot? Hell no! Besides this huge list of starters from the Spring guys themselves, it's possible for every developer to write their own. And there are already quite a few on the official community contributions list.

The Spring Boot developers describe how to build your own spring-boot-starter on docs.spring.io. Additionally a colleague of mine summarized all the necessary steps in his blog article.

If you're going to build your own starter, you'll have the chance to deep-dive into the Spring Boot technology and gain a more profound understanding of this awesome framework. At the same time, technically focused libraries aren't developed multiple times and can be used in all projects with the same requirements. So building spring-boot-starters doesn't contradict the ideas behind the microservices movement – it actually supports them. The exchange of technical libraries – ideally via GitHub – is explicitly encouraged.

cxf-spring-boot-starter

Let´s get straight to the point: Similar to the spring-boot-starter-batch-web, we made the cxf-spring-boot-starter available on GitHub. It makes our developer´s life a whole lot easier and lightens our workload by doing all the stuff we otherwise have to do on our own. This includes:

  • Initializing and setting up all Apache CXF components ( of course with 100% Spring Java configuration 😉 )
  • Extremely easy SOAP message Logging configuration
  • Extraction of all in- and out-going SOAP XML messages ready for your Elastic-Stack (including own custom fields and correlation of all log events relating to one SOAP request)
  • Providing a builder for custom SOAP faults that will respond in case of non-XML-schema-compliant requests
  • Comprehensive support for Unit, Integration and Single System Integration tests including assistance with invalid XML requests

As the devil is in the detail when it comes to configuring the jaxws-maven-plugin, that configuration is completely encapsulated within a second component: the cxf-spring-boot-starter-maven-plugin. With it, the generation of all necessary Java classes from the WSDL and its imported XSD files becomes a cakewalk. Additionally, the cxf-spring-boot-starter-maven-plugin scans your resource folder for the WSDL and ensures that every generated class ends up on the classpath. It also configures the jaxws-maven-plugin in such a way that no absolute paths appear inside the @WebServiceClient annotated class – which will surely spare you pain on your CI server.

Sounds good? Let´s go!

To show the advantages of the cxf-spring-boot-starter, I wish to propose the following approach to you: We completely set up an example project with all features from the ground up that touches every topic described in this blog series – but hopefully much faster. 😉 As usual there is an example project inside our tutorial repository, where you can reconstruct every step.

So let´s go! As mentioned in the first article we speed up the initial project creation using the Spring Initializr. Given a group and artifact we create our project and are ready to go. The generated POM is derived from the spring-boot-starter-parent. It depends on the spring-boot-starter-test and has the build plugin spring-boot-maven-plugin in place.

To embed our cxf-spring-boot-starter, we have to add the following dependency inside the appropriate section (the current version is 1.0.7.RELEASE):

<dependencies>
    <dependency>
        <groupId>de.codecentric</groupId>
        <artifactId>cxf-spring-boot-starter</artifactId>
        <version>1.0.7.RELEASE</version>
    </dependency>
</dependencies>

Thereafter we also add the build plugin cxf-spring-boot-starter-maven-plugin inside our build section (1.0.7.RELEASE is also the current version):

<build>
    <plugins>
        <plugin>
            <groupId>de.codecentric</groupId>
            <artifactId>cxf-spring-boot-starter-maven-plugin</artifactId>
            <version>1.0.7.RELEASE</version>
            <executions>
                <execution>
                    <goals>
                        <goal>generate</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Due to a bug in logback 1.1.7 and the fact that we use the logstash-logback-encoder to extract our in- and outgoing SOAP XML messages for processing inside the Elastic Stack, we have to manually downgrade logback to 1.1.6 at the moment – which is only temporary and just until logback 1.1.8 is released:

<properties>
    <logback.version>1.1.6</logback.version>
</properties>

Full speed ahead!

Now we throw our WSDL and XSD files somewhere inside src/main/resources – where somewhere really means that you can choose any subfolder you want. 🙂 Inside our example project, all files can again be found in src/main/resources/service-api-definition. Then we just need to start the generation of all necessary Java classes from the WSDL via mvn generate-sources – or alternatively import the project into our favorite IDE, which does that in the background.

To complete the last step, we again need to implement the SEI. That´s the WeatherServiceEndpoint, where we start implementing our business logic later on. After that we create a @Configuration annotated class where we initialize our endpoint:

@Configuration
public class WebServiceConfiguration {
 
    @Autowired private SpringBus springBus;
 
    @Bean
    public WeatherService weatherService() {
        return new WeatherServiceEndpoint();
    }
 
    @Bean
    public Endpoint endpoint() {
        EndpointImpl endpoint = new EndpointImpl(springBus, weatherService());
        endpoint.setServiceName(weatherClient().getServiceName());
        endpoint.setWsdlLocation(weatherClient().getWSDLDocumentLocation().toString());
        endpoint.publish("/WeatherSoapService_1.0");
        return endpoint;
    }
 
    @Bean
    public Weather weatherClient() {
        return new Weather();
    }
}

As you can see, we don´t have to initialize Apache CXF. The cxf-spring-boot-starter does that for us. We just need to autowire the org.apache.cxf.bus.spring.SpringBus, which is needed to set up the endpoint.

And that's it! We are now able to start our SimpleBootCxfApplication via “Run” inside our IDE or by executing mvn spring-boot:run in our console. A short look at http://localhost:8080/soap-api assures us that Apache CXF is completely up and running and our web service endpoint is registered. Real SOAP calls with SoapUI will work – try it!

Testing SOAP web services

Now that we have covered all key aspects of article one, let's have a look at the second one about testing SOAP web services. We'll focus on the single system integration tests, because for them the impact of the cxf-spring-boot-starter is the biggest of all test variants. In a single system integration test we need a JAX-WS client that is capable of calling our local server – and therefore the URL of our local SOAP endpoint. Since one of the features of the cxf-spring-boot-starter is to automatically set the web service's URL, we need to know how to obtain it from the starter.

Therefore we take the WeatherServiceXmlFileSystemTest from the second blog post. It will be configured with the new SimpleBootCxfSystemTestConfiguration – and there´s the point we have to look into. We autowire the de.codecentric.cxf.configuration.CxfAutoConfiguration – then we are able to get our required URL via cxfAutoConfiguration.getBaseUrl(). After that the configuration of our JAX-WS client looks like this:

@Autowired private CxfAutoConfiguration cxfAutoConfiguration;
 
@Bean
public WeatherService weatherServiceSystemTestClient() {
    JaxWsProxyFactoryBean jaxWsProxyFactory = new JaxWsProxyFactoryBean();
    jaxWsProxyFactory.setServiceClass(WeatherService.class);
    jaxWsProxyFactory.setAddress("http://localhost:8090" + cxfAutoConfiguration.getBaseUrl() + SimpleBootCxfConfiguration.SERVICE_URL);
    return (WeatherService) jaxWsProxyFactory.create();
}

To adjust the base URL of our CXF endpoints, it's enough to extend our application.properties with the soap.service.base.url property, set to an appropriate path like:

soap.service.base.url=/my-custom-service-api

The title of the CXF-generated service overview site, displayed when entering a root URL like http://localhost:8080/my-custom-service-api, can be configured just as easily. This comes in handy when that site is published to consumers of your services and you want to give it a customized name. Just add the cxf.servicelist.title property. You can see both properties in action inside the example project's application.properties.
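For illustration, both properties together could look like this inside application.properties – the title value here is just a made-up example:

soap.service.base.url=/my-custom-service-api
cxf.servicelist.title=Weather SOAP services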

Our test case WeatherServiceXmlFileSystemTest changes only slightly. The cxf-spring-boot-starter comes with the convenient utility class de.codecentric.cxf.common.XmlUtils that does all the (un-)marshalling of XML and JAXB objects inside our test cases:

@RunWith(SpringRunner.class)
@SpringBootTest(
        classes=SimpleBootCxfSystemTestApplication.class,
        webEnvironment= SpringBootTest.WebEnvironment.DEFINED_PORT,
        properties = {"server.port=8090"}
)
public class WeatherServiceXmlFileSystemTest {
 
    @Autowired private WeatherService weatherServiceSystemTestClient;
 
    @Value(value="classpath:requests/GetCityForecastByZIPTest.xml")
    private Resource getCityForecastByZIPTestXml;
 
    @Test
    public void getCityForecastByZIP() throws WeatherException, IOException, BootStarterCxfException {
        // Given
        GetCityForecastByZIP getCityForecastByZIP = XmlUtils.readSoapMessageFromStreamAndUnmarshallBody2Object(getCityForecastByZIPTestXml.getInputStream(), GetCityForecastByZIP.class);
 
        // When
        ForecastReturn forecastReturn = weatherServiceSystemTestClient.getCityForecastByZIP(getCityForecastByZIP.getForecastRequest());
 
        // Then
        assertNotNull(forecastReturn);
        assertEquals(true, forecastReturn.isSuccess());
        ...
    }
}

OK, there's another difference. But this has nothing to do with the cxf-spring-boot-starter – it's due to the new testing features that come with Spring Boot 1.4.x, where most annotation classes are condensed into org.springframework.boot.test.context.SpringBootTest. This annotation substitutes all the others like @WebIntegrationTest, @SpringApplicationConfiguration, @ContextConfiguration and so forth (for more info see the spring.io blog post). Also, writing just SpringRunner instead of SpringJUnit4ClassRunner reduces the time you need to type.

And that´s all you need for testing SOAP web services.

XML validation and custom SOAP faults

The third blog article's conclusion (about the many steps necessary to configure custom SOAP faults that respond when XML schema validation fails) wasn't the most encouraging. But because this requirement is often found in enterprisey web service specifications (just look at the BiPro spec), the cxf-spring-boot-starter makes life a lot easier for us here: basically, we can forget every step from the third article and just implement the interface de.codecentric.cxf.xmlvalidation.CustomFaultBuilder. And that's all!

In more detail: there are two methods we have to override. createCustomFaultMessage(FaultType faultType) gives us the possibility to tailor the fault messages inside our SOAP faults. Through the forwarded parameter of type de.codecentric.cxf.common.FaultType we also know whether it was an XML schema validation error or just syntactically incorrect XML that was sent to our endpoint. Due to the design of the cxf-spring-boot-starter, we can even react in a schema-compliant way to errors that have nothing to do with XML schema validation at all.
With the help of the second method, createCustomFaultDetail(String originalFaultMessage, FaultType faultType), we are able to build our custom XML-schema-compliant error messages that are placed inside the detail tag. We can see both in action inside the class WeatherFaultBuilder of our example project:

@Component
public class WeatherFaultBuilder implements CustomFaultBuilder {
 
	private de.codecentric.namespace.weatherservice.exception.ObjectFactory objectFactoryDatatypes = new de.codecentric.namespace.weatherservice.exception.ObjectFactory();
 
	@Override
	public String createCustomFaultMessage(FaultType faultType) {
		if(FaultType.SCHEME_VALIDATION_ERROR.equals(faultType))
			return CustomIds.NON_XML_COMPLIANT.getMessage();
		else if(FaultType.SYNTACTICALLY_INCORRECT_XML_ERROR.equals(faultType))
			return CustomIds.COMPLETE_USELESS_XML.getMessage();
		else
			return CustomIds.SOMETHING_ELSE_WENT_TERRIBLY_WRONG.getMessage();
	}
 
	@Override
	public WeatherException createCustomFaultDetail(String originalFaultMessage, FaultType faultType) {
		// Build SOAP-Fault detail <datatypes:WeatherException>
		WeatherException weatherException = objectFactoryDatatypes.createWeatherException();
		weatherException.setBigBusinessErrorCausingMoneyLoss(true);
		setIdBasedUponFaultContent(faultType, weatherException);
		weatherException.setExceptionDetails(originalFaultMessage);
		weatherException.setUuid("ExtremeRandomNumber");
		return weatherException;
	}
...
}

Inside the method createCustomFaultDetail(String originalFaultMessage, FaultType faultType) we should be careful to respond with the correct exception type. Sometimes there are multiple exception types of the same name inside those specifications. Again, this can be found in the BiPro web services.

Everything that´s left here is to define this @Component annotated class as a Spring Bean. That´s it again 🙂

We surely shouldn't just trust the author – we want to see a running test case! And for this case, too, the cxf-spring-boot-starter has something for you: the de.codecentric.cxf.soaprawclient.SoapRawClient. It allows us to send non-XML-schema-compliant XML to our endpoints, which is needed to provoke the XML validation failures that lead to our custom-tailored SOAP faults.

To show this in action, we reuse the WeatherServiceXmlErrorSystemTest from the third article. It's a single system integration test that provokes the desired validation errors and checks whether our endpoint reacts in the desired way. We just expand it a bit: besides the obligatory changes that come with Spring Boot 1.4.x, we replace the manually implemented constants inside our WeatherServiceXmlErrorSystemTest with the de.codecentric.cxf.common.FaultType provided by the cxf-spring-boot-starter. We should also assert against the error text we provided inside our WeatherFaultBuilder:

assertEquals(WeatherFaultBuilder.CUSTOM_ERROR_MSG, soapRawResponse.getFaultstringValue());

The test cases´ configuration changes also just slightly: It uses the CxfAutoConfiguration to get the needed URL:

@Autowired private CxfAutoConfiguration cxfAutoConfiguration;
 
@Bean
public SoapRawClient soapRawClient() throws BootStarterCxfException {
    return new SoapRawClient(buildUrl(), WeatherService.class);
}
 
private String buildUrl() {
    // return something like http://localhost:8084/soap-api/WeatherSoapService
    return "http://localhost:8087"
            + cxfAutoConfiguration.getBaseUrl()
            + SimpleBootCxfConfiguration.SERVICE_URL;
}

If we run our WeatherServiceXmlErrorSystemTest now, everything should be green. Additionally, we can marvel at the XML-schema-compliant error messages via SoapUI or the Boomerang SOAP & REST Client. And that's it: we configured our custom SOAP faults in no time.

Logging & Monitoring with Logback, Elasticsearch, Logstash & Kibana

Now we've already reached this blog series' last topic. The subject of logging in particular can cause a whole lot of work, and again the cxf-spring-boot-starter supports us effectively. Using it, all logging is automatically configured to use slf4j and Logback. If we want to see the in- and outgoing SOAP XML messages inside our log file or console, we only have to set the property soap.messages.logging inside our application.properties.
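Assuming the property is a simple boolean switch, that's a one-liner in application.properties:

soap.messages.logging=true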

If you want to push your logs into an Elastic Stack, the cxf-spring-boot-starter will take over most of the heavy lifting. The only requirements are setting the property

soap.messages.extract=true

and having a logback-spring.xml in place. It can look exactly like the one in the previous article:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <logger name="org.springframework" level="WARN"/>
    <logger name="de.jonashackt.tutorial" level="DEBUG"/>
    <logger name="org.apache.cxf" level="INFO"/>
 
    <!-- Logstash-Configuration -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.99.100:5000</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"service_name":"WeatherService 1.0"}</customFields>
            <fieldNames>
                <message>log_msg</message>
            </fieldNames>
        </encoder>
        <keepAliveDuration>5 minutes</keepAliveDuration>
    </appender>
 
  <root level="INFO">
        <appender-ref ref="logstash" />
  </root>
</configuration>

If you have a running Elastic Stack in place (the last article contains some hints on how to get this up and running easily), that's it.

But there's more: all SOAP messages are placed into the custom fields soap-message-inbound and soap-message-outbound, which makes them much easier to analyse. All those fields are defined in the enumeration de.codecentric.cxf.logging.ElasticsearchField. Furthermore, all log events created by one SOAP request are automatically correlated, and the cxf-spring-boot-starter tries to determine the SOAP service method's name (although this currently only works for WSDL specification 1.1) and places it into the custom field soap-method-name. Finally we have everything in place that's needed to thoroughly log our SOAP messages.

More speed in SOAP service development!

Now we´ve got everything we need to develop our SOAP services even faster, while having all the features ready to support the needs of our enterprisey environments. With the cxf-spring-boot-starter we can provide a solution for every problem described in our four blog articles. And as you can see: that´s a lot.

But if you´re really missing something – no problem! The cxf-spring-boot-starter is waiting for your contribution! That´s the power of Open Source software – you benefit from it in your projects, and you can give something back.

Now: Have fun with the cxf-spring-boot-starter! We are waiting for feedback and pull requests. 🙂

The post Spring Boot & Apache CXF – SOAP on steroids fueled by cxf-spring-boot-starter appeared first on codecentric AG Blog.

(J)Unit Testing Principles


This article is about basic principles for Java JUnit tests, even though most of the following principles probably also hold for unit tests written in other programming languages.

Every now and then I really wonder why we – the folks developing software – have so much trouble agreeing on how to do certain things. Unit testing has been around long enough that one would believe there are no fundamental discussions on the dos and don'ts left. But those discussions do happen – constantly :)! Therefore I will try to give some reasonable principles for JUnit testing in the following, to maybe mitigate at least some of the discussion points.

JUnit tests must not make use of any infrastructure

At some point in time this seems to happen in every project: tests are written that require a running database system or some other piece of the infrastructure the application is running on. Databases are a real favourite here.

If you feel the urgent need to write these kinds of tests, just grab a cup of coffee, relax, and consider mocking the database access in your unit tests!
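A minimal sketch of how that could look with Mockito, reusing the ItemService/ItemRepository names from the example further below and assuming the service takes its repository as a constructor argument – note that no database is started at any point:

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ItemServiceTest {

    // the repository is a mock – no running database required
    private final ItemRepository itemRepository = mock(ItemRepository.class);
    private final ItemService itemService = new ItemService(itemRepository);

    @Test
    public void readsItemWithoutAnyInfrastructure() {
        when(itemRepository.findById("it1"))
                .thenReturn(new Item("it1", "Item 1", "This is item 1", 2000, true));

        assertThat(itemService.getItemNameUpperCase("it1"), is("ITEM 1"));
    }
}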

If testing on the physical database layer is required, integration tests are probably the way to go. But those are only executed on specific environments, or locally if desired – not as part of the normal JUnit cycle. One very important aspect of executing JUnit tests is speed!

“If people don’t stick to the plan, then this leads to chaos, and no-one likes chaos” – Parker

Personally, I consider these tests mostly useless and prefer testing this implicitly when testing features. This is then done preferably using automated acceptance tests or with traditional QA.

Test the public API of your application

The first topic might pass with little discussion – if you are lucky. This one will not. Just googling for it brings up endless discussions on whether or not private methods should be tested explicitly or implicitly through the public API.

Make everyone's life easier and only write JUnit tests against the public API of your application.

There cannot be any private methods that are not executed through the public interface anyway – unless we are considering really esoteric cases. Therefore, all private methods are tested implicitly when testing the corresponding public API.

Testing private methods directly not only requires some technical wrinkle, it also makes the tests more susceptible to refactorings of the code under test. And the public interface provides the business logic of your application – and that is what we want to test. A more in-depth view on this specific topic can be found in this excellent article.

Test classes of the application in isolation

Considering the previous point, a JUnit test should test the public API of one class in isolation. This means all services/methods used from other classes must be mocked. This of course excludes data transfer objects and other POJOs.

Unit tests are isolated and on class-level.

We have one test class that corresponds to the class under test, and one or more test methods for each public method in that class. Really straightforward and well supported by IDEs.

Test methods are as small as possible and well structured

One test method should test one specific behavior of your API. If you need to test behavior in error situations, write a separate test method for it. If you need to test certain boundary cases, write a separate test method for each of them. The advantage of this approach is not only that the test code becomes much more readable, but also that in case of a failing test the problem can be located immediately.

Break down the test-methods into preparation, execution and verification. Or simply Given/When/Then :-). The following code-snippet shows an example.

@Test
public void getItemNameUpperCase() {
 
    //
    // Given
    //
    Item mockedItem = new Item("it1", "Item 1", "This is item 1", 2000, true);
    when(itemRepository.findById("it1")).thenReturn(mockedItem);
 
    //
    // When
    //
    String result = itemService.getItemNameUpperCase("it1");
 
    //
    // Then
    //
    verify(itemRepository, times(1)).findById("it1");
    assertThat(result, is("ITEM 1"));
}

Especially if more complex functionality is tested where more preparation – probably through mocking – and more verification is required this kind of source code documentation is really helpful. Agree on a common style for this inside the project team.

Following these basic principles for the implementation of JUnit tests should already help a lot in daily project work. And of course, if pair programming or a review process is used for feature development, the same should be applied when writing JUnit tests.

The post (J)Unit Testing Principles appeared first on codecentric AG Blog.

Prevent acceptance tests becoming a time sink


In an average IT project, something like acceptance testing comes up sooner or later. Which is a good thing, because we want to be sure that the functionality provided by the software actually works. So we write acceptance tests and show the results on a dashboard. Most people agree that acceptance tests are critical for providing resilient software. But people also tend to agree that acceptance tests are expensive: they take some time to run (10+ minutes in bigger projects) and they take extra time to create. This is time not spent on the actual creation of functionality. So we need them, but we need to be careful.

A whole different problem is software not providing the expected behavior or functionality. This is something Acceptance Test Driven Development (ATDD) tries to solve. ATDD originated from the test-driven movement, although Kent Beck in 2003 thought it was impractical. ATDD still gained momentum, and it has its benefits: by defining the tests before actually building the software, ATDD provides more clarity about what needs to be created.

Other benefits of ATDD are:

  • You know when and if functionality is provided without manual testing.
  • Forces careful thinking about the functionality.

And of course there is a drawback:

  • You need to invest more time before creating the functionality.

Maybe there are more drawbacks to ATDD; I do know that acceptance tests themselves have some. Still, it makes sense to write your acceptance tests before starting the actual coding – maybe not for the small and simple things, but definitely for the large and complex ones.

Implementing the code for running the test descriptions should take as little time as possible. We want to implement it before the functionality, so that we see a red bar first. For this we use tools that translate these descriptions. The descriptions need to be readable by the tool, but we would like to be as free as possible in writing them. Often the syntax used for these descriptions consists of sentences starting with Given, When and Then, which originates from the Behavior Driven Development (BDD) approach invented by Dan North and Chris Matts.

Besides giving us freedom in the way we write our tests, a test framework should support us as much as possible in writing tests quickly. To me that means the following:

  • Not a lot of coding needed before a test runs.
  • IDE should support my preferred test description.
  • I can generate some code based on the test description.
  • The tool should run the tests in a convenient way.
  • Not a lot of boilerplate code needed for the setup.
  • I can get support from a community.
  • I can see the internals and improve on it (Open source).
  • I can integrate the tool in a build pipeline.
  • The tool provides libraries, or integration with libraries, that can test a certain UI, API or data

This is quite a list of capabilities for a tool. A small team at codecentric, including me, wanted to know whether there are any frameworks available that allow us to write tests faster and thus prevent headaches. The following acceptance test frameworks score highly on the capabilities I mentioned.

  • Cucumber
  • Robot Framework
  • JBehave
  • Gauge
  • Concordion

Although we tried to look at all the acceptance test frameworks briefly, we probably missed some. Cucumber is part of the list, and I already use it a lot. I am more curious about the other frameworks, which might allow me to write tests faster.

Robot Framework looked very promising, and I studied it in more detail. Concordion, Gauge and JBehave are very nice frameworks, too, but we only looked at them briefly because of time constraints.

I really like the Robot Framework: its initial setup is quite easy using Java and Maven. This is how a simple Maven setup looks:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
 
	<groupId>nl.cc.dev</groupId>
	<artifactId>RobotATDD</artifactId>
	<version>1.0-SNAPSHOT</version>
 
	<dependencies>
		<dependency>
			<groupId>com.github.markusbernhardt</groupId>
			<artifactId>robotframework-selenium2library-java</artifactId>
			<version>1.4.0.8</version>
			<scope>test</scope>
		</dependency>
	</dependencies>
 
	<build>
		<plugins>
			<plugin>
				<groupId>org.robotframework</groupId>
				<artifactId>robotframework-maven-plugin</artifactId>
				<version>1.4.6</version>
				<executions>
					<execution>
						<goals>
							<goal>run</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

This is an overview of the test project in my IDE:

[Image: ide]

The calculatePage.robot file is a test description for a web page with a calculator; it should reside in the directory robotframework. FancyLib.java contains a class with methods that can be called by the tests – a sketch of what such a class could look like follows below. You can run the tests with the command 'mvn verify'.
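To give an impression, such a keyword library could look roughly like the following – a hypothetical sketch, since the real FancyLib.java lives in the test project; with Robot Framework, every public method of the class becomes a keyword usable in the .robot files:

public class FancyLib {

    // tell Robot Framework to keep one instance of this library for the whole test run
    public static final String ROBOT_LIBRARY_SCOPE = "GLOBAL";

    // usable as the keyword "Numbers Should Add Up" inside .robot test cases
    public void numbersShouldAddUp(int first, int second, int expectedSum) {
        if (first + second != expectedSum) {
            throw new AssertionError(first + " + " + second + " is not " + expectedSum);
        }
    }
}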

The test cases in calculatePage.robot can look like this:

[Image: test_cases2]

These tests are quite readable, I think (sorry about the boasting), but I would still like the ability to leave out the settings and only show the test cases.
Another big help is the large number of test libraries available for use in Robot Framework tests. This is only a small selection:

  • Selenium
  • Android/IOS
  • SSH/SFTP
  • Windows GUI
  • MQTT

More libraries can be found at the Robot Framework site. Other people at codecentric have already written a lot about the Robot Framework, so if you want to know more I really recommend reading their posts.

Less time wasted on acceptance testing is not only about using great tools – it is also about knowing what to test and what not to test. I get the idea of trying to test every part of the software end-to-end, and for some critical software it is even demanded. But often resources are scarce, and the certainty provided by full ATDD coverage does not really justify the cost.

A lot of acceptance tests also does not mean integration and unit tests can be neglected. An anti-pattern for testing is reversing the well-known test pyramid, turning it into an ice cream cone. The problem with the ice cream cone is that acceptance tests are not well suited for testing negative paths. Say service X fails because writing to a file fails, and we want specific logging in that case: in an integration or unit test this is easy to verify, while in an acceptance test it is much more challenging. Feedback from an acceptance test is also less useful for a developer trying to fix a bug. Which brings us to the fact that acceptance tests are more fragile than unit tests, because they depend heavily on the environment.
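A hedged sketch of such a negative-path unit test – all names here (FileStore, ServiceX, ServiceXProcessingException) are hypothetical – where the failing collaborator is simply simulated with a mock, something an acceptance test running through the UI can hardly do:

import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;

import java.io.IOException;
import org.junit.Test;

public class ServiceXTest {

    private final FileStore fileStore = mock(FileStore.class);
    private final ServiceX serviceX = new ServiceX(fileStore);

    @Test(expected = ServiceXProcessingException.class)
    public void translatesFailedFileWriteIntoMeaningfulException() throws IOException {
        // simulate the broken infrastructure: writing to the file store fails
        doThrow(new IOException("disk full")).when(fileStore).write("some payload");

        serviceX.process("some payload");
    }
}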

[Image: Icecream Cone to Test Pyramid]

Speaking of the ice cream cone: unit testing of the frontend – which may seem redundant when you already have acceptance tests validating through the UI – should not be ignored either.

So, to prevent acceptance tests from becoming a black hole for time, don't go for full coverage but focus on the most important functionality. Take some time to choose the best framework. Be aware of how much time you spend writing and running acceptance tests, and try to use ATDD – it will likely improve the whole development process.

The post Prevent acceptance tests becoming a time sink appeared first on codecentric AG Blog.

Legacy SOAP API integration with Java, AWS Lambda and AWS API Gateway


Introduction

Once you have decided to migrate your infrastructure to AWS, the migration process is usually not executed all at once. Instead there will most likely be a transition period in which new and legacy infrastructure have to coexist and communicate with each other. In this transition period the existing systems are gradually migrated to the cloud environment. However, sooner or later you might run into compatibility problems, because a legacy system cannot be incorporated into the cloud (for whatever reason) or refuses to communicate with modern API interfaces. There could, for instance, be clients which can have their API endpoint configured, but cannot be changed with regard to the message format they send to this endpoint. For this kind of purpose the AWS API Gateway offers several options to integrate incoming requests and outgoing responses into the cloud infrastructure.

In this article I want to show a basic solution of how to integrate requests with the AWS API Gateway and AWS Lambda using the example of a SOAP request.

Prerequisites

A basic understanding of the AWS platform as well as an AWS account are required. Also you should be familiar with Java and Maven. The full sample code, used in this article, can be found on GitHub.

The Plan

We will create an AWS API Gateway resource, which receives and processes a SOAP message and returns a SOAP message as response. In order to achieve this, we implement a Java Lambda function, which is configured as an integration point in the method execution of our resource. The API Gateway is in turn responsible for mapping the incoming request and the outgoing response to corresponding content types.

[Image: integration_message_flow]

Let’s start with setting up the Lambda function.

Set up Lambda

We start with a Java 8 implementation of the RequestHandler interface provided by the AWS Java SDK. Because Lambdas are only able to process JSON, the API Gateway has to map the incoming SOAP request correspondingly (I elaborate on this point in the “Integration Request” section of this article). To process the mapped request we create a Wrapper class, which can be instantiated with the JSON String. This wrapper object contains the original XML within a String field and can be handed to the RequestHandler implementation for processing.

Include libraries

We create a Java 8 Maven project and add the following dependencies to the pom.xml:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.1.0</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-events</artifactId>
    <version>1.3.0</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-log4j</artifactId>
    <version>1.0.0</version>
</dependency>

Please note that in most applications the “full” AWS SDK is added to implement all kinds of use cases. But as we want to keep the Lambda function as compact as possible, we only include the minimum set of dependencies required for the execution of the function.

Create Wrapper Class

The SoapWrapper class is a simple POJO, containing the XML request / response as a String:

public class SoapWrapper {

    private String body;

    public SoapWrapper() {}

    public SoapWrapper(String body) {
        this.body = body;
    }

    public void setBody(String body) {
        this.body = body;
    }

    public String getBody() {
        return body;
    }
// ...
}

Implement Request Handler

The implementation of the RequestHandler interface expects a SoapWrapper object as input and returns a SoapWrapper object as response. The AWS Lambda execution environment will take care of the JSON serialization and deserialization for us, as long as the respective class offers a default constructor and setters for the fields.

public class ApiRequestHandler implements RequestHandler<SoapWrapper, SoapWrapper> {

    @Override
    public SoapWrapper handleRequest(SoapWrapper request, Context context) {
        // ...
    }
}

To verify that the SoapWrapper object works as intended, we parse the String content of the body field into a Java SOAPMessage. Afterwards we return a hard-coded SOAPMessage as response to test the end-to-end scenario. Feel free to take a look at the code in the sample project on GitHub for further reference.
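Just to give an impression, the handler could look roughly like this – a sketch, not the sample project's exact code; the hard-coded response envelope is merely a placeholder:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPMessage;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class ApiRequestHandler implements RequestHandler<SoapWrapper, SoapWrapper> {

    @Override
    public SoapWrapper handleRequest(SoapWrapper request, Context context) {
        try {
            // parse the wrapped XML String into a SOAPMessage to verify it is valid SOAP
            MessageFactory messageFactory = MessageFactory.newInstance();
            SOAPMessage soapMessage = messageFactory.createMessage(null,
                    new ByteArrayInputStream(request.getBody().getBytes(StandardCharsets.UTF_8)));
            context.getLogger().log("Received SOAP body: " + soapMessage.getSOAPBody().getTextContent());

            // return a hard-coded SOAP response to test the end-to-end scenario
            return new SoapWrapper("<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                    + "<SOAP-ENV:Header/><SOAP-ENV:Body>It works!</SOAP-ENV:Body></SOAP-ENV:Envelope>");
        } catch (SOAPException | IOException e) {
            throw new RuntimeException("Could not process SOAP message", e);
        }
    }
}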

Package Lambda

Java Lambdas need all classes that are required for the execution of the program in a single jar file. Hence Maven has to package these classes into a so-called “fat jar” which comprises all necessary runtime dependencies. This can easily be achieved by including the shade plugin in the pom.xml:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <configuration>
        <createDependencyReducedPom>false</createDependencyReducedPom>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Finally we create our jar file with mvn clean package.

Configure and deploy Lambda

To configure and deploy the Lambda function, log into the AWS console and go to the Lambda service:

  1. Hit “Create a Lambda function”
  2. Select the “Blank Function” blueprint
  3. Skip the “Configure triggers” section with “Next”
  4. Provide a meaningful name for the function
  5. Select “Java 8” as Runtime
  6. For the code entry type select “Upload a .ZIP or .JAR file” and upload the previously created fat jar. The maven shade plugin actually creates two jar files, so make sure to select the one without the “original-” prefix. Amazon recommends that packages larger than 10 MB should be uploaded to AWS S3. Java Lambdas almost always exceed this threshold, but for the time being upload the jar file manually
  7. Afterwards provide the handler, which is the fully qualified name of the class implementing the RequestHandler interface (e.g. de.codecentric.lambda.ApiRequestHandler)
  8. Role: Depending on what the Lambda function should do, it needs the appropriate rights to do so. Basic execution is sufficient for our purpose, hence select “Create a custom role”. Click on “Allow” in the following IAM service window
  9. Finally leave the “Advanced Settings” section untouched and proceed with “Next” to review the input

Test Lambda

Now that we have deployed our RequestHandler implementation, we can test the execution with a simple JSON document (containing an escaped SOAP XML), which we paste directly into the editor on the AWS website. Select the Lambda function in the AWS Lambda service and click on “Actions”, “Configure test event”, enter the following and hit “Save and test”:

{
  "body": "<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:codecentric=\"https://www.codecentric.de\"><SOAP-ENV:Header/><SOAP-ENV:Body><codecentric:location><codecentric:place>Berlin</codecentric:place></codecentric:location></SOAP-ENV:Body></SOAP-ENV:Envelope>"
}

A successful test should not have raised exceptions, and we should see the incoming request as log output in the AWS CloudWatch log files. If the Lambda function works as intended, we can proceed to set up the API Gateway.

Set up API Gateway

Using the AWS Management Console for the API Gateway, we are able to set up our SOAP Wrapper API within minutes. We just have to keep in mind to map the incoming request content, which is XML, to JSON (as the Lambda function only speaks JSON). Conversely we map the outgoing response content to XML, in order to emulate an actual SOAP response. This can be done with an Integration Request and an Integration Response within the AWS API Gateway, respectively. We define a content type and a mapping template in each of these method execution steps to process the body of the request / response. Within the mapping template we can modify the content of a request / response with Velocity.

Create API, Resource and Method

  1. Go to the API Gateway service and click “Create API”
  2. Select “New API”, input a name (e.g. “soapApi”) and hit “Create API”
  3. Select the API, push the “Actions” button, select “Create Resource”, provide a resource name (e.g. “legacy”) and hit “Create Resource”
  4. Select the resource, hit “Actions” again, select “Create Method” and then “POST”. Confirm
  5. Wire the lambda function with the API in the following window: Select “Lambda function” integration type, specify Region and function name, then hit “Save”
  6. Confirm the permission request for the API Gateway in the following window

After the API has been successfully created, we can see the visualized “Method Execution” when we select our POST method:

[Image: method_execution]

Integration Request

In the “Method Execution”, click on the “Integration Request” and open the “Body mapping Templates” section. Select “Add mapping template” and type in “text/xml”. Then simply “jsonify” the whole request with the following Velocity snippet:

{
   "body" : $input.json('$')
}

As the SoapWrapper Java class expects a single JSON element “body”, we define the JSON object accordingly. Because the Java SOAP library sends requests with text/xml as content type, we declare that type here as well. Depending on the migration scenario and on the tools used to execute the request, it might be necessary to adjust the content type accordingly. Furthermore, depending on the selected “body passthrough” option, the API Gateway either rejects requests not matching the content type or passes them through “as is”. Having finished the Integration Request, the Lambda function should already be able to receive SOAP messages from the API Gateway. Finally, we take care of the response.

[Image: integration_request]

Integration Response

The Lambda function so far delivers a SoapWrapper object as JSON. Yet, what we actually need is XML. Hence we map the response to the respective content type and message body. For that purpose click on “Integration Response” in the “Method Execution”, unfold the existing response and the “Body Template” section. In the ensuing step, change the content type from application/json to application/xml and return the body of the SoapWrapper response (which contains the XML as String) with the following Velocity snippet:

#set($inputRoot = $input.path('$'))
<?xml version="1.0" encoding="UTF-8"?>
$inputRoot.body

[Image: integration_response]

Method Response

For the finishing touch of our response, we define a “Method Response” for the HTTP status code 200 with application/soap+xml as content type:

[Image: method_response]

Deploy API

In order to test our created resource, we deploy the API to an arbitrary deployment stage, e.g. “test”. To do so, simply select the API, hit “Actions” and “Deploy API”. We receive an endpoint URL after the deployment, which can be used in the next step to test the interaction of API Gateway and Lambda function.

Test interaction of API and Lambda

The project on GitHub provides an integration test (WrapperApiEndpointIntegrationTest), which sends a POST request to the specified endpoint URL (the one we received in the preceding “Deploy API” step). Of course we should also be able to test with any software capable of sending a POST request and receiving a response.
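A rough sketch of such a test – the endpoint URL is a placeholder for the one received during deployment, and the minimal request envelope is just for illustration:

import static org.junit.Assert.assertEquals;

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import org.junit.Test;

public class WrapperApiEndpointIntegrationTest {

    // placeholder – use the invoke URL received in the "Deploy API" step
    private static final String ENDPOINT_URL = "https://<api-id>.execute-api.<region>.amazonaws.com/test/legacy";

    @Test
    public void shouldReceiveASoapResponse() throws Exception {
        byte[] soapRequest = ("<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                + "<SOAP-ENV:Header/><SOAP-ENV:Body/></SOAP-ENV:Envelope>").getBytes(StandardCharsets.UTF_8);

        HttpURLConnection connection = (HttpURLConnection) new URL(ENDPOINT_URL).openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "text/xml");
        connection.setDoOutput(true);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(soapRequest);
        }

        assertEquals(200, connection.getResponseCode());
    }
}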

Conclusion

While SOAP is no longer supported on the AWS API Gateway, you can still include legacy SOAP requests in your new shiny cloud infrastructure, at least for a transition period. Of course the “SOAP legacy” resource requires further development; e.g. we did not go into security: it is mandatory to put some thought into authentication and authorization. Also, your legacy system might need SOAP headers or other parameters which have to be included in your request. Furthermore, we are lacking a WSDL file to describe our API. It is also worth mentioning that your AWS infrastructure probably resides within a VPC network, in which case you might need further configuration in terms of AWS networking. And it stands to question whether Java should be the programming language of choice for this kind of purpose: if you have infrequent, unpredictable and spiky API calls and the function's runtime is rather short, a programming language with less ramp-up time could be the better option. But this also depends on the specific purpose of the API call and on which libraries are needed to process the request at hand.

Obviously, Integration Requests and Responses are not limited to SOAP. With Velocity you can map a vast number of requests and responses to all kinds of formats and spin up an API within minutes.

The post Legacy SOAP API integration with Java, AWS Lambda and AWS API Gateway appeared first on codecentric AG Blog.

Running Spring Boot Apps on Windows with Ansible


There are times when you have to use a Windows box instead of your accustomed Linux machine to run your Spring Boot app on. Maybe you have to call some native libraries that rely on an underlying Windows OS, or there's some other reason. But using the same Continuous Integration (CI) tools we are used to should be non-negotiable!

Windows? No problem, but not without beloved Ansible!

No matter how – it's fine if we have to use Windows to run our app on. But we shouldn't accept being forced to give up our principles of modern software development, like Continuous Integration (CI) and Deployment (CD) or the automation of recurring tasks such as setting up servers and bringing our apps to life on them.

In our current CI pipeline we have Jenkins building & testing our Spring Boot apps and use Ansible to provision our (Linux) machines, so that we can deploy and run our apps on them. Why not just do the same with those Windows boxes?

Sounds like a dream? Ansible was this Unix/SSH thing, right?! How could that work with Windows? And our Jenkins runs on Linux – how should that be capable of managing Windows environments?

[Image: windows_is_coming]

Well, it's possible, and there is a way to use Ansible here 🙂 From version 1.7 on, Ansible also supports managing Windows machines! Instead of SSH, Ansible does this with the help of native PowerShell remoting (and Windows Remote Management, WinRM), as you can read in the docs.

Do you like the idea? Let´s go ahead and try it out!

Get yourself a Windows (Vagrant) box

First of all we need a Windows box we can do our magic on. So if you don't have one to spare, the Microsoft developer sites have something for you. It was really surprising to me, but there are Vagrant images you can just download! Go to https://developer.microsoft.com/en-us/microsoft-edge/tools/vms and select a virtual machine like Microsoft Edge on Windows 10 Stable (14.xxx) and Vagrant as platform. You'll need some virtualization software running on your machine – my Mac is loaded with VirtualBox, for example. Download the MSEdge.Win10_RS1.Vagrant.zip and extract it. And there you are: the Windows Vagrant box dev-msedge.box is nearly ready.

Because Microsoft doesn't seem to ship metadata for the box, we need to add it via:

vagrant box add dev-msedge.box --name "windows10"

The last things we need are an installed Vagrant and a Vagrantfile, which is already prepared inside this blog post's corresponding example project on GitHub:

Vagrant.configure("2") do |config|
  config.vm.box = "windows10"
  config.vm.guest = :windows
 
  # Configure Vagrant to use WinRM instead of SSH
  config.vm.communicator = "winrm"
 
  # Configure WinRM Connectivity
  config.winrm.username = "IEUser"
  config.winrm.password = "Passw0rd"
 
  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    vb.gui = true
  end
end

Because we use Windows, the Vagrantfile mainly tweaks the Vagrant configuration options to use WinRM instead of SSH. You can read more details in the Vagrant WinRM docs. To fire up a full-blown Windows 10, you only have to clone the repository and run vagrant up. Wait a few seconds and your Windows box should be running:

[Image: windows10_vagrant_box]

There's only one thing that can cause the vagrant up to run into a "Timed out while waiting for the machine to boot […]". This is because Microsoft sadly doesn't configure the Network List Management Policies in a way that Windows Remote Management (WinRM) can work together with Vagrant completely frictionlessly. To solve this we need to manually go into Local Security Policy / Network List Management Policies (after the Windows box is up and running), double-click on Network, go to the tab Network Location and set the Location type to private and the User permissions to User can change location. Having made these changes, our vagrant up will work like a charm 🙂

Making Windows Ansible ready

There are some steps needed to prepare our Windows box so that it will smoothly work together with Ansible. These steps are quite dependent on the Windows version.

If you're going with a current version like the one from developer.microsoft.com mentioned above, there's not much to do here. Because PowerShell 3.0 or higher is a requirement for Ansible and Windows 10 ships with 5.0 out of the box, we just need to run this script to configure remoting for Ansible on our Windows box. The easiest way to do this is to use the iwr and iex PowerShell commands:

iwr https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1 -UseBasicParsing | iex

Older versions could lead to a more complex setup process. Sometimes you need to allow script execution in general by running a command like the following (this is one of many possible solutions and it's the "just make it work" way 😉 ):

Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser

Or there's no PowerShell 3.0 or higher (see the list of powershell-versions-and-their-windows-version). To see your current PowerShell version, run:

get-host

To upgrade to PowerShell 3.0, there's also a script for you.

We also need to know some kind of credentials for our Windows box. You could do it with Kerberos as well, but you should be warned that the configuration is rather complex. I would advise going with the good old administrative account, which is the second possible option besides Kerberos to connect to the Windows box using Ansible. Looking at our Microsoft Edge on Windows 10 Vagrant box, the installation instructions tell us the needed secrets (IEUser & Passw0rd!).

Testdrive the Ansible connectivity

That's nearly everything we need to test-drive the Ansible connectivity of our Windows box. To give it a try we need an Ansible playbook we can execute, a hostsfile containing the IP of our Windows box (which is localhost in our case) and a correct Ansible configuration. Our GitHub repository provides a working configuration example:

ansible_user: IEUser
ansible_password: Passw0rd!
ansible_port: 55986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
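
For reference, a minimal hostsfile to go with it could look like this sketch – the group name restexample-windows-dev is the one used in the ansible commands below, and 127.0.0.1 is our local Vagrant box:

[restexample-windows-dev]
127.0.0.1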

Together with our running and Ansible-ready Windows 10 Vagrant box, we can run our first connectivity test. A good method here is to use the win_ping module, which is one of the many Ansible Windows modules. Just be sure to have the latest Ansible release installed on your Linux or Mac machine. At the time of writing this article, that is 2.2.0.0, which I installed via pip from the PyPI. Installing Ansible on Windows is another story… To start the test, type the following command:

ansible restexample-windows-dev -i hostsfile -m win_ping

If you get a SUCCESS, everything is fine:

127.0.0.1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

But if you get an UNREACHABLE!, it could be a real challenge and lead to a lot of work before things run smoothly. For me it helped to double- or triple-check the credentials (try to log off and on to your Windows box again!). If you get timeouts, check if you did all the steps described in Making Windows Ansible ready above.

Have a runnable & buildable Spring Boot App in place

This is the easy part – but if you want to deploy a Spring Boot app, you have to have a working example in place, right!?! 🙂 You could create one in a few minutes yourself (e.g. with Spring Initializr or the awesome Spring starter guides), take something existing you already built, or just take the example project used for this blog post (an extremely simple REST service app).

Whichever way you choose: be sure to have a working build in place that is able to produce a runnable Spring Boot jar file. In our example project restexamples you get the needed restexamples-0.0.1-SNAPSHOT.jar by running:

mvn clean package

Suggestions for Windows-ready Ansible playbooks

Before you start to craft your first Ansible playbook to provision a Windows box, let me give you some points to take along. If you have some Ansible experience with managing Unix-like machines, you may not be aware of these things in the first place:

Update to the latest Ansible version. Ansible's Windows support is getting better with every release. Many of the Windows modules only work with the latest Ansible version – so be sure to have that installed!

Don't expect the Ansible Windows docs to have the same quantity or quality you are used to. I don't want to offend anybody here and great work is done by the Ansible team! But working with Ansible on Windows is not only the high-gloss experience you are used to. There are times you have to try 3-5 different modules till you finally have a working solution for your problem.

ALWAYS escape a backslash in paths with a leading backslash. If you have a path like C:\temp you should place something like this in your playbook:

"C:\\temp"

Don't assume that paths with C:\Program Files (x86)\XYZ will work. Especially in our case this is quite important, as we need a Java Runtime Environment (JRE) to fire up our Java application. If you use the installed one, try to use alternative paths instead, like this one that Oracle places after a successful JRE installation:

"C:\\\ProgramData\\\Oracle\\\Java\\\javapath\\\java.exe"

A complete example

This blog post's example project already ships with a complete Ansible playbook that shows how to provision a Windows box so that we can deploy and run a Spring Boot app on it. Let's have a more detailed look at it!

First of all, we prepare the Windows box to handle our new deployment:

- hosts: "{{host}}"
  vars:
    spring_boot_app_path: "C:\\spring-boot\\{{spring_boot_app_name}}"
    path_to_java_exe: "C:\\ProgramData\\Oracle\\Java\\javapath\\java.exe"
 
  tasks:
  - name: Create directory C:\spring-boot\spring_boot_app_name, if not there
    win_file: path={{spring_boot_app_path}} state=directory
 
  - name: Install nssm (non-sucking service manager) via chocolatey
    win_chocolatey:
      name: nssm

After defining some paths needed later, we create a directory to deploy our Spring Boot app to and install the Non-Sucking Service Manager (nssm) with the help of the Windows package manager chocolatey – both really helpful in the context of working with Windows boxes.

The latter brings the missing capability of package management to Windows, which you already love on your Linux or Mac (with brew) machines. And nssm will give us the power to run our Spring Boot app as a real Windows service with all the benefits like auto-restarting after reboots. Having gone through several experiments with many possible solutions, I discovered this as a quite elegant way to manage Spring Boot apps on Windows.

The next steps are quite interesting and yet not really intuitive. It took me some time to figure them out, and there are a few points we should discuss afterwards:

  - name: Stop Spring Boot service, if there - so we can extract JRE & other necessary files without Windows file handle problems
    win_service:
      name: "{{spring_boot_app_name}}"
      state: stopped
    ignore_errors: yes
 
  - name: Install Java Runtime Environment (JRE) 8 via chocolatey
    win_chocolatey:
      name: jre8
 
  - name: Copy Spring Boot app´s jar-File to directory C:\spring-boot\spring_boot_app_name
    win_copy:
      src: "{{spring_boot_app_jar}}"
      dest: "{{spring_boot_app_path}}\\{{spring_boot_app_name}}.jar"

The first thing here is to stop the service that manages our Spring Boot app. Well – that's kind of weird, I hear you saying. It is, but it has nothing to do with the first execution of our playbook – only with all the following ones, beginning with the second.

Because Windows has this "fine feature" called sharing violation error: if a running process has a handle on a file or directory, Windows won't let you change or delete it. But that's exactly what we want to do: we want to be able to update the used JRE or other files that our Spring Boot app needs to run fine. So here's a piece of advice you won't find everywhere: always stop your processes or services before taking any further action!

I mentioned the first run of our script – it would break if the service did not exist yet. Therefore we use a really nice Ansible feature – we just ignore errors with the help of ignore_errors: yes. Now the service is stopped if it is already installed, preventing sharing violation errors, or the win_service module raises an error – which is ignored if there was no service installed before.

Now we are able to download and extract the necessary Java Runtime Environment, or rather just install the jre8 package with the help of chocolatey. In the third step we deploy the pre-built Spring Boot app as a jar into our previously created directory.

Install & configure the Windows service

We finally reached the point where we could install our Spring Boot app as Windows service:

  - name: Install Spring Boot app as Windows service (via nssm), if not already there - but remain stopped to configure Application directory
    win_nssm:
      name: "{{spring_boot_app_name}}"
      application: "{{path_to_java_exe}}"
      app_parameters:
          "-jar": "{{spring_boot_app_path}}\\{{spring_boot_app_name}}.jar"
      state: stopped
 
  - name: Set the Application path for the Spring Boot app to the folder where the needed native libraries reside
    raw: nssm set {{spring_boot_app_name}} AppDirectory {{spring_boot_app_path}}
 
  - name: Fire up Spring Boot app Windows service
    win_service:
      name: "{{spring_boot_app_name}}"
      state: restarted

The first thing here is to define the Windows service with the help of the win_nssm module. We provide the path to the java.exe as the application option and the -jar spring-boot-app.jar as app_parameters. The state is only stopped for the moment, because we want to configure another nssm service option first.

The nssm service option AppDirectory can be really important if your application needs native libraries like dll files in the same directory as your jar file. The crucial nssm option could be configured manually via an nssm edit servicename, which brings up something like this:

[Image: nssm_startup_directory]

But we need to change the value of Startup Directory within our Ansible script. Because the win_nssm module sadly doesn't provide a configuration option for it, we need to rely on the raw module. With the help of nssm set servicename AppDirectory path we are able to do the trick.

Using win_service, we can now safely fire up our Spring Boot app as a Windows service 🙂 So let's get our hands dirty and run our Ansible playbook! Just make sure your Windows box is running and prepared for Ansible:

ansible-playbook -i hostsfile restexample-windows.yml --extra-vars "spring_boot_app_jar=../restexamples/target/restexamples-0.0.1-SNAPSHOT.jar spring_boot_app_name=restexample-springboot host=restexample-windows-dev"

The script should produce an output like this:

[Image: running_ansible_playbook_windows]

Smoketest

As you may have noticed, we didn't discuss the script's last step. Let's have a look into the restexample-windows.yml:

  - name: Wait until our Spring Boot app is up & running
    win_uri:
      url: "http://localhost:8080/swagger-ui.html"
      method: GET
    register: result
    until: result.status_code == 200
    retries: 5
    delay: 5

As a final step it is good practice to check if our Spring Boot app is up and running. This can be achieved with the win_uri module. The test only works with something we can do some kind of meaningful HTTP GET on. Therefore the small example application leverages the power of SpringFox, which generates JSON API documentation and provides a small web app we can do a GET on – you can try it yourself on http://localhost:8080/swagger-ui.html. So if the SpringFox-generated page is up and running (and returns an HTTP status code 200), we assume our Spring Boot app is working as expected.

Endless possibilities…

Now we are able to deploy our Spring Boot apps to Windows boxes – and run more complex scenarios on the Microsoft machines. How about a good old SOAP service, based on Spring Boot and deployed 10 times in parallel – each instance with a separate port? Or any other app you'd like to run!

The possibilities are endless. I would be really pleased to hear about your deployment scenarios with Ansible and Windows 🙂

The post Running Spring Boot Apps on Windows with Ansible appeared first on codecentric AG Blog.

6 reasons for native Android development


2016 was yet again a successful year for the mobile device market. The operating systems Android and iOS together reach a market coverage of 99.3%.

Source: http://www.idc.com/promo/smartphone-market-share/os

It sounds promising to develop cross-platform apps and share certain components between them to reduce code duplication. Based on my experience with the Xamarin.Android platform, this post shows a few reasons why this may not be such a good idea after all.

About Xamarin.Android

The concept of the Xamarin.Android platform is promising at first sight. Combining an awesome language like C# with a cross-platform managed runtime (Mono) should enable developers to focus more on producing new features instead of maintaining code for Android and iOS separately.

The fact that C# provides language-level features for some Java and Android specialties (e.g. manifest generation using annotations, implicit casts via generic methods, background threads via the async keyword) and that code can be shared using the Mono runtime seems to make mobile development more efficient for developers.

I think this approach has some disadvantages which drove me to abandon Xamarin.Android and proceed with pure Android development. Although the title says "native development", this post is not a rant against hybrid development, but a plea for vanilla Android, which has become so much easier than it was a few years ago.

Reason #1: The vast ecosystem of native libraries

Since the beginning of Android, the number of libraries which are developed for or enhanced to support Android has grown. Only a small part of that ecosystem is ported to C# and available as NuGet packages for the Xamarin platform.

Reason #2: The direct vendor support by Google

The community and the vendor support around Android are far more extensive than for Xamarin.Android. Bugs can be created and discussed using the Android bug tracker (https://source.android.com/source/life-of-a-bug.html). The newest Android SDK versions are available as soon as Google releases them; there is no need to wait for someone to port them, as is the case with Xamarin.Android.

The well-known discussion about who is responsible for a bug takes place directly between the Android project and the developer; there is no third party like Xamarin involved.

Reason #3: Stackoverflow AKA developer love

Stackoverflow is *the* source for problem solutions in IT. Currently (as of 27.12.2016) there are 931,615 questions tagged with Android and only 18,590 tagged with Xamarin.

Reason #4: Android Studio

Since Google dropped Eclipse, switched to the IntelliJ platform and started offering Android Studio, the IDE has become much better. Although I am using IntelliJ IDEA and the Android plugin is available for it, I am still using Android Studio separately because it is such a good adaptation to the Android developer use case.

My favorite features are:

  • the incredibly awesome layout designer
  • the extensive amount of lint rules for code and layout optimization
  • Instant Run

Reason #5: Tools

The tools coming with the Android SDK are very well integrated into Android Studio. Additionally, Android uses the uniform, transparently described, extensible build system Gradle, which can also use the extensive Maven repositories as a source for third-party libraries. The productivity as a developer is very good, because everything fits together nicely and works well.

Additionally, Android Studio is available for Windows and macOS. Until now, for Xamarin.Android you have to use Xamarin Studio on macOS and Visual Studio on Windows. But this may change in the near future: https://www.visualstudio.com/vs/visual-studio-mac/

Reason #6: Startup time and size of the apps

Apps developed with Xamarin.Android use the Mono runtime to execute the code written in C#. This runtime has to be booted every time the app starts, in contrast to the JVM in Android, which is always running. This also results in increased deployment times while developing, in comparison to Android Studio's Instant Run.

The Mono runtime and parts of Mono.Android have to be bundled with the app, which leads to a bigger application size.

Conclusion

A few of these reasons are of course personal taste, but I think native app development is more cost-effective than is commonly believed. What experience do you have with Xamarin and Android? Share your thoughts and leave a comment!

The post 6 reasons for native Android development appeared first on codecentric AG Blog.

Must-have libraries for Android


There are a few libraries for Android which implement a lot of widely used features and concepts from the well-known Java ecosystem for less powerful devices. Some of them provide the base for my Android technology stack, which I would like to present today.

Android Annotations (http://androidannotations.org)

Android Annotations provides a whole lot of features which really add value for the developer in terms of readability and maintainability. The main features are:

  • Dependency injection
  • Event handling
  • Simple threading
  • Consuming REST APIs

Android Annotations uses APT and generates optimized classes at compile time. This was a design choice to reduce the launch time at startup (or rather, not increase it) and prevent sluggish runtime behavior. It's simple to include in your build.gradle files (both the app's build.gradle and the project-level build.gradle):

buildscript {
	// …
	dependencies {
		classpath 'com.android.tools.build:gradle:2.2.3'
		classpath 'com.neenbedankt.gradle.plugins:android-apt:1.8'
	}
	// …
}

build.gradle (Project)

apply plugin: 'android-apt'
 
android {
	// …
}
 
dependencies {
	// …
	apt('org.androidannotations:androidannotations:4.2.0')

	compile('org.androidannotations:androidannotations-api:4.2.0')
}

build.gradle (App)

public class ExampleActivity extends Activity {
	private Button exampleButton;
 
	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.layout_example);
		exampleButton = (Button) findViewById(R.id.exampleButton);
		exampleButton.setOnClickListener(new OnClickListener() {
			@Override
			public void onClick(View view) {
				// do something
			}
		});
	}
}

Vanilla Android

@EActivity(R.layout.layout_example)
public class ExampleActivity extends Activity {
	@ViewById
	Button exampleButton;
 
	@Click
	void exampleButtonWasClicked() {
		// do something
	}
}

Android with Android Annotations

If Android Annotations provides too many features which are not used (or for those out there who don't like annotation preprocessing), one can use a combination of Butterknife (view injection, http://jakewharton.github.io/butterknife), Dagger (dependency injection, https://google.github.io/dagger) and Retrofit (REST client, https://square.github.io/retrofit).
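
For comparison, a rough sketch of what the view and click handling from the snippets above could look like with Butterknife (using its @BindView and @OnClick annotations; class and ids are the ones from the previous examples):

public class ExampleActivity extends Activity {
	@BindView(R.id.exampleButton)
	Button exampleButton;

	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.layout_example);
		// binds all @BindView fields and @OnClick methods of this activity
		ButterKnife.bind(this);
	}

	@OnClick(R.id.exampleButton)
	void exampleButtonWasClicked() {
		// do something
	}
}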

EventBus (http://greenrobot.org/eventbus)

To decouple an Activity or Fragment from the business logic, it may be worth having a look at the publish/subscribe pattern and an established library called EventBus from greenrobot:

apply plugin: 'android-apt'
 
android {
	// …
}
 
dependencies {
	// …
	compile('org.greenrobot:eventbus:3.0.0')
}

build.gradle (App)

public class ExampleActivity extends Activity {
    protected final EventBus eventBus = EventBus.getDefault();
 
    @Override
    protected void onStart() {
        super.onStart();
        eventBus.register(this);
    }
 
    @Override
    protected void onStop() {
        super.onStop();
        eventBus.unregister(this);
    }
}

Other steps, like publishing an event and subscribing to it, can be found in the EventBus documentation on GitHub.
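
To give a rough idea of those steps, a minimal sketch could look like this – MessageEvent is a hypothetical event class of our own:

public class MessageEvent {
	public final String message;

	public MessageEvent(String message) {
		this.message = message;
	}
}

// In the subscriber (e.g. the Activity registered above):
@Subscribe
public void onMessageEvent(MessageEvent event) {
	// react to the event, e.g. update the UI
}

// Anywhere in the business logic:
EventBus.getDefault().post(new MessageEvent("Hello EventBus!"));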

IcePick (https://github.com/frankiesardo/icepick)

IcePick reduces the boilerplate code which arises as a result of having to manage instance state in activities and fragments. This is accomplished by means of APT and code generation (remember Android Annotations?).

@EActivity(R.layout.layout_example)
public class ExampleActivity extends Activity {
 
	@State
	String exampleText;
 
	@ViewById
	Button exampleButton;
 
	@Click
	void exampleButtonWasClicked() {
		// do something
	}
 
	@Override
	protected void onSaveInstanceState(Bundle outState) {
		super.onSaveInstanceState(outState);
		Icepick.saveInstanceState(this, outState);
	}

	@Override
	protected void onRestoreInstanceState(Bundle savedInstanceState) {
		super.onRestoreInstanceState(savedInstanceState);
		Icepick.restoreInstanceState(this, savedInstanceState);
	}
}

The content of exampleText will be restored on all configuration change events (e.g. orientation changes).

LeakCanary (https://github.com/square/leakcanary)

Memory leaks are not a harmless crime! In order to find them you can use the library LeakCanary, which – once it is initialized in the Application implementation – shows a notification when it discovers a memory leak while you are testing the debug build.

public class ExampleApplication extends Application {
 
	@Override
	public void onCreate() {
		super.onCreate();
		if (LeakCanary.isInAnalyzerProcess(this)) {
			// This process is dedicated to LeakCanary for heap analysis.
			// You should not init your app in this process.
			return;
		}
    		LeakCanary.install(this);
  	}
}

Espresso (https://developer.android.com/topic/libraries/testing-support-library/index.html#Espresso)

Espresso is a test framework included in the Android test support libraries. It provides a DSL for automated UI tests. The implemented concepts (based on JUnit, JUnit TestRules and Matchers) are well known to developers, so this framework should be easy to learn. Espresso runs on emulators and real devices.
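
To give an impression of the DSL, here is a minimal sketch of a test for the hypothetical exampleButton from the snippets above, assuming the usual Espresso static imports and a view that displays "done" after the click:

@RunWith(AndroidJUnit4.class)
public class ExampleActivityTest {

	@Rule
	public ActivityTestRule<ExampleActivity> activityRule = new ActivityTestRule<>(ExampleActivity.class);

	@Test
	public void clickingExampleButtonShowsDone() {
		// find the button by its id, click it and verify the expected text is shown
		onView(withId(R.id.exampleButton)).perform(click());
		onView(withText("done")).check(matches(isDisplayed()));
	}
}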

Conclusion

This is just a small, curated list of libraries which focuses on code quality, maintainability and testability.
A few rough edges, which sometimes make Android development so cumbersome, are smoothed over.
Praise the community!

Which libraries are you using? Leave a comment and discuss this article with me.

The post Must-have libraries for Android appeared first on codecentric AG Blog.


Web frameworks and how to survive them


SUMMARY: Frameworks that help build the web apps of tomorrow must keep up with all the powerful new technology on offer. At some point your application has to adapt, and that is never a painless process. You can avoid a total rewrite, however, if you respect the rate of change of web frameworks and don't allow your code to become too tangled up with them.

I loosely borrowed the title for this blog from Families and how to survive them, a self-help book that explains why we keep falling in and out of love, by psychiatrist Robin Skynner and funnyman John Cleese (himself three times divorced). Well, the start of a new year is always a fitting time to take stock of what's new and what's dead or dying. I have finally said goodbye to Google Web Toolkit because for the past several months I have developed something of a love affair with Angular 2/TypeScript and a REST backend with Spring Boot and Kotlin. It's a stack so bleeding edge it doesn't even have an acronym — KoTS? No, scrap that, please! I could well imagine it becoming my toolset of choice for the next couple of years, but don't hold me to it. Web frameworks are the boy bands of software in terms of user loyalty, and I have switched favours before.

Who needs web frameworks anyway?

Remind me why we need web frameworks in the first place? Back in 1999 we didn’t have them. If you wanted to, say, display a comma-separated list of values on a web page, this is what you would write:

#!/usr/bin/perl
print "<html><body><table>";
open my $handle, '<', './data.txt';
chomp(my @lines = <$handle>);
close $handle;
foreach $line (@lines)
{
    my @columns = split(';', $line);
    print "<tr>";
      foreach $column (@columns){
        print "<td>".$column."</td>";
      }
    print "</tr>";
}
print "</table></body></html>";

Brilliant! No dependencies except for a Perl runtime, no compilation, no boilerplate. Just FTP your employees.cgi and data.txt to an executable Apache folder and you're good to go. Copy/paste with minimal changes and you have the true amateur's idea of software re-use. You'll laugh, but the field of budding web development around the turn of the century was truly cowboy territory. Everybody was re-inventing the wheel, writing their own templating language to fill placeholders in a static HTML file from a Perl hash. Move forward five years and suddenly you could do really cool stuff with CSS and Ajax. Sadly there was no standard to speak of. Well, there was, but no major browser really conformed, and when IE8 did its best to be more compliant, all the old sites rife with IE7 hacks broke. In short: cross-platform compatibility was a nightmare.

Compatibility the GWT way

Enter Google Web Toolkit (GWT) in 2006. GWT lets you write client-side code in type-safe Java, which is compiled (some would say transpiled) to JavaScript in a single minified download customised for each combination of browser vendor, version and language. Among other things it offered an RPC mechanism to create client and server endpoints using a pair of related Java interfaces. The framework would take care of (de)serialisation. "Brilliant!" is what I thought in 2011, five years late to the party. GWT's compatibility layer abstracted away most (though not all) browser quirks of the time. I have worked on rich clients (for clients that were even richer) with some sophisticated UI behaviour like drag-and-drop, cancellable file uploads, double-clicking, you name it. It worked fine anywhere you ran it, provided your browser wasn't too far behind.
But all this magic came at a cost. From the beginning GWT was notorious for its long compilation time. Multi-language sites could take more than an hour to build. The vital development mode – to give you the illusion that the browser is actually running Java – was overhauled more than once because it required a plugin that broke with every Firefox update. Nevertheless, I was so hooked on GWT that I made it my selling point for consultancy work and even registered the domain gwtdeveloper.nl. I let it expire. You can register it if you want. I have fallen out of love. Version 2.8 took well-nigh two years after 2.7. If we're lucky we may see a version three before I retire, but I have lost patience already.

I took some time to explain what made GWT so great in the beginning because the compelling reasons to adopt it then are no longer there. To be fair, they tackled many of the shortcomings of the earlier versions, but over the past ten years the world around GWT has also mended its ways: standards compliance and cross-platform compatibility are much, much better than they used to be. The appeal has gone while many of the old drawbacks are only mitigated at best. For myself I can see no benefit anymore in transpiled Java now that I have TypeScript and the excellent support for it in the IntelliJ IDE. The GWT-RPC mechanism is elegant, but it does create a dependency on GWT server-side, whereas a REST endpoint is fully ignorant of what's running client-side. JSON serialisation is handled pretty well by Angular, it's as simple as saying @RestController in Spring, and it makes your server backend much more re-usable for other clients.

Two roads to irrelevance

There are many reasons why perfectly fine (web) frameworks can nevertheless become irrelevant. I'll concentrate on two. One is that the world around them has developed a better or more efficient way of doing things. Web application development has always been a very dynamic playing field. Users want rich, responsive web applications that run on multiple client platforms (PC, tablet, mobile) and the tools desperately try to catch up. GWT made great strides around 2012, but from a user's perspective development seems to have stagnated over the last years. Sure, supporting Java 8 syntax in the JavaScript compiler must have been a real bear, because it took ages, but in the meantime TypeScript came onto the scene with superior lambda syntax. Tough luck.

Paradigm changes are a second and more fundamental reason why some software is ready for the museum. In the early days the web was a bunch of static hyperlinked pages, basically HTML. Then came linked images, user forms and JavaScript. Dynamic manipulation of the DOM and Ajax enabled smooth single-page applications. Now we want multiple versions optimised for PC/tablet, mobile and smartwatch. Early web frameworks were toolsets for doing things the old way, i.e. generating an HTML stream server-side using some template or widget mechanism. Even in 1999 you could argue that it wasn't the best way to build a responsive web app. It just happened to be the only way to do it. That worked fine for a long time, but here's the dreaded car analogy: more energy-efficient petrol engines are irrelevant once we have a viable electric alternative. Trying to improve something that has become outdated or inferior is just silly.

Both forces are at play in web development frameworks: we keep getting new and better technologies and languages (HTML5, CSS3, WebSockets, TypeScript) to build stuff we can't really build comfortably unless the tools that support them adapt, sometimes radically. I sympathise with those who lament that Angular 2 is actually a new framework rather than a version upgrade. I also have time invested in Angular 1, but I found learning Angular 2 well worth the effort.

Well, it seemed like a good idea at the time

Here's what can happen if you bet on the wrong horse. I can recommend Scott Rosenberg's book Dreaming in Code about the brave effort by Mitch Kapor and his team to build an Open Source PIM (anyone remember that acronym?) called Chandler, set to kill Microsoft Outlook. Spoiler: it didn't. Kapor sank millions of his personal capital (he invented Lotus 1-2-3) into the project and learned an expensive lesson in humility. It's tempting to get smug about the team's poor decisions, but choosing a server-less peer-to-peer architecture and a desktop client (one you have to download, double-click and install) was probably the most ill-fated one. It seemed a good idea at the time. It probably was a good idea when they started, but writing the ultimate Outlook killer takes time, and by the time you're finished the world is a different place.

Is there a lesson to be learned from all this? Only that resistance to change is rather futile and (economically) unwise. Web frameworks adapt to the new way of doing things, and if you want to grow your software with them you must be prepared to learn and adopt the new ways. This is not a cheap or effortless process. Switching web frameworks in an existing code base can mean anything from a hefty refactoring to a complete rewrite of your application, not to mention the learning curve for the team and the powers of persuasion required. Converting a code base is one thing, converting the more conservative forces in your team is another. That's why there are still disgruntled hordes tinkering with Struts and JSP monoliths. You're not going to attract the greatest talent if that's what you can offer them. I once worked on a major JSP 'enterprisey' application whose pages were basically dressed-up editing forms for a corresponding Oracle table. The entire architecture of the application was dictated by the Oracle and JSP way of doing things, making code re-use for different clients (e.g. mobile) impossible. Still, there really is one thing worse than having such an architecture, and that is having an ad-hoc, home-baked, non-standard way of doing web apps, like in my Perl example.

Everything changes in web land, but it's safe to assume that HTML, CSS and JavaScript will be with us for a while. Likewise JSON, REST and Spring. Grunt, gulp, yeoman, yarn and any other flavour of gluten-free hipster build frameworks? Don't hold your breath. 'How easy will it be to migrate this to […]' is a very prudent question you should ask yourself regularly when evaluating options. The solution – as you will have guessed – is in minimising and centralising the amount of code that touches your framework library. Be prepared to make liberal use of the Adapter/Facade patterns (see the sketch below) and you'll thank yourself later.
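
To make that last point a bit more tangible, here is a minimal sketch of the facade idea in Java, with a hypothetical JsonMapper interface of our own: only one class knows about the concrete JSON library, so a later framework switch stays a local change.

public interface JsonMapper {
    String toJson(Object value);
    <T> T fromJson(String json, Class<T> type);
}

// The only class in the code base that touches the concrete library (Jackson here):
public final class JacksonJsonMapper implements JsonMapper {
    private final com.fasterxml.jackson.databind.ObjectMapper mapper =
            new com.fasterxml.jackson.databind.ObjectMapper();

    @Override
    public String toJson(Object value) {
        try {
            return mapper.writeValueAsString(value);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public <T> T fromJson(String json, Class<T> type) {
        try {
            return mapper.readValue(json, type);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}

The rest of the application only ever sees JsonMapper; replacing Jackson with another library means writing one new implementation class. I wish you happy learning for 2017.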

The post Web frameworks and how to survive them appeared first on codecentric AG Blog.

My 100th post on the codecentric blog :-)


Today this will be a "slightly different" blog post than usual, because this very post is a personal anniversary: it is my 100th post on our company blog. Thus I thought this cannot be just some "ordinary" post. Some kind of small celebration is definitely required here ;-).

Having the possibility to spend at least 100 (well, probably more) days on writing blog posts is pretty amazing. It is really great that codecentric supports this as one way of utilizing the "+1" time from the 4+1 working model. For me, writing blog posts is one of the best ways of learning new things and digging deeper into technologies used in current projects. Besides that, it is simply something I like to do.

So what do you typically do in case of an anniversary? Look back and make some kind of top-x list! What will I do in this blog post? Look back and make a top-x list :-). For that I selected five blog posts, in no specific order, with some thoughts on why I like those ones the most.

— Robot Framework Tutorial – Mark I —

Well, I really like the Robot Framework for automating acceptance tests. One reason for this might be that I was working at Nokia Networks at the time it was developed there – a long time back. I remember I joined codecentric just in time to promote the Robot Framework over some other acceptance test frameworks evaluated at that time. That test framework will not be mentioned here :).

[Image: robot-framework-1]

This first Robot Framework tutorial was written in 2012 and it still gets a decent amount of hits. This is probably due to the fact that a lot of the basic principles of the framework are very stable, and thus the articles are not really getting old.

— JUnit testing using Mockito and PowerMock —

This is a really nice example of the kind of blog post that is fun to write and at the same time really encourages oneself to dig quite deep into the topic. Surprisingly, it is a rather successful post in terms of visits.

[Image: unit-mock-powermock]

Surprisingly because there are really *really* tons of articles available on the topic, and I was not too sure whether or not writing yet another post on it would do any good. Luckily I decided to write it, and today this post is also acting as a kind of reference for myself when I need a little refresher on the subject matter.

— Robot Framework Tutorial – Mark II —

As the first Robot Framework tutorial was from 2012 and I still liked the tool in 2016, it was time for a little update. Trying to cover new features and different aspects than the first tutorial, this new series already consists of nine posts.

[Image: robot-tutorial-2]

This is probably a good opportunity for some greetings to Pekka Klärck and the team behind the Robot Framework, who might still be taking a look here every now and then. And while writing about it, I just accidentally found a podcast with Pekka on the Robot Framework. So we can even add something new here.

— What Agile Software Development has in common with Sailing —

The colleagues that are with me in projects – or accidentally meet me in the coffee kitchen in Solingen – probably get to know much more about sailing than they would like to ;-). It is the one topic I could talk about all day long – and sometimes I do. That is how, slowly but surely, the idea grew to compare certain aspects of sailing with principles of agile software development.

[Image: sailing-agile]

The result is a post that was fun to write and that hopefully is fun to read, too. In addition, it contains lots of cool sailing videos, which alone makes it worth taking a look.

— Blogging on the online MongoDB Developers Course —

This one was quite different from the other series and it was a bit more stressful ;-). I – more or less spontaneously – decided to blog about an online course provided by MongoDB. The course was online but on fixed dates, and I wanted to write up the post for the previous day of the course before the next one started.

[Image: mongodb-course]

This was really kind of an experiment, but one that I really liked, and some other members of that course did as well. Fun fact: it really took so much time writing down the posts that in the end I failed to get the certificate in the final exam.

What else?

Coming to the end of this anniversary blog post, I am a bit puzzled that all the posts I spontaneously selected are from the years 2012 or 2016. It seems these have been two good years for blogging, and I have the feeling 2017 will be as well :-).

The post My 100th post on the codecentric blog :-) appeared first on codecentric AG Blog.

Integration testing strategies for Spring Boot microservices


SUMMARY: Unit tests are a necessary condition for clean code, but today's convention-over-configuration frameworks like Spring Boot are often used to build applications consisting of multiple services. You need some way of ensuring that the parts are going to fit together and that you are using the framework properly.

Since this is a blog for serious people who are serious about software, I should not have to explain the virtues and importance of solid automated tests, but I will do so anyway. Why should we test, other than for the obvious acknowledgement that we are mere fallible mortals?

  • Tests ensure that changes somewhere in the code do not cause unexpected behaviour elsewhere.
  • They validate that the code behaves as designed.
  • They safeguard good coding principles.
  • They make sure the various pieces of the product fit together.

In short: tests lessen the risk that your product is going to make the customer angry or unhappy.
There are many methods of testing, and not all the aims listed above apply equally (or at all) to each method at our disposal.

Unit tests: solitary and limited by design

In test-driven design (TDD), writing test assumptions should precede writing production code. A most noble principle, although in practice it's likely to proceed more hand in hand. A key benefit of working this way, however, is that it strengthens your understanding of the code under construction, invites you to seek out edge cases and protects you against yourself by making it hard to cut corners, since badly designed code is often hard, if not impossible, to test. A good unit test is the canary in the coal mine of evil coding. Coupled with good mutation testing coverage, it's an invaluable tool to ensure that any change which makes your software produce different output will result in a failed test.

The canonical purpose of a unit test is to focus rigidly on a single piece of code, with every invocation of code external to the unit mocked out, preferably by a mocking framework like Mockito or JMockit. Abstracting away all dependencies may be the correct way to do unit testing, but it also means that these tests in no way guarantee that the product as a whole works as designed. That's okay: they're a necessary though not sufficient condition for code quality.

Modern container environments like Spring Boot can be set up in a convention-over-configuration fashion, providing a fully fledged server application with JPA persistence, web security and JSON serialisation, all with minimal boilerplate code. That's great, but handing off all these responsibilities to the framework makes it even more important to check that you're using the framework properly, none of which can be guaranteed with a unit test. A case in point:

@OneToMany(mappedBy="paremt")
private List children;

The spelling mistake in the mappedBy property will cause a runtime error in the persistence framework.

@RequestMapping(value="/my/unique/path", method = RequestMethod.GET)

If another method in another REST controller tries to use the same /my/unique/path with the GET HTTP verb, the Spring context will fail to start up.

I could go on for a while…

The problem with end-to-end testing

We should test our application as it is actually going to be run in production, with none of the dependencies mocked or stubbed. Fair enough, but in a typical web application this means serving a front-end and back-end, setting up and populating a database, not to mention a whole host of other services external to your team, either already existing or being developed elsewhere. A typical end-to-end test for a web shop would fire up a browser, complete an order in the form (e.g. with Selenium), hit the order button and check that the payment server, warehouse and courier are informed appropriately. Such tests are expensive both in terms of setting up and running.
What's more, they're impossible to build from day one: in a large, distributed team, components will not be developed at the same pace. There may not be a 'PLACE ORDER' button to click on for months after the back-end team has already finished their REST endpoint.

The weather server

Imagine we are building the backend of a shiny new weather app that gives up-to-date meteorological information from a number — say thirty — of weather stations in the Netherlands (observant readers will notice that I used the weather metaphor already in my post about caching). Users can send a city, a postal code or a GPS coordinate through their mobile device to the central server, which delegates the request to the appropriate weather station and returns the result to the user. Results are cached to prevent querying the weather station again within the same minute.
Your mission, should you choose to accept it, is to develop this weather server. For now we'll only support four-digit postal codes and only three weather stations. Another team of more hardware-inclined geeks will hook up a thermometer and hygrometer to an Arduino with 4G connectivity, which will listen to REST requests and send accurate temperature and humidity data. Yet another team of bearded JavaScript hipsters in Amsterdam will build the front-end, but they're out of the equation entirely for now.
An automated end-to-end test is not going to happen, since the measurement of environmental data is beyond the scope of a software test. Anyway, the weather station device is nowhere near completion. We would, however, like to test our weather server beyond the simple unit test.

Integration testing with SpringRunner

With integration testing in Spring you can step beyond the shortcomings of the unit test long before the components of the product are ready for an end-to-end test. In fact, integration tests can and should go hand in hand with unit tests.

Download the sample project here:

git clone git@gitlab.com:jsprengers/springboot-testing-tips.git

The project consists of four sub-projects. Weatherstation is the implementation of the net-enabled thermometer and atdd contains the Cucumber tests. They will be covered in part two of this post, on end-to-end testing.

The api project contains the WeatherReportDTO transfer object and the interface to be implemented by the weatherstation. Actually, this being a RESTful service, the weatherstation doesn't have to be developed with Spring or even Java, but let's assume that it is. There are other ways to cast a REST API in concrete, but please let's not go there. Having a separate api project makes it possible for the weatherstation developers to import the specification without any reference to the weatherserver project, and that's a good thing.
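
Pieced together from how it is used further down in the tests, the transfer object could look roughly like this sketch (the field names are assumptions):

public class WeatherReportDTO {

    private double temperature;
    private TemperatureUnit unit;
    private int humidity;

    // no-args constructor for JSON deserialisation
    public WeatherReportDTO() {
    }

    public WeatherReportDTO(double temperature, TemperatureUnit unit, int humidity) {
        this.temperature = temperature;
        this.unit = unit;
        this.humidity = humidity;
    }

    public double getTemperature() { return temperature; }
    public TemperatureUnit getUnit() { return unit; }
    public int getHumidity() { return humidity; }
}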

The server will communicate with three weather stations by mapping a postal code to the URL of the appropriate station. Since it's a fixed and limited number, we inject the URLs as separate values, which can be fed as application variables upon startup or be read from a file — if you choose the latter option then keep it out of your packaged .jar file, so a change of URL doesn't require a rebuild! In addition the server can return the temperature in degrees Fahrenheit for visiting Americans. The weather stations always return degrees Celsius, this being Europe.

@Value("${weather.host.1000-3000}")
private String host1000_3000;
@Value("${weather.host.3001-6000}")
private String host3001_6000;
@Value("${weather.host.6001-9999}")
private String host6001_9999;

Notice that a unit test would be sadly inadequate: it cannot check that the proper values are injected at runtime, it cannot detect that another REST controller is mapped to the same /weather URL, and least of all can it check that the REST client actually connects to the weatherstation server and retrieves a correct result. All that can be done in a Spring test. The actual @Test methods are omitted for brevity; you can find them in the source code.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {
        WeatherServerApplication.class,
        NorthWeatherStation.class, SouthWeatherStation.class},
        webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT)
@TestPropertySource(properties = {
        "weather.host.1000-3000=http://localhost:8090/north/weather",
        "weather.host.3001-6000=http://localhost:8090/south/weather",
        "weather.host.6001-9999=http://localhost:8090/south/weather"})
public class WeatherServerIntegrationTest {

    private RestTemplate restTemplate = new RestTemplate();

    private void assertWeatherForPostcode(String postcode, double temperature, String unit, int humidity) {
        String url = String.format("http://localhost:8090/weather?postCode=%s&unit=%s", postcode, unit);
        WeatherReportDTO temperatureObject = restTemplate.getForObject(url, WeatherReportDTO.class);
        assertThat(temperatureObject.getTemperature()).isEqualTo(temperature);
        assertThat(temperatureObject.getUnit().name()).isEqualTo(unit);
        assertThat(temperatureObject.getHumidity()).isEqualTo(humidity);
    }
}

Our WeatherServerIntegrationTest runs like any other JUnit test, but it fires up your Spring Boot application and turns your test class into a component where you can inject any component or service with the @Autowired annotation. The @SpringBootTest classes property points to our production application WeatherServerApplication, and you can add any number of test-specific managed components or other @Configuration classes. Bear with me and I'll get to the NorthWeatherStation.
We could inject our WeatherServerEndpoint and invoke its getTemperatureForPostalCode:

@Autowired
WeatherStationRestClient client;

@Test
public void getTemperatureForPostalCodeInFahrenheitShouldBe42(){
   WeatherReportDTO report = client.getTemperatureByPostalCode("1234","F");
}

But since SpringRunner has started an actual REST server, why not query the endpoint straight away, using the RestTemplate, a useful wrapper around the HTTP client and JSON serialisation bureaucracy.

Not so fast though: given a valid postal code, our server will want to query another REST endpoint, namely the one that our charming but reclusive IoT friends are soldering together. But that isn't there yet. There are two ways to go about it. The first is to use a mock REST client in our server. We could add a @Profile("!test") annotation to our WeatherStationRestClient class, add a test implementation annotated with @Profile("test") to our Spring test config, and the context would instantiate our mock implementation instead of the production one.

@Service
@Profile("test")
public class StubWeatherStationClient implements WeatherStationClient {

    @Override
    public WeatherReportDTO getTemperatureByPostalCode(int postCode, String unit) {
        return new WeatherReportDTO(20, TemperatureUnit.C, 60);
    }
}

I don't like this solution, because it cuts out the juicy, error-prone REST call to the weather station. That's a missed opportunity. Besides, it requires you to annotate the production implementation with @Profile("!test") and to run your test with the "test" profile. Better to leave the production code untouched and let it connect to an actual weather station endpoint through REST, albeit one of our own making and only available to our integration test. Writing a REST endpoint that gives back a hard-coded value for test purposes is as easy as:

@RestController
@RequestMapping("north/weather")
public class NorthWeatherStation implements WeatherStation {

    @RequestMapping(method = RequestMethod.GET)
    public WeatherReportDTO getWeatherReport() {
        return new WeatherReportDTO(-3.2, TemperatureUnit.C, 30);
    }
}

To let the SpringRunner context manage that controller, just add it to the classes property of @SpringBootTest and the test context will host both the weather server and our two weather station stubs.

We're not quite there yet. We have to point our WeatherStationRestClient to the correct test endpoint, i.e. the one we just supplied. There are three URLs it can connect to, depending on the postal code provided. In the live situation these will be different hosts listening on the same path. For our test setup let's create two different implementations (north and south) and map codes 1000-3000 to the first, and all other codes to the second. To make this happen the endpoints will listen on different paths (running on the same host), but that's okay since the weather server makes no assumption about URLs anyhow. All can be configured through application properties, and you can supply them in an application.properties file in src/test/resources or, for a one-off test case, through the @TestPropertySource annotation.

@TestPropertySource(properties = {
        "weather.host.1000-3000=http://localhost:8090/north/weather",
        "weather.host.3001-6000=http://localhost:8090/south/weather",
        "weather.host.6001-9999=http://localhost:8090/south/weather"})

With little set-up code and a few concise service stubs we now have a fully fledged integration test that not only covers all of our application logic (mapping postal codes and handling the Celsius to Fahrenheit conversion) but also tests that our REST server and weather station client actually work.

In the next part I will go one step further and show you some strategies to manage startup and shutdown of several packaged Spring Boot applications in a test environment using the Cucumber framework. This setup contains no more stubbing or mocking and comes close to an actual end-to-end test.

The post Integration testing strategies for Spring Boot microservices appeared first on codecentric AG Blog.

CQRS and Event Sourcing with Lagom


Lagom is the new microservices framework from Lightbend (formerly Typesafe, the company behind Scala and Akka). The framework and the concepts behind it are heavily based on CQRS (Command Query Responsibility Segregation) and ES (Event Sourcing). This dictates how state is handled and persisted internally.

In this article I will describe the basics of Lagom and then look more closely at the concepts of CQRS and ES in combination with the framework.

Lagom, The Framework

The philosophy behind Lagom is that it

  • has to be distributed
  • has to have asynchronous communication
  • has to support high development productivity

These ideas dictate how the framework is built. The goal is to develop services on top of Lagom which are very small (in lines of code) and compact. Certain conventions make it straightforward to let the services communicate asynchronously. To give an example of this:

// request type first, reply type second; the reply types of the two GET calls are examples
ServiceCall<CreateCustomerMessage, Done> createCustomer();
ServiceCall<NotUsed, CustomerState> getCustomerByEmail(String email);
ServiceCall<NotUsed, String> getCustomerAverageAge();

@Override
default Descriptor descriptor() {
   return named("customer-store").withCalls(
           pathCall("/api/customer/average-age", this::getCustomerAverageAge),
           restCall(Method.POST, "/api/customer", this::createCustomer),
           restCall(Method.GET, "/api/customer/:email", this::getCustomerByEmail)
   ).withAutoAcl(true).withCircuitBreaker(CircuitBreaker.perNode());
}

Three interfaces are being defined here. Because getCustomerAverageAge is a ServiceCall with NotUsed as its first generic parameter, it will automatically be mapped to an HTTP GET request. A ServiceCall with an object as its first type and Done as its second will automatically be turned into a POST (even though the type doesn't have to be explicit within the restCall method). This shows that it is possible to define RESTful interfaces with minimal code that are internally handled asynchronously.
Besides CQRS and ES, some other important concepts are applied, such as immutability of objects, design-driven APIs and polyglot programming. Java as well as Scala are supported by the framework APIs, but by using RESTful APIs with JSON data, communication with other services has been made easy.
As the Lagom framework is developed by Lightbend, the technology it is based on should not come as a surprise. Akka, together with Akka Streams, Akka Persistence and Akka Cluster, constitutes the fundamentals and takes care of communication and storage of data. Play is integrated for the creation of the RESTful interfaces and for configuration of the framework. Slick is used as the ORM framework, where SQL calls are also handled asynchronously. Lastly, ConductR takes care of deploying and scaling the application in production environments.

Some other noteworthy libraries are Logback (logging), Jackson (JSON serialization), Guice (dependency injection), Dropwizard (metrics) and Immutables (immutable objects).
The focus on immutability, non-blocking APIs and a strong presence of the CQRS and Event Sourcing concepts makes the biggest difference when comparing it to frameworks like Spring Boot. Moreover, Lagom is a much more compact framework and offers less functionality. For example, interfaces for queueing are not there and would need work to add and configure. In general Lagom prevents you from having to touch the underlying layers of the framework, but for any more advanced requirements it will be essential to know and learn about these layers.

Persistence in Lagom

By default Lagom uses the Cassandra key-value store for persistence. As of version 1.2 it is also possible to use a JDBC store, where the principles and APIs are more or less comparable. Later we will dive into using a JDBC store more specifically.
Storing data works by implementing the PersistentEntity abstract class (a code example will follow later). The PersistentEntity corresponds to the Aggregate Root from the Domain-Driven Design concepts.

Every PersistentEntity has a fixed identifier (primary key) that can be used to fetch the current state, and at any time only one instance (as a “singleton”) is kept in memory. This is in contrast to JPA, where multiple instances with the same identifier can exist in memory. On top of that, with JPA usually only the current state is stored in the database, whereas Lagom stores a PersistentEntity with its history and all events leading up to the current state.
In alignment with the CQRS ‘flow’ a PersistentEntity needs a Command, Event and State. All interaction proceeds by sending Commands to the entity, followed by either an update being executed, or by a response that contains the requested data. So even the querying of the current state is handled by sending Commands.
In case of a change, the Command will lead to an Event that will be persisted. The Event then again results in the State being modified.
Fig 1: CQRS Command, Event, State flow

The next listing shows an example Command for adding a new customer.

public interface CustomerCommand extends Jsonable {

   @Immutable
   @JsonDeserialize
   public final class AddCustomer implements CustomerCommand, CompressedJsonable, PersistentEntity.ReplyType<Done> {
       public final String firstName;
       public final String lastName;
       public final Date birthDate;
       public final Optional<String> comment;

       @JsonCreator
       public AddCustomer(String firstName, String lastName, Date birthDate, Optional<String> comment) {
           this.firstName = Preconditions.checkNotNull(firstName, "firstName");
           this.lastName = Preconditions.checkNotNull(lastName, "lastName");
           this.birthDate = Preconditions.checkNotNull(birthDate, "birthDate");
           this.comment = Preconditions.checkNotNull(comment, "comment");
       }
   }
   }

}

How to implement a service (the interface of which we saw in the first listing) and send a Command to an entity is shown in the next listing.

@Override
public ServiceCall<CreateCustomerMessage, Done> createCustomer() {
   return request -> {
       log.info("===> Create or update customer {}", request.toString());
       PersistentEntityRef<CustomerCommand> ref = persistentEntityRegistry.refFor(CustomerEntity.class, request.userEmail);
       return ref.ask(new CustomerCommand.AddCustomer(request.firstName, request.lastName, request.birthDate, request.comment));
   };
}

As you can see, the PersistentEntityRef is fetched by using a combination of the type and the identity / primary key. The reference is an instance that you can interact with by sending Commands.

The CreateCustomerMessage implementation (not shown in any listing) is comparable to the AddCustomer implementation from the second source code listing, but also contains the email address of the user as primary key.
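For reference, a sketch of what the CreateCustomerMessage class could look like, reconstructed from the fields used in the service implementation above (the original listing is not part of this post):

@Immutable
@JsonDeserialize
public final class CreateCustomerMessage implements Jsonable {
    public final String userEmail;
    public final String firstName;
    public final String lastName;
    public final Date birthDate;
    public final Optional<String> comment;

    @JsonCreator
    public CreateCustomerMessage(String userEmail, String firstName, String lastName, Date birthDate, Optional<String> comment) {
        this.userEmail = Preconditions.checkNotNull(userEmail, "userEmail");
        this.firstName = Preconditions.checkNotNull(firstName, "firstName");
        this.lastName = Preconditions.checkNotNull(lastName, "lastName");
        this.birthDate = Preconditions.checkNotNull(birthDate, "birthDate");
        this.comment = Preconditions.checkNotNull(comment, "comment");
    }
}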
To process Commands it is necessary to define so-called ‘Command Handlers’ in Lagom. Together these determine the Behavior of your PersistentEntity, which always starts from a clean (or snapshot) State. The following listing shows the implementation of the CustomerEntity with its Behavior:

public class CustomerEntity extends PersistentEntity<CustomerCommand, CustomerEvent, CustomerState> {

   @Override
   public Behavior initialBehavior(Optional<CustomerState> snapshotState) {

      /*
       * The BehaviorBuilder always starts with a State, which can be initially empty
       */
        BehaviorBuilder b = newBehaviorBuilder(
                snapshotState.orElse(CustomerState.EMPTY));

      /*
       * Command handler for the AddCustomer command.
       */
       b.setCommandHandler(CustomerCommand.AddCustomer.class, (cmd, ctx) ->
               // First we create an event and persist it
               // {@code entityId() } gives you automatically the 'primary key', in our case the email
               ctx.thenPersist(new CustomerEvent.AddedCustomerEvent(entityId(), cmd.firstName, cmd.lastName, cmd.birthDate, cmd.comment),
                       // if this succeeds, we return 'Done'
                       evt -> ctx.reply(Done.getInstance())));

      /*
       * Event handler for the AddedCustomerEvent event, where we update the status for real
       */
        b.setEventHandler(CustomerEvent.AddedCustomerEvent.class,
                evt -> new CustomerState(Optional.of(evt.email), Optional.of(evt.firstName), Optional.of(evt.lastName),
                        Optional.of(evt.birthDate), evt.comment));

      /*
       * Command handler to query all data of a customer (String representation of our customer)
       */
       b.setReadOnlyCommandHandler(CustomerCommand.CustomerInfo.class,
               (cmd, ctx) -> ctx.reply(state().toString()));

       return b.build();
   }

}

In the final handler definition of the code listing, a ‘read-only command handler’ is created. You are not allowed to mutate any state through this handler, but it can be used to query the current state of the entity.

The AddedCustomerEvent is identical to the AddCustomer command except that it also carries the e-mail address, because we’ll need it later on. The BehaviorBuilder can also contain business logic, for example to mutate state differently when a customer already exists and thus has to be updated instead of created; a sketch of such a branch follows below.
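A minimal sketch of such a branch, replacing the command handler from the listing above and assuming a hypothetical UpdatedCustomerEvent that is not part of the sample application:

b.setCommandHandler(CustomerCommand.AddCustomer.class, (cmd, ctx) -> {
    if (state().isEmpty()) {
        // no customer stored under this email yet: persist a creation event
        return ctx.thenPersist(
                new CustomerEvent.AddedCustomerEvent(entityId(), cmd.firstName, cmd.lastName, cmd.birthDate, cmd.comment),
                evt -> ctx.reply(Done.getInstance()));
    }
    // the customer already exists: persist an update event instead (hypothetical)
    return ctx.thenPersist(
            new CustomerEvent.UpdatedCustomerEvent(entityId(), cmd.firstName, cmd.lastName, cmd.birthDate, cmd.comment),
            evt -> ctx.reply(Done.getInstance()));
});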
Missing until now from the code listings is the CustomerState, which you can see below. The fields are all of type Optional because the initial state for a certain customer is ‘empty’.

public final class CustomerState implements Jsonable {

   public static final CustomerState EMPTY =
           new CustomerState(Optional.empty(), Optional.empty(), Optional.empty(), Optional.empty(), Optional.empty());

   private final Optional<String> email;
   private final Optional<String> firstName;
   private final Optional<String> lastName;
   private final Optional<Date> birthDate;
   private final Optional<String> comment;

   @JsonCreator
   public CustomerState(Optional<String> email, Optional<String> firstName, Optional<String> lastName, Optional<Date> birthDate, Optional<String> comment) {
       this.email = email;
       this.firstName = firstName;
       this.lastName = lastName;
       this.birthDate = birthDate;
       this.comment = comment;
   }

   @JsonIgnore
   public boolean isEmpty() {
       return !email.isPresent();
   }
}

Read-side with JDBC in Lagom

In a CQRS (Command Query Responsibility Segregation) architecture the manipulation of data is separated from the querying of data. One of the more interesting aspects about this separation is that the read-side can be optimized for querying. Specifically by using denormalized tables on the read-side, grouping data in the most efficient way and by duplicating data where needed. This keeps queries simple and fast.

Additionally, this prevents the so-called ORM impedance mismatch: the conceptual and technical difficulties of translating object structures to relational tables, for example the translation of inheritance and encapsulation to relational schemas.
As I have shown above, Lagom automatically takes care of storing and processing events. In the same way, the framework supports storing data on the read-side in denormalized tables, shown in Figure 2.
Fig 2: Separated ‘read’ and ‘write’ side in line with CQRS
© Microsoft – CQRS Journey

Within Lagom you can define so-called ReadSideProcessors that receive and process events and thereby store the data in a different form. The next listing shows an example of a ReadSideProcessor.

public class CustomerEventProcessor extends ReadSideProcessor<CustomerEvent> {

   private final JdbcReadSide readSide;

   @Inject
   public CustomerEventProcessor(JdbcReadSide readSide) {
       this.readSide = readSide;
   }

   @Override
   public ReadSideHandler<CustomerEvent> buildHandler() {
       JdbcReadSide.ReadSideHandlerBuilder<CustomerEvent> builder = readSide.builder("votesoffset");

       builder.setGlobalPrepare(this::createTable);
       builder.setEventHandler(CustomerEvent.AddedCustomerEvent.class, this::processCustomerAdded);

       return builder.build();
   }

   private void createTable(Connection connection) throws SQLException {
       connection.prepareStatement(
               "CREATE TABLE IF NOT EXISTS customers ( "
                       + "id MEDIUMINT NOT NULL AUTO_INCREMENT, "
                       + "email VARCHAR(64) NOT NULL, "
                       + "firstname VARCHAR(64) NOT NULL, "
                       + "lastname VARCHAR(64) NOT NULL, "
                       + "birthdate DATETIME NOT NULL, "
                       + "comment VARCHAR(256), "
                       + "dt_created DATETIME DEFAULT CURRENT_TIMESTAMP, "
                       + " PRIMARY KEY (id))").execute();
   }

   private void processCustomerAdded(Connection connection, CustomerEvent.AddedCustomerEvent event) throws SQLException {
       PreparedStatement statement = connection.prepareStatement(
               "INSERT INTO customers (email, firstname, lastname, birthdate, comment) VALUES (?, ?, ?, ?, ?)");
       statement.setString(1, event.email);
       statement.setString(2, event.firstName);
       statement.setString(3, event.lastName);
        statement.setDate(4, new java.sql.Date(event.birthDate.getTime())); // setDate expects a java.sql.Date
       statement.setString(5, event.comment.orElse(""));
       statement.execute();
   }

   @Override
   public PSequence<AggregateEventTag<CustomerEvent>> aggregateTags() {
       return TreePVector.singleton(CustomerEvent.CUSTOMER_EVENT_TAG);
   }
}

Now the ReadSideProcessor can be registered in the service implementation as follows (showing the full constructor for the sake of completeness):

@Inject
public CustomerServiceImpl(PersistentEntityRegistry persistentEntityRegistry, JdbcSession jdbcSession, ReadSide readSide) {
   this.persistentEntityRegistry = persistentEntityRegistry;
   this.persistentEntityRegistry.register(CustomerEntity.class);
   this.jdbcSession = jdbcSession;
   readSide.register(CustomerEventProcessor.class);
}

For the Event class a ‘tag’ needs to be defined as shown in the following listing, so Lagom can keep track of which events have been processed. This is particularly important in case of restarts or crashes, so that the data can be kept consistent between write- and read-side.

AggregateEventTag<CustomerEvent> CUSTOMER_EVENT_TAG = AggregateEventTag.of(CustomerEvent.class);

@Override
default AggregateEventTag<CustomerEvent> aggregateTag() {
   return CUSTOMER_EVENT_TAG;
}
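Putting the pieces together, the CustomerEvent interface could look roughly as follows; this is a sketch reconstructed from the fields used in the listings above, not the literal sample code:

public interface CustomerEvent extends Jsonable, AggregateEvent<CustomerEvent> {

    AggregateEventTag<CustomerEvent> CUSTOMER_EVENT_TAG = AggregateEventTag.of(CustomerEvent.class);

    @Override
    default AggregateEventTag<CustomerEvent> aggregateTag() {
        return CUSTOMER_EVENT_TAG;
    }

    @Immutable
    @JsonDeserialize
    final class AddedCustomerEvent implements CustomerEvent {
        public final String email;
        public final String firstName;
        public final String lastName;
        public final Date birthDate;
        public final Optional<String> comment;

        @JsonCreator
        public AddedCustomerEvent(String email, String firstName, String lastName, Date birthDate, Optional<String> comment) {
            this.email = email;
            this.firstName = firstName;
            this.lastName = lastName;
            this.birthDate = birthDate;
            this.comment = comment;
        }
    }
}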

Now that the processing of events is implemented and data is stored in denormalized tables, it can be easily queried using SQL queries. For example the next listing shows a simple query for the average age of customers in the system, added to the service implementation.

@Override
public ServiceCall<NotUsed, String> getCustomerAverageAge() {
   return request -> jdbcSession.withConnection(connection -> {
       ResultSet rsCount = connection.prepareStatement("SELECT COUNT(*) FROM customers").executeQuery();
       ResultSet rsAverage = connection.prepareStatement("SELECT AVG(TIMESTAMPDIFF(YEAR,birthDate,CURDATE())) FROM customers").executeQuery();

       if (rsCount.next() && rsAverage.next() && rsCount.getInt(1) > 0) {
           return String.format("# %s customers resulted in average age; %s", rsCount.getString(1), rsAverage.getString(1));
       } else {
           return "No customers yet";
       }
   });
}

Conclusion

CQRS and Event Sourcing are a powerful means to optimize the write- and read-side for a service separately. And while a NoSQL store certainly has its advantages, a relational database is highly suitable for querying over multiple object structures.
I hope to have shown you how well Lagom supports this architecture and how it offers different solutions for persistence. With the principle of ‘convention over configuration’ developers can focus on implementing business logic instead of typing boilerplate code.

Lagom recently arrived at version 1.2.x, and in some minor issues you will notice that it is still a young framework. Partly because of this, I advise some caution: evaluate thoroughly whether Lagom is suitable for your production use-cases. But it certainly is a framework to keep an eye on.

The post CQRS and Event Sourcing with Lagom appeared first on codecentric AG Blog.

Integration testing strategies for Spring Boot microservices part 2


This is the second part of my earlier post about strategies for integration-testing Spring Boot applications that consist of multiple (rest) services.

You can find the accompanying sample application in my GitLab account:

git clone git@gitlab.com:jsprengers/springboot-testing-tips.git

In my previous post I pointed out that unit tests are great for checking code correctness (while not actually proving it, but let’s not go there). They are also an essential tool to safeguard coding standards: if you can’t test it in a unit test, it’s probably badly designed.

However, the tool stacks used to build today’s server applications typically follow a convention-over-configuration approach, where you can leave out much of the boilerplate configuration in favour of sensible defaults. This saves you the time and effort of gaining in-depth knowledge of the framework, yet makes it all the more important to check that the framework behaves as expected. Fortunately Spring also supplies you with the tools to do just that, and we looked at some integration tests that make a Spring context available for testing purposes.

Spring integration tests are great to check if our software uses the tools/middleware correctly: although they can run alongside regular unit tests, the important difference is that the scope of our test is no longer a unit of code isolated from the rest by means of mocking, but an entire server instance.

Advantages of a Spring integration test:

  • Easy to set up with @RunWith(SpringRunner.class) (see the sketch after this list)
  • Quick to run
  • Very flexible: inject dependencies to manipulate input and validate state
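As a reminder from part 1, a minimal sketch of such a Spring integration test; class names and the query string are illustrative, not taken from the sample project:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class WeatherServerIntegrationTest {

    @Autowired
    private TestRestTemplate restTemplate; // resolves relative URLs against the random port

    @Test
    public void returnsWeatherReportForKnownPostCode() {
        ResponseEntity<WeatherReportDTO> response =
                restTemplate.getForEntity("/weather?postCode=2000&unit=C", WeatherReportDTO.class);
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);
    }
}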

Drawbacks:

  • Not really a faithful representation of the actual live runtime.
  • Tests only a single Spring context, not the interaction between multiple services

Let’s return to our weather server. Remember that we’re building a central REST server that resolves a Dutch postcode to the location of a network-enabled weather station for up-to-date readings. Suppose the Java part of our application landscape is nearing completion. The only thing missing is the code to read out the physical devices, but as the Lead Systems Architect you have given that team the required interfaces, so nothing holds you back from running a test of the full system: you mock the actual reading of the devices with a ThermometerAdapter and HygrometerAdapter and set a fixed return value through application properties at startup.

@Service
@Profile("test")
public class StubThermometer implements Thermometer {

    @Value("${temperature:42}")
    private double temperature;

    public double getTemperatureInCelsius() {
        return temperature;
    }
}

At a later date you can get rid of the stubs, or keep them around, annotated with @Profile("test"), and annotate the ‘actual’ thermometer with @Profile("!test"). Purists will argue that you should never mix anything intended for testing purposes with your production code, but I disagree. The mocks are not a temporary kludge but the only way to run a repeatable automated test of our system.
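For completeness, the ‘actual’ thermometer could then look like this; SensorThermometer is a hypothetical name, since the device code is not part of the sample project:

@Service
@Profile("!test") // only instantiated when the 'test' profile is NOT active
public class SensorThermometer implements Thermometer {

    public double getTemperatureInCelsius() {
        // talk to the physical device here
        throw new UnsupportedOperationException("device driver not yet implemented");
    }
}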

In this stage of our integration test we want the multiple hosts to run in separate JVMs, with all interaction between them going through the network.
This is a step up from the @SpringRunner integration test towards the eventual live situation, though bear in mind that it is still not a true end-to-end test: all server instances run on the same metal instead of on separate IoT machines all over the country, and the physical measuring devices are still stubbed.
For this test I have created a separate maven project atdd (acceptance-test driven development) that has no source dependencies on either the weather server or weather station projects. For convenience it depends on the weatherapi project, but only to use the data transfer objects (DTOs) returned from and received by the weather server rest endpoints.

The atdd tests do basically the same as the weather server Spring integration test in terms of queries and validation, but with the crucial difference that the servers run on separate TCP ports and that the framework orchestrates the start-up and shutdown of all services, using the packaged jar artefacts. I have chosen the Cucumber test framework to write these test scenarios in a more readable format using the gherkin DSL:

Feature: Get hard-coded temperature for various postal codes

  Scenario: get temperature for correct postal codes in celsius
    When I get the temperature for postal code 2000 in celsius
    Then the temperature is -3.5 in celsius
    When I get the temperature for postal code 4000 in celsius
    Then the temperature is 2 in celsius
    When I get the temperature for postal code 7000 in celsius
    Then the temperature is 6.3 in celsius

  Scenario: get temperature for correct postal codes in fahrenheit
    When I get the temperature for postal code 2000 in fahrenheit
    Then the temperature is 25.7 in fahrenheit
    When I get the temperature for postal code 4000 in fahrenheit
    Then the temperature is 35.6 in fahrenheit
    When I get the temperature for postal code 7000 in fahrenheit
    Then the temperature is 43.3 in fahrenheit

  Scenario: get humidity for correct postal codes
    When I get the temperature for postal code 2000 in celsius
    Then the humidity is 30 per cent
    When I get the temperature for postal code 4000 in celsius
    Then the humidity is 35 per cent
    When I get the temperature for postal code 7000 in celsius
    Then the humidity is 40 per cent

The text of these so-called gherkin glue lines can contain placeholders for temperature and postal code values. TemperatureRetrievalSteps contains the Java meat that performs the actual rest calls, using the same RestTemplate we came across earlier.

public class TemperatureRetrievalSteps {
    private RestTemplate client = new RestTemplate();
    private ResponseEntity<WeatherReportDTO> response = null;

    @When("^I get the temperature for postal code (\\d+) in (celsius|fahrenheit)")
    public void getTemperature(String postCode, String unit) {
        String url = String.format("http://localhost:%d/weather?postCode=%s&unit=%s", ServerEnvironment.SERVER_PORT, postCode, getUnitCode(unit));
        response = client.getForEntity(url, WeatherReportDTO.class);
    }

    @Then("^the temperature is (.*?) in (celsius|fahrenheit)$")
    public void theTemperatureIs(double temperature, String unit) {
        assertCorrectResponse("Did not retrieve temperature: ");
        assertThat(response.getBody().getTemperature()).as("temperature").isEqualTo(temperature, Offset.offset(0.1));
        assertThat(response.getBody().getUnit().name()).as("temperature unit").isEqualTo(getUnitCode(unit));
    }
    //rest of content omitted
}

Very nice; but before running the Cucumber scenarios we need to set up an environment of running servers. Let’s look at two very different ways to accomplish this, each with its own drawbacks and advantages. The first is to build a fully executable jar and register it as a (Unix) service. You can then use the start, stop and status commands like for any other Unix service, invoking them through the standard ProcessBuilder class. There is a similar mechanism for Windows, but it should be clear that this is not the most portable approach, and it requires extra privileges to register a service, which may not be what you want for testing purposes.
A pure Spring solution is to use the actuator project, which among many other features supplies endpoints to monitor and shut down services over the network. This is the approach I will follow here. We still need a system call to start the application, but after that we can monitor and shut down the instance through the web API.

 
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

By default the shutdown endpoint is disabled, for obvious reasons. The following configuration enables it and also disables its security for our test purposes.

--endpoints.shutdown.sensitive=false --endpoints.shutdown.enabled=true --management.context-path=/manage

Needless to say, you should not put this in the application.properties file of your production code. Better still, use a build profile that excludes the actuator project entirely from the production build.

The ServerEnvironment constructs the start-up command that contains all the necessary hard-coded application properties for our test.

private static SpringApplicationWrapper createWeatherServerInstance(List<String> defaultArgs) {
     String jarFilePath = PathUtil.getJarfile(PathUtil.getProjectRoot() + "/springboot-testing-tips/weatherserver/target");
     defaultArgs.add("--server.port=" + SERVER_PORT);
     defaultArgs.add("--weather.host.1000-3000=http://localhost:" + STATION_1_PORT + "/weather");
     defaultArgs.add("--weather.host.3001-6000=http://localhost:" + STATION_2_PORT + "/weather");
     defaultArgs.add("--weather.host.6001-9999=http://localhost:" + STATION_3_PORT + "/weather");
     return new SpringApplicationWrapper("http://localhost:8090/manage", jarFilePath, defaultArgs);
  }

The SpringApplicationWrapper stands for a single server instance and is responsible for invoking the java -jar executable_file.jar command and issuing the /mappings and /shutdown calls.
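A condensed sketch of what such a wrapper could look like; the details are assumptions for illustration, not the exact code from the sample project:

public class SpringApplicationWrapper {

    private final String managementUrl;
    private final String jarFilePath;
    private final List<String> args;
    private final RestTemplate client = new RestTemplate();
    private Process process;

    public SpringApplicationWrapper(String managementUrl, String jarFilePath, List<String> args) {
        this.managementUrl = managementUrl;
        this.jarFilePath = jarFilePath;
        this.args = args;
    }

    public void start() throws IOException {
        List<String> command = new ArrayList<>(Arrays.asList("java", "-jar", jarFilePath));
        command.addAll(args);
        // launch the packaged jar in a separate JVM and forward its output to our console
        process = new ProcessBuilder(command).inheritIO().start();
    }

    public boolean isUp() {
        try {
            // the actuator /mappings endpoint doubles as a simple liveness check
            return client.getForEntity(managementUrl + "/mappings", String.class).getStatusCode().is2xxSuccessful();
        } catch (RestClientException e) {
            return false;
        }
    }

    public void shutdown() {
        // a POST to the actuator /shutdown endpoint stops the instance gracefully
        client.postForEntity(managementUrl + "/shutdown", null, String.class);
    }
}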
All that’s left is to start up and shut down the environment using the @BeforeClass and @AfterClass hooks in JUnit. Note that Cucumber runs each scenario as if it were a separate JUnit test class: had we annotated the setup/teardown hooks with @Before and @After, they would have been invoked for every scenario. For performance reasons we want to start up and shut down the environment only once, so we have to use static method calls, which is less pretty but unavoidable.

@RunWith(Cucumber.class)
@CucumberOptions(features = "classpath:features", format = {"pretty", "json:target/cucumber-html-reports/testresults.json"})
public class RunCucumberTest {
    @BeforeClass
    public static void setup() {
        ServerEnvironment.start();
    }

    @AfterClass
    public static void tearDown() {
        ServerEnvironment.shutdown();
    }
}

And there’s more…

Once the UI is in place we can integrate it in this project and initiate a browser session with Selenium to manipulate the UI controls. The control over our running servers even allows you to run recovery scenarios: stop and start services and ensure that the other nodes react appropriately. Remember that caching the readings from the weather stations is an important task of the weather server, since bandwidth is limited and too frequent readings make no sense. You could programmatically stop one of the weatherstation instances and do a request to the weather server, which will serve the response from cache and not even notice the station has gone offline, provided it is back up before the cache timeout expires. All this you can test with the kind of setup I outlined. There is somewhat more boilerplate involved than with a regular integration test, but you can extract much of it into a separate testing library and re-use it for new atdd projects. As a parting example, the sketch below shows what such a recovery test could look like. I wish you happy testing.
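A minimal sketch, assuming a hypothetical stopStation helper on the ServerEnvironment and an illustrative unit code in the URL:

@Test
public void servesCachedReadingWhenStationGoesOffline() {
    RestTemplate client = new RestTemplate();
    // the unit code is an assumption; use whatever your getUnitCode("celsius") resolves to
    String url = String.format("http://localhost:%d/weather?postCode=2000&unit=C", ServerEnvironment.SERVER_PORT);

    // the first request fills the weather server's cache
    WeatherReportDTO first = client.getForObject(url, WeatherReportDTO.class);

    // stop the station responsible for postcode 2000 (hypothetical helper)
    ServerEnvironment.stopStation(1);

    // the weather server should answer from its cache and not notice the outage
    WeatherReportDTO cached = client.getForObject(url, WeatherReportDTO.class);
    assertThat(cached.getTemperature()).isEqualTo(first.getTemperature(), Offset.offset(0.1));
}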

The post Integration testing strategies for Spring Boot microservices part 2 appeared first on codecentric AG Blog.
