

Showing posts from 2013

Proxies done right with Guava's AbstractInvocationHandler

Not too often, but sometimes, we are forced to write a custom dynamic proxy class using java.lang.reflect.Proxy. There is really no magic in this mechanism and it's worth knowing even if you never actually use it - because Java proxies are ubiquitous in various frameworks and libraries. The idea is quite simple: dynamically create an object that implements one or more interfaces, but every time any method of these interfaces is called, our custom callback handler is invoked. This handler receives a handle to the method that was called (a java.lang.reflect.Method instance) and is free to behave in any way. Proxies are often used to implement seamless mocking, caching, transactions and security - i.e. they are a foundation of AOP. Before I explain the purpose of the class from the title, let's start with a simple example. Say we want to transparently run methods of a given interface asynchronously in a thread pool. Popular sta…
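For reference, here is a minimal sketch of the raw Proxy API the post builds on (the Greeter interface and handler logic below are made up for illustration). Guava's AbstractInvocationHandler additionally gives you sensible equals()/hashCode()/toString() for free, which a bare InvocationHandler does not:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface Greeter {
        String greet(String name);
    }

    // Every call on the proxy instance is routed through this handler
    static Greeter greeterProxy() {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("greet")) {
                return "Hello, " + args[0];
            }
            // hashCode/equals/toString also land here with a raw InvocationHandler
            throw new UnsupportedOperationException(method.getName());
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                handler);
    }

    public static void main(String[] args) {
        System.out.println(greeterProxy().greet("world"));
    }
}
```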

Promises and CompletableFuture

During my talk at the Warsaw Java Users Group about functional reactive programming in Java, a few interesting questions came up regarding CompletableFuture capabilities. One person asked whether it's possible to wait for the first completed future that passes a given predicate, rather than just the first one (like CompletableFuture.anyOf()). This is a similar requirement to Future.find() in Scala. It's not built into CompletableFuture but is quite easy to implement using the concept of promises. Our custom implementation will take two parameters: a list of homogeneous futures and a predicate. The first future to complete that matches the given predicate wins. If no future matches, the resulting future never completes (though it's rather easy to change that behaviour). We will use a thread-safe and lightweight AtomicBoolean completed flag because callbacks will be invoked from multiple threads. public static <T> CompletableFuture<T> firstMatching(Predicat…
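The described firstMatching() could be sketched along these lines (my own sketch of the semantics above, not necessarily the post's exact code; note that CompletableFuture.complete() already rejects a second completion, so the AtomicBoolean serves mostly as an explicit guard):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Predicate;

public class FirstMatching {
    // First future to complete with a value matching the predicate wins;
    // if none ever matches, the returned promise never completes.
    public static <T> CompletableFuture<T> firstMatching(
            Predicate<T> predicate, List<CompletableFuture<T>> futures) {
        CompletableFuture<T> promise = new CompletableFuture<>();
        AtomicBoolean completed = new AtomicBoolean();
        for (CompletableFuture<T> future : futures) {
            future.thenAccept(value -> {
                if (predicate.test(value) && completed.compareAndSet(false, true)) {
                    promise.complete(value);
                }
            });
        }
        return promise;
    }
}
```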

Playing with Scala futures

During job interviews we often give Scala developers a simple design task: model a binary tree. The simplest, but not necessarily best, implementation involves the Option idiom: case class Tree[+T](value: T, left: Option[Tree[T]], right: Option[Tree[T]]) Bonus points for immutability, using a case class and covariance. A much better but more complex implementation involves two case classes, but at least allows modelling empty trees: sealed trait Tree[+T] case object Empty extends Tree[Nothing] case class Node[+T](value: T, left: Tree[T], right: Tree[T]) extends Tree[T] Let's stick to the first idea. Now implement building a tree of arbitrary height: def apply[T](n: Int)(block: => T): Tree[T] = n match { case 1 => Tree(block, None, None) case _ => Tree( block, Some(Tree(n - 1)(block)), Some(Tree(n - 1)(block)) ) } In order to build a tree with 1024 leaves and all random values it…
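A rough Java 8 mirror of the same idea (hypothetical names, my own sketch; Optional plays the role of Scala's Option, and a Supplier stands in for the by-name block parameter):

```java
import java.util.Optional;
import java.util.function.Supplier;

public class Trees {
    // Java mirror of the Scala case class: an immutable node with optional children
    public static final class Tree<T> {
        final T value;
        final Optional<Tree<T>> left;
        final Optional<Tree<T>> right;

        Tree(T value, Optional<Tree<T>> left, Optional<Tree<T>> right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    // Builds a full tree of the given height, evaluating `block` once per node
    public static <T> Tree<T> build(int n, Supplier<T> block) {
        if (n == 1) {
            return new Tree<>(block.get(), Optional.empty(), Optional.empty());
        }
        return new Tree<>(block.get(),
                Optional.of(build(n - 1, block)),
                Optional.of(build(n - 1, block)));
    }

    // Node count, for sanity checking: a full tree of height h has 2^h - 1 nodes
    public static int size(Tree<?> t) {
        return 1 + t.left.map(Trees::size).orElse(0) + t.right.map(Trees::size).orElse(0);
    }
}
```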

brainfuck in Clojure. Part II: compiler

Last time we developed a brainfuck interpreter in Clojure. This time we will write a compiler. Compilation has two advantages over interpretation: the resulting program tends to be faster, and the source program is lost/obscured in the binary. It turns out that a brainfuck compiler (to any assembly/bytecode) is not really that complex - brainfuck is very low level and similar to typical CPU architectures (a chunk of mutable memory, modified one cell at a time). Thus we will go for something slightly different. Instead of producing JVM bytecode (which some have already done), we shall write a Clojure macro that generates code equivalent to any brainfuck program. In other words, we will produce Clojure source equivalent to brainfuck source - at compile time. This task is actually more challenging because idiomatic Clojure is much different from idiomatic brainfuck (if such a thing as "idiomatic brainfuck" ever existed). Let's first think how such Clojure code could look…

brainfuck in Clojure. Part I: interpreter

Brainfuck is one of the most popular esoteric programming languages. Writing a brainfuck interpreter is fun, contrary to actually using this "language". The syntax is dead simple and the semantics are rather clear. Thus writing such an interpreter is a good candidate for a kata session, TDD practice, etc. Using Clojure for the task is slightly more challenging due to the inherent impedance mismatch between imperative brainfuck and functional Clojure. However, you will find plenty of existing implementations ([1], [2], [3], [4]); many of them are less idiomatic as they use atoms to mutate state in-place ([5], [6], [7], [8], [9]). Let's write a simple, idiomatic brainfuck interpreter ourselves, step by step. It turns out that the transition from mutability to immutability is quite straightforward - rather than mutating state in-place we simply exchange the previous state for a new one. In brainfuck, state is represented by cells (memory), a cell (poin…
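For comparison, here is a compact mutable interpreter in Java (my own sketch, not the Clojure code from the post) showing how little the language actually needs - eight commands over a tape of cells:

```java
public class Brainfuck {
    // Minimal interpreter: 30k byte cells, output collected into a String
    public static String run(String program, String input) {
        byte[] cells = new byte[30_000];
        int ptr = 0, in = 0;
        StringBuilder out = new StringBuilder();
        for (int pc = 0; pc < program.length(); pc++) {
            switch (program.charAt(pc)) {
                case '>': ptr++; break;
                case '<': ptr--; break;
                case '+': cells[ptr]++; break;
                case '-': cells[ptr]--; break;
                case '.': out.append((char) cells[ptr]); break;
                case ',': cells[ptr] = (byte) (in < input.length() ? input.charAt(in++) : 0); break;
                case '[':
                    if (cells[ptr] == 0) {              // jump forward past matching ]
                        for (int depth = 1; depth > 0; ) {
                            char c = program.charAt(++pc);
                            if (c == '[') depth++;
                            else if (c == ']') depth--;
                        }
                    }
                    break;
                case ']':
                    if (cells[ptr] != 0) {              // jump back to matching [
                        for (int depth = 1; depth > 0; ) {
                            char c = program.charAt(--pc);
                            if (c == ']') depth++;
                            else if (c == '[') depth--;
                        }
                    }
                    break;
            }
        }
        return out.toString();
    }
}
```

The idiomatic Clojure version in the post replaces the mutable `cells`/`ptr` pair with an immutable state value threaded through a recursive loop.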

"Beginning Java EE 7" by Antonio Goncalves review

Don't be fooled by the "beginning" in the title. This 600-page book is a comprehensive and complete walk-through of all components and technologies comprising the Java EE 7 stack. Antonio Goncalves, Java EE evangelist and Java Champion, wrote a reference book for all enterprise software developers. "Beginning Java EE 7" is not a collection of random tutorials. Instead, this publication covers thoroughly pretty much every aspect of Java EE you might encounter on a daily basis:
CDI (Contexts and Dependency Injection)
JPA (Java Persistence API)
EJB (Enterprise JavaBeans)
JTA (Java Transaction API)
JMS (Java Message Service)
SOAP/REST/XML/JSON processing
JSF (JavaServer Faces)
...and even more
As you can see, the book covers all the layers from back-end to API and front-end development. Moreover, due to the solid size of the publication, each of these subjects is treated with care. Expect plenty of end-to-end examples, including Maven configuration. Som…

instanceof operator and Visitor pattern replacement in Java 8

I had a dream where the instanceof operator and downcasting were no longer needed, but without the clumsiness and verbosity of the visitor pattern. So I came up with the following DSL syntax: Object msg = //... whenTypeOf(msg). is(Date.class). then(date -> println(date.getTime())). is(String.class). then(str -> println(str.length())). is(Number.class). then(num -> println(num.intValue())). orElse(obj -> println("Unknown " + obj)); No downcasting, clean syntax, strongly typed and... perfectly achievable in Java 8. Using lambdas and a little bit of generics I created a tiny library called typeof that is clean, easy to use and more robust than instanceof and the visitor pattern taken together. Advantages include: no explicit downcasting, avoids instanceof, clean and easy to use, strongly typed, works with classes that we have no control over, including the JDK. This small utility was developed with Akka and the Java API i…
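I won't reproduce the typeof internals here, but a hypothetical minimal core of such type dispatch (my own illustration, not the library's actual API or implementation) could look like this - the generics let the handler receive an already-cast value:

```java
import java.util.function.Consumer;

public class TypeOfSketch {
    // If msg is an instance of `type`, run the handler on the cast value.
    // Class.cast() gives us the downcast without an explicit (T) in user code.
    public static <T> boolean is(Object msg, Class<T> type, Consumer<T> then) {
        if (type.isInstance(msg)) {
            then.accept(type.cast(msg));
            return true;
        }
        return false;
    }
}
```

A full DSL chains such checks and stops at the first match, falling through to an orElse handler.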

Optional in Java 8 cheat sheet

java.util.Optional<T> in Java 8 is a poor cousin of scala.Option[T] and Data.Maybe in Haskell. But this doesn't mean it's not useful. If this concept is new to you, imagine Optional as a container that may or may not contain some value. Just like all references in Java can point to some object or be null, Optional may enclose some (non-null!) reference or be empty. It turns out that the analogy between Optional and nullable references is quite sensible. Optional was introduced in Java 8, so obviously it is not used throughout the standard Java library - and never will be, for backward compatibility reasons. But I recommend at least giving it a try and using it whenever you have nullable references. Optional instead of a plain null is statically checked at compile time and much more informative, as it clearly indicates that a given value may be present or not. Of course it requires some discipline - you should never assign null to any variable any…
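A few of the basic combinators in one hypothetical helper (my own example of the wrapping-a-nullable-reference usage described above):

```java
import java.util.Optional;

public class OptionalDemo {
    // Typical nullable-reference wrapping and transformation
    public static String describe(String maybeName) {
        return Optional.ofNullable(maybeName)     // empty if null
                .filter(n -> !n.isEmpty())        // empty if ""
                .map(String::toUpperCase)         // transform only if present
                .orElse("<unknown>");             // fallback for the empty case
    }
}
```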

Asynchronous retry pattern

When you have a piece of code that often fails and must be retried, this Java 7/8 library provides a rich and unobtrusive API with a fast and scalable solution to the problem: ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(); RetryExecutor executor = new AsyncRetryExecutor(scheduler). retryOn(SocketException.class). withExponentialBackoff(500, 2). //500ms times 2 after each retry withMaxDelay(10_000). //10 seconds withUniformJitter(). //add between +/- 100 ms randomly withMaxRetries(20); You can now run an arbitrary block of code and the library will retry it for you in case it throws SocketException: final CompletableFuture<Socket> future = executor.getWithRetry(() -> new Socket("localhost", 8080) ); future.thenAccept(socket -> System.out.println("Connected! " + socket) ); Please look carefully! getWithRetry() does not…
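This is not the library's implementation, but a stripped-down sketch of the underlying idea (all names below are my own): on failure, reschedule the task with a doubled delay until the retry budget is exhausted, and expose the eventual result as a CompletableFuture:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class RetrySketch {
    // Hypothetical core of async retry: exponential backoff, promise-based result
    public static <T> CompletableFuture<T> retry(ScheduledExecutorService scheduler,
                                                 Supplier<T> task,
                                                 long delayMs, int retriesLeft) {
        CompletableFuture<T> promise = new CompletableFuture<>();
        scheduler.execute(() -> attempt(scheduler, task, delayMs, retriesLeft, promise));
        return promise;
    }

    private static <T> void attempt(ScheduledExecutorService scheduler, Supplier<T> task,
                                    long delayMs, int retriesLeft,
                                    CompletableFuture<T> promise) {
        try {
            promise.complete(task.get());
        } catch (Exception e) {
            if (retriesLeft == 0) {
                promise.completeExceptionally(e);   // budget exhausted, propagate failure
            } else {
                scheduler.schedule(                  // try again later, doubled delay
                        () -> attempt(scheduler, task, delayMs * 2, retriesLeft - 1, promise),
                        delayMs, TimeUnit.MILLISECONDS);
            }
        }
    }
}
```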

Managing congested actors in Akka

There comes a time in an Akka application when an actor can no longer handle increasing load. Since each actor can only handle one message at a time, and it keeps a backlog of pending messages in a queue called a mailbox, there is a risk of overloading an actor: if too many messages are sent to one actor at the same time, or the actor fails to process messages fast enough, the queue will keep growing and growing. This will negatively impact the responsiveness of the system and might even result in the application crashing. It's actually very easy to simulate such load by simply sending a continuous stream of messages to an actor as fast as possible: case object Ping class PingActor extends Actor { def receive = { case Ping => //don't do this at home! Thread sleep 1 } } object Main extends App { val system = ActorSystem("Heavy") val client = system.actorOf(Props[PingActor], "Ping") while(true) { client…
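The mailbox-growth problem can be illustrated outside Akka with a plain bounded queue (this is not Akka code, just an analogy; Akka also offers bounded mailboxes among other remedies). With a bounded "mailbox" and no consumer, producers either block or get rejected instead of growing the backlog without limit:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureDemo {
    // A bounded "mailbox": once capacity is reached, offer() rejects new messages
    // instead of letting the backlog grow - one crude form of back-pressure.
    public static int producedWithin(int capacity, int attempts) {
        BlockingQueue<Integer> mailbox = new ArrayBlockingQueue<>(capacity);
        int produced = 0;
        for (int i = 0; i < attempts; i++) {
            if (mailbox.offer(i)) {   // fails fast when the mailbox is full
                produced++;
            }
        }
        return produced;              // no consumer here, so at most `capacity` succeed
    }
}
```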

Fake system clock pattern in Scala with implicit parameters

Fake system clock is a design pattern addressing testability issues in programs that rely heavily on system time. If business logic flow depends on the current system time, testing various flows becomes cumbersome or even impossible. Examples of such problematic scenarios include: a certain business flow runs only (or is ignored) during weekends; some logic is triggered only an hour after some other event; when two events occur at the exact same time (typically 1 ms precision), something should happen... Each scenario above poses a unique set of challenges. Taken literally, our unit tests would have to run only on a specific day (1) or sleep for an hour to observe some behaviour (2). Scenario (3) might even be impossible to test under some circumstances, since the system clock can tick 1 millisecond at any time, making the test unreliable. Fake system clock addresses these issues by abstracting system time behind a simple interface. Essentially you never call new Date(), new G…
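The post implements this in Scala with implicit parameters; in Java 8 the same pattern ships in the standard library as java.time.Clock, which can be injected into business logic and replaced by Clock.fixed() in tests (my own minimal example of the weekend scenario above):

```java
import java.time.Clock;
import java.time.DayOfWeek;
import java.time.LocalDate;

public class WeekendCheck {
    // Business logic asks an injected Clock instead of calling new Date()
    public static boolean isWeekend(Clock clock) {
        DayOfWeek day = LocalDate.now(clock).getDayOfWeek();
        return day == DayOfWeek.SATURDAY || day == DayOfWeek.SUNDAY;
    }
}
```

In production you pass Clock.systemDefaultZone(); in tests, a fixed clock makes the "runs only during weekends" flow trivially reproducible.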

Macro lifecycle in Clojure

If you still struggle to understand what macros in Clojure are and why they are so useful, I will guide you through another example today. We will learn when macros are recognized, evaluated, expanded and executed. I believe the most important concept is their similarity to normal functions. As I described last time, macros are ordinary functions, but executed at compile time and taking code rather than values as arguments. The second difference is slightly artificial, since Clojure code is a value in the sense that it can be passed around. So let us focus on when macros are actually expanded and executed. We will start from a trivial GCD implementation in Clojure as a normal function: (defn gcd [a b] (if (zero? b) a (recur b (mod a b)))) Calling this function will result in a tail-recursive loop executed at runtime every time it is encountered: user=> (gcd 18 12) 6 user=> (gcd 9 2) 1 user=> (gcd 9 (inc 2)) 3 Not very exciting…
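For reference, the same GCD as a plain Java loop (the macro discussion itself is Clojure-specific and has no direct Java analogue; Clojure's recur compiles tail recursion down to exactly this kind of loop):

```java
public class Gcd {
    // Iterative equivalent of the tail-recursive Clojure version
    public static int gcd(int a, int b) {
        while (b != 0) {
            int t = b;
            b = a % b;
            a = t;
        }
        return a;
    }
}
```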

su and sudo in Spring Security applications

A long time ago I worked on a project that had quite a powerful feature. There were two roles: user and supervisor. A supervisor could change any document in the system in any way, while users were much more limited by workflow constraints. When a normal user had some issue with the document currently being edited and stored in the HTTP session, a supervisor could step in, switch to a special supervisor mode and bypass all constraints. Total freedom. Same computer, same keyboard, same HTTP session. Only a special flag that the supervisor could set by entering a secret password. Once the supervisor was done, he or she could clear that flag and enable the usual constraints again. This feature worked well but it was poorly implemented. The availability of every single input field depended on that supervisor-mode flag. Business methods were polluted in dozens of places with isSupervisorMode() checks. And remember that if the supervisor simply logged in using normal credentials, this mode was sort…

Clojure macros for beginners

This article will guide you step-by-step (or even character-by-character) through the process of writing macros in Clojure. I will focus on fundamental macro characteristics while explaining what happens behind the scenes. Imagine you are about to write an assertions library for Clojure, similar to FEST Assertions, ScalaTest assertions or Hamcrest. Of course such libraries already exist; this is just for educational purposes. What we essentially need first is an assert-equals function used like this: (assert-equals (count (filter even? primes)) 1) Of course this is more than trivial: (defn assert-equals [actual expected] (when-not (= actual expected) (throw (AssertionError. (str "Expected " expected " but was " actual))))) Quick test with an incorrectly defined primes vector: user=> (def primes [0 2 3 5 7 11]) #'user/primes user=> (assert-equals (count (filter even? primes)) 1) Asser…
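For comparison, a plain-function Java analogue of that assert-equals (my own sketch). A function, in any language, only sees the evaluated arguments; the point of the post's macro version is that it can additionally report the unevaluated source expression, which no function can do:

```java
public class Asserts {
    // Function equivalent of the post's assert-equals: compares values only
    public static void assertEquals(Object actual, Object expected) {
        if (!java.util.Objects.equals(actual, expected)) {
            throw new AssertionError("Expected " + expected + " but was " + actual);
        }
    }
}
```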

Mapping enums done right with @Convert in JPA 2.1

If you have ever worked with Java enums in JPA, you are definitely aware of their limitations and traps. Using an enum as a property of your @Entity is often a very good choice; however, JPA prior to 2.1 didn't handle them very well. It gave you 2+1 choices: @Enumerated(EnumType.ORDINAL) (the default) will map enum values using Enum.ordinal(). Basically the first enumerated value will be mapped to 0 in the database column, the second to 1, etc. This is very compact and works great up to the point when you want to modify your enum. Removing or adding a value in the middle, or rearranging them, will totally break existing records. Ouch! To make matters worse, unit and integration tests often work on a clean database, so they won't catch discrepancies in old data. @Enumerated(EnumType.STRING) is much safer because it stores the string representation of the enum. You can now safely add new values and move them around. However, renaming an enum in Java code will still break existing records in the DB. Even more…
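JPA 2.1's answer, per the title, is @Convert with an AttributeConverter. The conversion logic boils down to something like this sketch (the Status enum and its codes are invented for illustration; in real JPA the converter class would implement javax.persistence.AttributeConverter<Status, String> and carry @Converter(autoApply = true)):

```java
public class StatusCodes {
    // Hypothetical enum carrying an explicit, stable database code
    public enum Status {
        OPEN("O"), CLOSED("C");

        private final String code;
        Status(String code) { this.code = code; }
        public String code() { return code; }
    }

    // convertToDatabaseColumn equivalent: persist the code, not ordinal() or name()
    public static String toDb(Status s) {
        return s == null ? null : s.code();
    }

    // convertToEntityAttribute equivalent: immune to reordering or renaming constants
    public static Status fromDb(String code) {
        if (code == null) return null;
        for (Status s : Status.values()) {
            if (s.code().equals(code)) return s;
        }
        throw new IllegalArgumentException("Unknown code: " + code);
    }
}
```

Because the mapping is explicit, you can freely rename or rearrange enum constants without touching existing rows.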

Null safety in Kotlin

Kotlin is a statically typed JVM language developed by JetBrains. It has some good documentation, so today I will focus on a tiny part of it - null safety. There are at least a couple of approaches to null handling in JVM languages: Java doesn't go much further than C - every reference ("pointer") can be null, whether you like it or not. If it's not a primitive, every single field, parameter or return value can be null. Groovy has a similar background but adds some syntactic sugar, namely the Elvis operator (?:) and the safe navigation operator (?.). Clojure renames null to nil, additionally treating it as false in boolean expressions. NullPointerException is still possible. Scala is the first to adopt a systematic, type-safe Option[T] monad (Java 8 will have Optional<T> as well!). Idiomatic Scala code should not contain nulls, but when interoperating with Java you must sometimes wrap nullable values. Kotlin takes yet another approach. Ref…
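For context, Groovy's ?. and ?: (and much of what Kotlin's nullable types make safe and concise) boil down to the null check Java forces you to write by hand (my own trivial example):

```java
public class ElvisDemo {
    // What  s?.length() ?: -1  collapses to in plain Java
    public static int lengthOrDefault(String s) {
        return s != null ? s.length() : -1;
    }
}
```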

Lazy sequences implementation for Java 8

I just published the LazySeq library on GitHub - the result of my recent Java 8 experiments. I hope you will enjoy it. Even if you don't find it very useful, it's still a great lesson in functional programming in Java 8 (and in general). Also, it's probably the first community library targeting Java 8! Introduction: a lazy sequence is a data structure that is computed only when its elements are actually needed. All operations on lazy sequences, like map() and filter(), are lazy as well, postponing invocation up to the moment when it is really necessary. A lazy sequence is always traversed from the beginning using very cheap first/rest decomposition (head() and tail()). An important property of lazy sequences is that they can represent infinite streams of data, e.g. all natural numbers or temperature measurements over time. A lazy sequence remembers already computed values, so if you access the Nth element, all elements from 1 to N-1 are computed as well and cached. Des…
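The core trick can be sketched in a few lines (a hypothetical minimal cons cell, not the actual LazySeq API): the head is computed, while the tail is a Supplier that is forced at most once and then memoized.

```java
import java.util.function.Supplier;

public class LazySeqSketch {
    public static final class Cons<T> {
        final T head;
        private Supplier<Cons<T>> tailSupplier;
        private Cons<T> tail;                    // cached once computed

        Cons(T head, Supplier<Cons<T>> tailSupplier) {
            this.head = head;
            this.tailSupplier = tailSupplier;
        }

        Cons<T> tail() {
            if (tailSupplier != null) {
                tail = tailSupplier.get();       // force and memoize, exactly once
                tailSupplier = null;             // drop the closure so it can be GC'd
            }
            return tail;
        }
    }

    // Infinite sequence of natural numbers starting at n - nothing beyond
    // the head exists until tail() is called
    public static Cons<Integer> naturals(int n) {
        return new Cons<>(n, () -> naturals(n + 1));
    }
}
```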

Java 8: CompletableFuture in action

After thoroughly exploring the CompletableFuture API in Java 8, we are prepared to write a simplistic web crawler. We solved a similar problem already using ExecutorCompletionService, Guava ListenableFuture and Scala/Akka. I chose the same problem so that it's easy to compare approaches and implementation techniques. First we shall define a simple, blocking method to download the contents of a single URL: private String downloadSite(final String site) { try { log.debug("Downloading {}", site); final String res = IOUtils.toString(new URL("http://" + site), UTF_8); log.debug("Done {}", site); return res; } catch (IOException e) { throw Throwables.propagate(e); } } Nothing fancy. This method will later be invoked for different sites inside a thread pool. Another method parses the String into an XML Document (let me leave out the implementation; no one wants to look at it): private Document parse(St…
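The overall shape of such a pipeline looks like this (my own sketch with stand-in functions instead of real networking and XML parsing): supplyAsync() runs the blocking download in the pool, and thenApply() chains the parsing step without blocking the caller.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

public class PipelineDemo {
    // download -> parse, all off the calling thread
    public static CompletableFuture<Integer> process(String site, ExecutorService pool) {
        return CompletableFuture
                .supplyAsync(() -> "<html>" + site + "</html>", pool) // stand-in for downloadSite()
                .thenApply(String::length);                           // stand-in for parse()
    }
}
```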

Java 8: Definitive guide to CompletableFuture

Java 8 is coming, so it's time to study the new features. While Java 7 and Java 6 were rather minor releases, version 8 will be a big step forward. Maybe even too big? Today I will give you a thorough explanation of a new abstraction in JDK 8 - CompletableFuture<T>. As you all know, Java 8 will hopefully be released in less than a year, therefore this article is based on JDK 8 build 88 with lambda support. CompletableFuture<T> extends Future<T> by providing functional, monadic (!) operations and promoting an asynchronous, event-driven programming model, as opposed to the blocking style of older Java. If you open the JavaDoc of CompletableFuture<T> you are surely overwhelmed: about fifty methods (!), some of them extremely cryptic and exotic, e.g.: public <U,V> CompletableFuture<V> thenCombineAsync( CompletableFuture<? extends U> other, BiFunction<? super T,? super U,? extends V> fn, Executor executor) Don't worry, but keep r…
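That signature is less scary in use. Its simpler sibling thenCombine() (same semantics, minus the explicit Executor) waits for both futures and merges their results with a BiFunction - a small example of my own:

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
    // Both futures run independently; the BiFunction fires once both complete
    public static CompletableFuture<String> combined() {
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 2);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 3);
        return a.thenCombine(b, (x, y) -> x + "+" + y + "=" + (x + y));
    }
}
```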

Synchronising Multithreaded Integration Tests revisited

I recently stumbled upon the article Synchronising Multithreaded Integration Tests on Captain Debug's Blog. That post emphasizes the problem of designing integration tests when the class under test runs business logic asynchronously. This contrived example was given (I stripped some comments): public class ThreadWrapper { public void doWork() { Thread thread = new Thread() { @Override public void run() { System.out.println("Start of the thread"); addDataToDB(); System.out.println("End of the thread method"); } private void addDataToDB() { // Dummy Code... try { Thread.sleep(4000); } catch (InterruptedException e) { e.printStackTrace(); } } }; thread.start(); System.out.println("Off and running..."…
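One common remedy for such tests is to hand the test a synchronisation primitive, such as a CountDownLatch, and wait on it instead of sleeping and hoping (my own minimal sketch, not the code from either post):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {
    // The test waits deterministically for the worker, with a safety timeout
    public static boolean runAndAwait() {
        CountDownLatch done = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            // ... asynchronous business logic would run here ...
            done.countDown();                        // signal completion
        });
        worker.start();
        try {
            return done.await(5, TimeUnit.SECONDS);  // true if the worker finished in time
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```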

Lazy sequences in Scala and Clojure

Lazy sequences (also known as streams) are an interesting functional data structure which you might never have heard of. Basically a lazy sequence is a list that is not fully known/computed until you actually use it. Imagine a list that is very expensive to create and that you don't want to compute too much of - but you still want to allow clients to consume as much as they want or need. It is similar to an iterator; however, iterators are destructive - once you read them, they're gone. Lazy sequences, on the other hand, remember already computed elements. Notice that this abstraction even allows us to construct and work with infinite streams! It's perfectly possible to create a lazy sequence of prime numbers or the Fibonacci series. It's up to the client to decide how many elements they want to consume - and only that many are going to be generated. Compare it to an eager list - which has to be precomputed prior to first usage - and an iterator - which forgets about already computed values. Remember however…
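Java 8's Stream shares the laziness, though not the memoization - a Stream is one-shot, like an iterator, rather than a reusable sequence. Still, it handles infinite sources nicely; for instance, a Fibonacci series consumed on demand (my own example):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FibStream {
    // Infinite Fibonacci series; only the first n elements are ever computed
    public static List<Long> fib(int n) {
        return Stream.iterate(new long[]{0, 1},              // seed pair (a, b)
                        p -> new long[]{p[1], p[0] + p[1]})  // slide the window
                .limit(n)                                    // laziness: stop after n
                .map(p -> p[0])
                .collect(Collectors.toList());
    }
}
```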