A tale of two mediatypes

I’ve spent the last several weeks picking up something I started back in 2015. Way back then, in the airport departing SpringOne, I started working on the third mediatype for Spring HATEOAS. There were already two: the original one based on pure Jackson and HAL.

This was Collection+JSON, a mediatype created by Mike Amundsen. As a traveling consultant, speaker, and author, Mike has penned several mediatypes while also advocating for evolvable, adaptable RESTful architectures. Working on this one, I made considerable progress, though much of that progress was really me learning how Jackson worked.

When your pull request gets turned down

You see, Spring HATEOAS doesn’t just crank out hypermedia. We have a collection of domain classes that represent a neutral format. Your Spring MVC controllers can actually be used to generate multiple forms of hypermedia. At least, that’s the idea, and in its formative six years of development, that premise hasn’t been dropped.


So after supposedly finishing the job and submitting it to the project lead, Oliver Gierke, I wondered why my work wasn’t getting merged. I chatted with him and learned of another effort afoot that was blocking its acceptance: Affordances. Another group of people were working on a very deep, extensive enhancement to Spring HATEOAS, and Oliver didn’t want to complicate things by having yet another mediatype in the baseline.

Two years later, when I rejoined the Spring Data team, I picked up the mantle of Spring HATEOAS. Oliver’s responsibilities had grown to include becoming the official lead of the Spring Data umbrella. One of my biggest undertakings was to start reading through this Affordances API handiwork and bring it forward. Doing so required months of reading, testing, experimenting, polishing, and coding.

When your pull request gets turned down…again!

My months of effort were undone by the inescapable fact that the code had suffered major feature creep. It had also been forked into a couple other branches. Despite months of massaging a pull request that had over 200 commits and getting all of its tests to pass, Oliver and I agreed to shelve that work in favor of starting over.

Yes, I proposed starting over and trying to implement the same concept from scratch. If you want a hint of how long this took, you can see it in the commit logs. Because we squash pull requests, the timestamps go back to the original first commit. This image shows code merged on November 29th, yet the commit for the Affordances work dates back to July 13th.

Ultimately, I crafted a much lighter weight API that allows chaining multiple Spring MVC methods together. With that, Spring HATEOAS can unpack these related links into the right format for each mediatype. For example, HAL-FORMS looks at any POST or PUT methods, finds the input type, and extracts its properties.

Testing an API by using it with another mediatype

So my big test with Collection+JSON was to take this two-year-old effort, rebase it against all of Spring HATEOAS’s improvements, and see if a completely different layout of JSON could be derived from the same vendor-neutral representations, INCLUDING this new Affordances API.

Step one was to rebase this entire chunk of code. Since it had previously been written against Java 6 and Spring HATEOAS was now on Java 8, that alone was a bit of effort. Several contracts had been altered, meaning not all the implementations still worked. I had to hammer that out.

Then in the midst of revamping things and reading the spec in detail, I noticed something unfortunate: I had completely misread the format of the data field. I thought turning a POJO into a simple JSON object was okay:

It looked like Collection+JSON’s “data” field would let any type of nested domain object get turned into a simple JSON structure.

Turns out, Collection+JSON requires a very explicit array of key-value pairs:

This is not nestable unless done out-of-band.
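To make the difference concrete, here are two illustrative payload fragments (reconstructed for illustration; the field values are made up). This is the nested-object shape I had wrongly assumed would work:

```json
{
  "collection": {
    "items": [
      { "href": "/employees/1", "data": { "name": "Frodo", "role": "ring bearer" } }
    ]
  }
}
```

And this is what the spec actually requires, a flat array of name/value pairs:

```json
{
  "collection": {
    "items": [
      {
        "href": "/employees/1",
        "data": [
          { "name": "name", "value": "Frodo" },
          { "name": "role", "value": "ring bearer" }
        ]
      }
    ]
  }
}
```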

Additionally, it required that I code up the ability to extract property names and values from objects for serialization, and go the other way for deserialization. This is kind of like serializing inside the serializer. Yech! But I got it done. So after several weeks, I managed to rewrite half if not more of my original code and get things working.
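That “serializing inside the serializer” step can be sketched with plain bean introspection (an illustrative stand-in, not the actual Spring HATEOAS code):

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: flatten a POJO's properties into Collection+JSON style
// name/value pairs, the way a mediatype serializer must before handing
// the result to Jackson.
public class DataExtractor {

	public static List<Map<String, Object>> toData(Object bean) throws Exception {
		List<Map<String, Object>> data = new ArrayList<>();
		for (PropertyDescriptor property : Introspector
				.getBeanInfo(bean.getClass(), Object.class).getPropertyDescriptors()) {
			Map<String, Object> pair = new LinkedHashMap<>();
			pair.put("name", property.getName());
			pair.put("value", property.getReadMethod().invoke(bean));
			data.add(pair);
		}
		return data;
	}

	// A throwaway POJO to extract from.
	public static class Employee {
		public String getName() { return "Frodo"; }
		public String getRole() { return "ring bearer"; }
	}
}
```

Deserialization is the same trick in reverse: walk the name/value pairs and push each value back into the matching property.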

Next step–Affordances!

I had to hook into the Affordances API and generate custom things pursuant to this new mediatype. Two years ago, I was very aware of Collection+JSON’s “template” and “queries” sections. And I hadn’t a clue how to populate them. Until now.

This would be the moment of truth for how well I had designed this API for HAL-FORMS. Turns out, the API was quite well suited to handling this.

When coding the Affordances API, Oliver had recommended creating an AffordanceModel and an AffordanceModelFactory, allowing mediatype specifics to be neatly encapsulated behind an incredibly simple interface until the mediatype’s serialization code could unpack it. Turns out that was a very good decision.

Debugging the code, I could see that a model was now being generated for both Collection+JSON and HAL-FORMS, even though only one of them is needed in any given situation. Most importantly, this complex, detailed code stays encapsulated and handily doesn’t leak into unnecessary areas.

I had allowed some Spring MVC specifics to leak into my HAL-FORMS handlers. That was a no-go, because the whole idea is to have things be tech neutral at this point.

One of my last tests was to have one Spring MVC controller wired up, and have it serve queries for two different mediatypes.

With this in place, I began to engage in classic jürgenization. This has taken me several days. I discovered a missing pair of unit tests (serializing and deserializing a custom extension of ResourceSupport) and hence a missed critical corner case. Crisis averted, I have pushed the final set of commits to Travis CI and submitted it to Oliver for review.

Using one mediatype to develop another

Having built HAL-FORMS, I enjoyed leveraging all the lessons learned to build up Collection+JSON. It gave me a starting point for the tests that needed to be written, and a nice, repeatable pattern for the serializers and deserializers that also had to be tackled.

And having just recently performed the work for HAL-FORMS, I was ecstatic that when I solved a particular problem for Collection+JSON, I was able to refactor HAL-FORMS to use the same solution, ensuring consistency between the two.

Every time I work on another mediatype, the whole project gets stronger, better, more reliable. That’s why I can’t wait to circle back and resume my efforts with UBER Hypermedia. Most of the work is done. Or at least, that’s how it appeared when I was last there!

But I wanted to let it rest, and circle back. With three mediatypes implemented (HAL, HAL-FORMS, and Collection+JSON), UBER will be a real sizzler. After that, SIREN, and then possibly XHTML.

See you then!

How TDD is wrong

I’m a pretty big test advocate. After all, it’s in my profile. So how can I say that TDD is wrong?

“Test-bitten script junky…” — opening of my profile


“Test-bitten” means I’ve been bitten by the automated testing bug. In a previous post, I mentioned having built the equivalent of a CI solution early in my career without knowing it. So how can I advocate such a heretical point of view?

The answer is subtle. To me, the benefit of automated testing is in having the tests and automating them.

  • Nowhere do I see primary benefit in writing the test first.
  • Nowhere do I see it better to write the test then write the code that solves the test.

Transitioning into my current role on the Spring team has moved me into the land of framework design. Building frameworks is quite different from building end-user apps. That’s because the code you write is meant to serve not one but many. And in the case of Spring, we’re talking millions of developers (no exaggeration).

When serving this many people, you are building APIs, implementations of APIs, and ensuring that all kinds of scenarios don’t break that API. So I often have to start writing the API first. I try to create some fields. Add the accessors I need. Try to chain stuff together. And then I begin to write test cases that poke at it.

Several Spring projects also use Project Lombok. This is a really neat toolkit that I’ve known about for years, but only in the past two years have I truly come to appreciate its power. It makes it possible to stop writing getters/setters/equals/hashCode methods, customize the visibility of accessors, define data classes, value classes, builders, and more. All with a handful of easy-to-read annotations.

Trying to write a test case first and then writing a Lombok-based class is ineffective. I’d rather create the class first and then use its pre-packaged structure in the unit test. Using Lombok this way ensures a lot of very consistent structure that makes the overall API easier to consume. For example, its @Builder annotation produces an API that looks like this:

This example is the builder I defined for the hypermedia format of Collection+JSON. It lets you lay out all the parts of a CollectionJson record, which is then serialized into JSON. The class behind it looks like this:

This class has several things, so let’s take it apart.

  • @Data and @Value create the getters and setters, along with equals, hashCode, and toString methods. This is the core of a Java object, but not the bits directly needed for a builder.
  • @Builder creates the fluent builder shown earlier, with collectionJson() as the static method that creates a new instance of this class.
  • @JsonCreator is simply used to connect Jackson to this class when it comes to deserialization.
  • And because the Item class is also a builder, I have item() as a convenience method to dive into this “sub” builder.

That’s it! This class is highly coherent because there is little “code” as in logical stuff being done. Instead, it’s mostly declarative. This class isn’t buried in logic, because it’s focused on defining a data model.

I can’t imagine noodling my way through this in a unit test and then trying to bend Lombok to support my test case. Like I said, it’s easier to define all the properties (version, href, links, and items), flag the class as a Builder, Data, and Value class, go into the unit test code, and start using it. Avoid much heartache.
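For a sense of what @Builder saves you from writing, here is a hand-rolled sketch of the same fluent pattern (illustrative names only; the real CollectionJson class in Spring HATEOAS has more parts):

```java
import java.util.ArrayList;
import java.util.List;

// Hand-written equivalent of what Lombok's @Builder and @Value generate:
// an immutable value class with a fluent builder and a static entry point.
public final class CollectionJsonSketch {

	private final String version;
	private final String href;
	private final List<String> links;

	private CollectionJsonSketch(String version, String href, List<String> links) {
		this.version = version;
		this.href = href;
		this.links = links;
	}

	public String getVersion() { return version; }
	public String getHref() { return href; }
	public List<String> getLinks() { return links; }

	// Static entry point, analogous to the collectionJson() method above.
	public static Builder collectionJson() { return new Builder(); }

	public static final class Builder {
		private String version = "1.0";
		private String href;
		private final List<String> links = new ArrayList<>();

		public Builder version(String version) { this.version = version; return this; }
		public Builder href(String href) { this.href = href; return this; }
		public Builder link(String link) { this.links.add(link); return this; }
		public CollectionJsonSketch build() {
			return new CollectionJsonSketch(version, href, links);
		}
	}
}
```

With Lombok, everything below the fields disappears into a few annotations, which is exactly why writing the class first and the tests second feels natural.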

And because I can still submit pull requests with gobs of test cases, achieving 100% coverage of things, I see little value in this “test first” approach advocated by TDD.

So…am I wrong? In what way? Jump in on the comments, because I’m dying to hear.

Why I converted a Gradle project to Maven

It may sound quirky, but I had finally HAD IT with a project I manage: Spring Web Services. There were several aspects of the build file that kept getting in my way.

Well, I was about to embark upon a critical change, and Gradle pushed me over the edge. To the point that not only did I convert to Maven, but I ALSO moved the reference documentation from DocBook to Asciidoctor.

So what’s wrong with Gradle?

I know this is controversial, but I have worked with many projects. On the Spring team, it’s a common paradigm that if you need either a feature or a bug fix applied to some aspect of the portfolio, you check it out and submit a patch.

The community embraces this, and sometimes, if you really need something, the fastest route is to do it yourself.

Spring Framework, Spring Security, Spring Cloud Stream, Spring Session, Spring Session MongoDB, Spring Data Commons, blah blah blah. Just a handful of the projects I have dipped my toe in, some more than others.

Many use Maven, many use Gradle. I don’t mind Gradle for the occasional patch request. But when it came to an up-and-coming switch Spring Web Services had to make, Gradle wasn’t going to cut it.

Spring Web Services was going to create a new major release. We had supported 2.4 for a couple years, and 3.0 was about to get launched. This was the version where I could bump up the minimum to Java 8, Spring Framework 5, and other key 3rd party libraries.

That wasn’t the tricky part. The fact that we had to maintain the 2.4 branch at the same time meant I needed a simpler build model.

Gradle comes loaded with a programming model that lets you declare things, and then hook in extra code to tailor things. And that is the problem!

Declarative vs. programmatic

Maven has this pattern of declaring what you want, and it goes and carries it out. You declare things like:

  • Artifact coordinates
  • Dependencies
  • Plugins

Because people frequently need to adjust settings for different scenarios, Maven supports the concept of profiles.

Gradle starts with the concept of a declarative build system as well. You can also declare:

  • Artifact coordinates
  • Dependencies
  • Plugins

But guess what? No first class support for profiles. Most people say, “no problem! Just code it.” Gradle uses Groovy, so you can write a little glue code, and BAM you have the concept of profiles.

Why do I need profiles? To make backwards compatibility maintainable, I always test each patch against the latest GA release of Spring Framework, as well as the latest snapshot release and the next major version. Each of these, captured in a separate profile, allows me to command my CI server to run every single patch across multiple versions of Spring Framework/Spring Security, and verify I’m not breaking anything.
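In Maven, that arrangement is pure declaration. A sketch of what such profiles look like in a pom.xml (the version numbers here are placeholders, not the actual Spring Web Services build):

```xml
<profiles>
  <!-- Default: latest GA release of Spring Framework -->
  <profile>
    <id>spring-ga</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <spring.version>4.3.x.RELEASE</spring.version>
    </properties>
  </profile>
  <!-- Latest snapshot: mvn test -P spring-snapshot -->
  <profile>
    <id>spring-snapshot</id>
    <properties>
      <spring.version>4.3.x.BUILD-SNAPSHOT</spring.version>
    </properties>
  </profile>
  <!-- Next major version: mvn test -P spring-next -->
  <profile>
    <id>spring-next</id>
    <properties>
      <spring.version>5.0.x.RELEASE</spring.version>
    </properties>
  </profile>
</profiles>
```

The CI server just invokes the same build once per profile, and the IDE picks the profiles up for free.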

Gluing this together with Groovy code was okay when I maintained a single branch. But moving to maintaining two branches (master for the 3.0 stuff and 2.x for the 2.4 stuff), that frankly wasn’t going to cut it. You see, Maven profiles are supported inside IntelliJ IDEA, but glue code that emulates the same concept in Gradle doesn’t work AT ALL. It drives me to use the command line when my IDE should be supporting me.

This isn’t the only thing I’ve seen Gradle come short on where one must write code in their build file.

Frankly, I don’t WANT to write code there. That’s what I was able to do with Ant, and Ant drives one to spend too much time in the build file. Gradle is by no means as bad as Ant, but it seems a tad ant-ish.

Anything else?

Since you asked, there is another irritant. Gradle has plugins, and that is great. Except that the blocks they add to your build file are at the global level. That means any construct you spy with settings inside it doesn’t identify the plugin it came from. You want to debug some piece of the build? You can’t jump to the plugin’s docs. You must first rummage around Google to determine which plugin is at hand.

To drive my point home, I’ve checked out projects where the developer built a handful of custom Gradle plugins. That’s nice and handy, but it forces me to READ every single plugin to know what’s happening.

Frankly, I’d prefer a block of XML, where the coordinates of the plugin AND its configuration details are RIGHT THERE. Maven may not offer the same magic of codability, but I fear that magic is overshadowed by a lack of discoverability.

Why the hate?

Okay, I know that Gradle is pretty cool out there. Many have been driven over the cliff by Maven. But to be honest, I haven’t seen enough payoff to warrant sticking with Gradle.

Since I could be wrong, feel free to dive into the comments and give me your pluses and minuses about Gradle vs. Maven!

Learning Spring Boot 2nd Edition released!

I’ve been working on this for over a year, and today is the day. Learning Spring Boot 2nd Edition released!

What’s in it? A whole lot:

  • Web Development
  • Data Access
  • Testing
  • Developer Tools
  • AMQP Messaging
  • Microservices
  • WebSockets
  • Security
  • more!

To top it off, the WHOLE BOOK is written using the new Reactor paradigm found in Spring Framework 5.0. This means Reactive Web, Reactive Data Access, Reactive Messaging, Reactive WebSockets, and even Reactive Security, all groundbreaking technologies.

In fact, the following key technologies are used throughout the book:

  • Spring Framework 5.0.0.RELEASE
  • Spring Data Kay-RELEASE
  • Reactor Bismuth-RELEASE
  • Spring Boot 2.0.0.M5
  • Spring Cloud Finchley.M3

There isn’t any other book on the market like this. And not only is this a fantastic stack, but in each chapter, we build up a social media platform. Chapter by chapter, we add critical features for demonstration purposes, and in the process, are able to learn how to do these things reactively, in ways you can easily adapt to the problems you are solving today.

In short, not only is it groundbreaking, it’s practical and pragmatic. Coding examples you can put to work immediately in whatever you are trying to solve.

It may take a little bit of time for it to get pushed all the way to Amazon, so stay tuned.

Happy reading AND happy coding!

How @ATT almost conned me into buying fiber internet at my house

Last week, a salesman from AT&T knocked on my door. I was ready to shoo him away, as I had done for the four years since moving in, but he indicated he was here to sell AT&T Fiber.


The only salesmen I’d seen to this point were hawking AT&T DSL or, as they rebranded it, AT&T Uverse. I work at home. I absolutely do NOT need the terrible latency of internet traveling over copper phone lines. So discovering that fiber was available, simply not offered due to capital overhead, stirred my interest.

I asked many pointed questions, like why now? He indicated that fiber was in some of these newer neighborhoods, but to hook it up back at the center took a bit of capital outlay, so they had waited until there was enough consumer interest. We switched to price, and I discovered I would save $100/month off my cable internet bill.

I signed on the dotted line.

That was last Friday. This is today, an hour after the service tech has left my house.

You see, she showed up at noon to wire things up. I kept waiting for her to start digging the trench the sales guy had indicated to run the fiber line to my house. Always interested, I asked her about it.

“I don’t need to do that.”

I blinked. “What? Don’t you have to dig a trench to hook up the fiber?”

“There’s no fiber in this neighborhood.” She didn’t hold anything back.

“What are you hooking up?”

“AT&T Uverse. 50 down/10 up.”

That is NOT what this guy said. I had asked him about 1000 megabit service, and he had nodded at me. In fact, when the work order arrived in my inbox an hour after he left, it read “ATT 50”. Perplexed, I had called him and asked, “Why does it say 50?” His response? “It says 50, but the tech will hook up fiber.”

Well that was a direct lie. To my proverbial face.

The tech nodded at me. “I hear that twice a week. There’s fiber running to the street corner.” She pointed at the major intersection 400 feet away. “But people just hear ‘fiber’ and go for it. We’ve complained to their sales managers to try and put a stop to that.”

The tech was real gracious. She stopped while I left a voicemail for the sales guy who had conned me big. After packing up, she actually asked for the name of the sales manager so she could give it to her manager. As if that will help. The guy was a bit young, so he was probably doing the bidding of his manager.

Nevertheless, I feel bad for the non-tech people all around me. All these friends and neighbors of my community getting ripped off by these lies.

And I’m a little embarrassed that I got taken in. But I’m actually more mad than anything. Mad enough to blog about it and hopefully tip off someone else.

So in the meantime, burn AT&T. You earned it with your slimy Empire-like sales tactics.

Why every press needs a code monkey

It’s no secret that I’m writing a novel. (If you HAVEN’T heard this, then I’m just not communicating very well!) It’s scheduled for release in March with a relatively small press called Clean Reads.

And its owner wears many hats. An author herself, the owner of Clean Reads is also an editor, content manager, graphic designer, mother, aunt, and many other things. But I think you get the idea. She is fundamentally running the press herself with a handful of freelance editors and cover artists.

Anything missing in there? Oh yeah. Her press has a website. Did I mention she’s a code geek? A bit burner? A web designer? No? Well, that’s because she isn’t. So what has happened because of this? The same thing that happens to every business caught up in the 21st century while stuck with 20th-century business practices.

She has been doing the best she can with Facebook, Twitter, and Instagram. Ahh, the good ole social media triumvirate. We keep hearing word that you must get on social media if your business is to succeed.

Well that’s one big lie.

Social media is a channel to reach people, but since the inception of Facebook, it has steadily gone downhill. How? In the beginning, when someone “liked” your group, they would then get EVERYTHING you posted. Today? They might get 10%. Unless you pay Facebook a little dinero.

People that built up empires of followers on Facebook years ago watched their sales plummet as they could no longer REACH their followers. That is, until Facebook started extorting…err…charging money to reach YOUR followers. Evidence that they aren’t YOUR followers but Facebook’s.

My wife’s research into social media revealed one author that took a three month break from writing. To do what? That’s right, focus on facebooking, tweeting, and instagramming. And in three months of effort, got maybe a 30% reach of her social media followers. Yikes!

This is why it is SO important to build a mailing list. A website you control and a mailing list that reaches EVERYONE is the most important thing you can build for your business. And if you don’t know how to stand up a website and configure it to harvest emails, you will be left in the cold.

At the Kentucky Christian Writer’s Conference, I met my publisher face to face. We chatted over lunch. And in the midst of that, I learned that Stephanie needed help raising her website from the ashes of a previous outage. They say never pass up an opportunity, so I haggled out a deal: she’d remove a certain criterion she had for going to paperback, and I’d get her website operational. With a handshake, I got on the ball.

Since then, we have loaded over four hundred backlist titles along with cover art and e-books. Visit the site and you can BUY books. Join our newsletter (trust me, you’ll get the opportunity if you visit), and you’ll get a super secret coupon code to buy ANYTHING at 20% off.

I’m not here so much to hawk that site, as to instead hawk the idea that when you form a business, whether it’s a publishing press or ANYTHING ELSE, you need SOMEONE that can do this stuff. I know WordPress. I’ve used it to run my blog for years. It’s not perfect, but it gets the job done. With a little sweat equity and some time reading tips and tricks, you can really put out a decent store for your business.

If you are even THINKING of running a business in the 21st century you need to:

  • Learn how to stand up and run a WordPress site (an independent consultant could easily cost you $10,000)
  • Start building a newsletter
  • Use Facebook and Twitter. Don’t let them use you. The idea is to make money for YOU, not be the one making money for THEM.

Stay safe out there!

Learning Spring Boot – 2nd Edition Rewrite is complete

I have finished updating every chapter of this book to the Reactor-based paradigm. And boy did it take a lot!

In case you missed it, when I embarked upon writing this book for Packt Publishing, I decided to start from scratch. The previous edition, as much as I enjoyed it, wasn’t bold enough. This time, I wanted to build an application chapter by chapter, and make it as real as possible.

So I dug into a demo app I had used at several conferences called Spring-a-Gram. That app started from my desire to snap a picture of the audience and upload it to the website, LIVE. Great eye candy, right? Along the way, I learned lots of aspects of the Spring portfolio that this app could leverage.

And when faced with the prospect of writing a new title on Spring Boot, I picked up and ran with that app. Only this time, EVERYTHING would be done asynchronously and without blocking, i.e. with Reactor, Pivotal’s implementation of the Reactive Streams spec.

So this book is stocked with asynchronous web, data access, AMQP messaging, WebSockets, microservices, security, metrics, developer tools, and production tips. A solid cross section of what developers need to build apps.

The trick with this lofty goal is that when I started writing about 15 months ago, there was no Spring Boot 2.0. Spring WebFlux was being scratched out in an experimental repository. Spring Boot WebFlux starters appeared later in a different repository.

Talk about taking on a risk!

So I started writing the manuscript using Boot 1.4 and servlets, with plans to rewrite everything once the Reactor bits were available. That has been a LOT of effort, but worth it.

Because I’m quite proud of being able to show people how to build apps rooted in Spring, the de facto Java toolkit for application development, but now sitting atop Project Reactor, the bedrock of Reactive Streams apps.

Until yesterday, we were focusing on Spring Boot 2.0.0.M4 as the release target, but decided to wait for 2.0.0.M5 coming in a couple weeks, with plans to polish, revise, and release around the second week of October.

Hopefully we can hold onto that release date! There appears to be high demand for what will be the first book on Spring WebFlux as well as Project Reactor to hit the market.

I’m excited to get this book into everyone’s hands and watch as people begin to write scalable apps with Reactor on top of the Spring portfolio.

So stay tuned! The end is in sight.

OOP is dead, long live OOP

Raise your hand if you remember the golden hammer of Object Oriented Programming. You know, where you only have to code something once and then reuse it in all your subclasses? Remember those days?

Maybe this is what they still teach in dusty lecture halls amidst today’s Computer Science departments. But if you have spent any significant time coding in the trenches, you will have come to realize the lie that this mantra is.

You see, grandiose hierarchies of objects become nigh impossible to manage over time. At a certain point, bolting one more subclass onto the grand structure is deemed impossible, and you must fork the solution. Instead, there is a different structure out there that we must adopt. And Spring is the platform that carries its banner.

Interface Oriented Programming. Let’s call it IOP since there aren’t enough acronyms in our industry. IOP is the premise that different slices and layers should talk to outsiders through a nice, clean interface. The backing service on the other side should provide a concrete implementation, but the caller need not know about it.

Why do I mention Spring as the champion of IOP? Because Rod Johnson’s foray onto the Java scene was to craft a dependency injector whereby this interface-based contract could be satisfied. Before I learned of Spring, the concept of Java interfaces was foreign to me. Sure, they were mentioned in my college textbook Java in a Nutshell (dating myself?), but I saw little value in using them. Why?

Because when you are “newing” everything yourself, there appears to be little value in defining an interface and then assigning the object to it. You already know what it is! What is to be gained from all the extra typing? But delegate that task to a DI container, and suddenly the cost vanishes. Expressing dependency graphs between beans with interfaces becomes much smoother when the container takes over the job of creating everything.

This goes along with the Liskov Substitution Principle, where you can plug in any implementation without the caller knowing what it is. Now sometimes people drag out the old square-is-a-rectangle example. I hate that one because geometry is a terrible domain to model semantic software concepts.

Digging in, interfaces don’t have to be complex. Instead, think of a handful of “getters” and go from there.
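A sketch in that spirit (hypothetical names, not the actual Spring HATEOAS types): the contract is nothing but getters, so no web framework leaks in.

```java
// Illustrative only: a framework-neutral contract made of simple getters.
// Spring MVC, Spring WebFlux, or JAX-RS specifics live in the concrete
// implementations, never in the interface itself.
public class AffordanceSketch {

	public interface Affordance {
		String getHttpMethod(); // e.g. "PUT" or "PATCH"
		String getName();       // logical name, e.g. "updateEmployee"
	}

	// Stand-in for a framework-specific implementation hidden behind the contract.
	public static Affordance of(String httpMethod, String name) {
		return new Affordance() {
			@Override public String getHttpMethod() { return httpMethod; }
			@Override public String getName() { return name; }
		};
	}
}
```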

This fragment is part of an ongoing effort to add Affordances to Spring HATEOAS. Since we might support multiple web frameworks, having a clean, unencumbered interface is critical for getting started. It also helps avoid saddling the interface with details from Spring MVC, Spring WebFlux, or JAX-RS. Instead, this interface avoids all of that, forcing the concrete details to be nicely contained apart from each other.

Abstract classes and long hierarchies are often tricky to evolve over time, so I try to dodge that as much as possible. Composition of objects through IOP-driven strategies tends to be more amenable to change. Having said all that, what happens when you need this?

This is ONLY an abstract class because the project is currently on Java 6, and we can’t plug in a default implementation for supports(). With Java 8, this whole thing can be rolled back into an interface. Abstract classes vs. default interface methods aside, this is the NEW OOP. Do as much as you can to code one interface to many classes, avoiding intermediaries. Sometimes it’s unavoidable, but don’t give up too fast.
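The shift reads like this (schematic code, with String standing in for the real MediaType): the Java 6 abstract class exists only to share supports(), while Java 8 folds the same logic back into the interface as a default method.

```java
public class SupportsDemo {

	// Java 6 era: an abstract base class whose only job is sharing supports().
	static abstract class AbstractModelFactory {
		abstract String getMediaType();
		boolean supports(String mediaType) { return getMediaType().equals(mediaType); }
	}

	// Java 8: the abstract class disappears into a default interface method.
	interface ModelFactory {
		String getMediaType();
		default boolean supports(String mediaType) {
			return getMediaType().equals(mediaType);
		}
	}

	public static void main(String[] args) {
		ModelFactory halForms = () -> "application/prs.hal-forms+json";
		System.out.println(halForms.supports("application/prs.hal-forms+json")); // true
	}
}
```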

Let IOP be your OOP.

And the more you learn about the whole code base, be willing to revisit some of those hierarchies and see if you can’t “move around” stuff to get away from the hierarchies. You might be surprised at how perky your code becomes. Some of those requested features might suddenly seem possible.

The power to say no

One of the most important things we can do is to say no. A lot of things arise in work, in life. And the hardest thing is to sometimes say no. To indicate this shouldn’t be done. To voice our objection. Not argue, just say no.

Since Berlin, I have been working on adding Affordances to Spring HATEOAS. We’re talking a feature branch that has over 200 commits from the original author. The effort was pursued a couple years ago. It stalled out because no one on the team had time to champion this effort. Another team of developers actually forked the branch and added more. Finally, I joined earlier this year and started reading this vast sum of new code.

And I read, and read. And read. And. Read. We’re talking three months of reading, editing, polishing, changing, and rebasing. Did I mention reading?

So earlier this month, Oliver and I had a Google Hangout code review. I had pushed this code as far as I could. I had polished as much as possible. But when it came time to defend it before my manager, one who maintains almost as high a coding standard as Juergen Hoeller, things quickly fell apart.

“I just see piles and piles of code,” Ollie said. And I nodded along. He was right. I didn’t want it to be true, but it was clear. He quickly spotted my sample where both a PUT and a PATCH were shown, pointing out that one method required every parameter while the other required none.

“Couldn’t we spot the method and flip that bit automatically?” he asked. More nodding from my end. He was gosh awful right. And I knew this current branch of code was never getting near the master branch.

So what did we do with three months of effort? We regrouped.

“What if we start clean? The serializers for HAL-Forms are solid. Why don’t we work on a fresh Affordance API, and see how much of Spring MVC we can leverage to find the other details?” I said.

And here I am, two weeks later, closing in on a feature branch. It includes HAL-Forms mediatype plus a VERY lightweight API for defining an Affordance.

This small bit of Spring MVC + Spring HATEOAS code is lifted from a test case.

  • It’s the handler for a GET /employees/{id} call.
  • It creates two links: self and employees.
  • Using a new method (withAffordances) and a new utility method (byLink), it has access to everything about the PUT and the PATCH operations, and can glean the attributes needed to invoke those two methods.

And it doesn’t involve piles and piles and piles of code. I think I spent two days trying to model where these “Affordances” would live. (Ended up in the Link itself, since a Link can have one or more). I spent another day or two digging through the method invocation to build these links, and harvested extra Spring MVC details, like the route, the incoming request body’s type, and a little more.

I think I’ve spent maybe two days ensuring these bits of code don’t intrude on each other. For example, the code that reads Spring MVC annotations isn’t part of the Affordance interface. That will make it easier to write a JAX-RS version down the road. We also need to support Spring WebFlux at some point as well. And the Affordances stuff cannot leak into the mediatype.

So focusing on getting single resource, collections of resources, and pages of resources working has taken a total of maybe two weeks. (Today was the day I got pages working).

Oliver is quite excited about this. As am I. This will set the pace for implementing other hypermedia types. We currently have HAL and are adding HAL-Forms. Coming: Uber, XHTML, and SIREN. And when these arrive, they will become available to Spring Data REST.

Additionally, the handful of PRs I worked on over the past three months are ALSO getting reviewed, so there is indeed a lot of motion on Spring HATEOAS.

I’m really happy Oliver said “no”. Remember, it’s your option to say it as well. Just be ready to defend it with all your might.

Streams of messages are the way to go

I have been diligently getting all my code up to date for the release of Learning Spring Boot 2nd Edition this September. Digging into Chapter 8, WebSockets with Spring Boot, I realized I had a bigger challenge than expected.

You see, I’d chatted with Rossen, the lead developer on Spring Web. In the past, his job was overseeing Spring MVC, but in the past year, that has expanded to our new Reactive Streams story and the module Spring WebFlux. In short, Rossen informed me that there was no messaging available with WebSockets in WebFlux.

I’m not sure you’re aware of what this means. “Message” was a paradigm invented by Mark Fisher in the early days of Spring Integration, with a nice little container class called Message&lt;T&gt; that included a payload and optional headers. This was handy when it came to enterprise service buses, but people spotted the paradigm as more universally applicable.
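Conceptually, it’s just a payload wrapped with headers. Here is a toy stand-in to illustrate the shape of the idea (this is NOT Spring’s actual org.springframework.messaging.Message interface, just a sketch):

```java
import java.util.Collections;
import java.util.Map;

// Toy version of the Message<T> paradigm: an immutable payload plus optional headers.
final class Message<T> {
    private final T payload;
    private final Map<String, Object> headers;

    Message(T payload) {
        this(payload, Collections.emptyMap());
    }

    Message(T payload, Map<String, Object> headers) {
        this.payload = payload;
        this.headers = Collections.unmodifiableMap(headers);
    }

    T getPayload() { return payload; }
    Map<String, Object> getHeaders() { return headers; }
}
```

The headers are what make the paradigm so flexible: routing hints, user names, timestamps — anything can ride along without touching the payload type.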

Hence, they put a messaging layer on top of WebSockets, making it super easy to pipe messages from clients to the server and back. Anywhere in the server side code, you can get your hands on a SimpMessagingTemplate and publish a message, targeted for either a server side or client side endpoint. The runtime would handle it all.

And none of this was available with Spring 5’s WebFlux-based WebSocket solution. As Rossen said, “think of it as streams of WebSocket messages.”

That was tricky.

So I dug in and started learning the API. In the reference docs, they show how to register a WebSocketHandler. You tie that handler to a URL, and inside it you are handed a WebSocketSession. I kind of stared at this API for five minutes before noodling around with it.

Unsure about what to do, I cracked open the Spring Framework source code and started reading their unit tests. Since my chapter had a “chat” server (the de facto demo for WebSocket technology), I was ecstatic to see something similar right there in WebFlux’s unit tests. It was an EchoServer.

I copied and pasted the code into my book’s code. Tweaked the JavaScript to hook up. Fired things up, and started sending in messages. And it worked.

Until I opened a second tab sporting a different WebSocket session. That’s when I noticed that the messages posted by one tab didn’t appear in the other.

And then I KNEW what they meant by “EchoServer”. Receiving the messages on a session and sending them right back ONLY WORKS ON THE SAME SESSION.

I shook my head, remembering the fact that Spring 4’s WebSocket configuration SHOWS you configuring a broker. That’s what is needed!

I needed a broker and I didn’t have one. At this point, I had gotten barebones WebSockets registered on the server and my client connected. But to noodle this thing out, I stopped to get a coffee. As the java was brewing, so was my noggin. That’s when the light bulb went off.

I already had a broker. It was right there.

You see, I had gambled on putting Spring Cloud Stream in my book when it was in its infancy. In Chapter 7, AMQP Messaging with Spring Boot, I kick things off with Spring AMQP’s RabbitTemplate, which is great for small stuff. As is often the case, Spring’s template approach makes the AMQP APIs very pliable. They also adopted the Message paradigm from Spring Integration, so you can either send your own POJOs, or you can do it Message&lt;Pojo&gt; style, which moves your abstraction up a level, making things simpler.

But Spring Cloud Stream is even better. It moves things up even higher. You aren’t thinking in terms of message brokers. Instead, you are thinking about chaining together streams of messages. (Which, by the way, dovetails PERFECTLY with Reactor).

Whether you are using RabbitMQ or Kafka (or whatever) is simply an implementation detail. With a few property settings, you can put together any sort of messaging you want.
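For a flavor of what “a few property settings” means, the configuration follows Spring Cloud Stream’s binding convention — though the channel names and destination below are hypothetical stand-ins, not the book’s actual code:

```properties
# Bind an output channel and an input channel to the same destination.
# Whether this is a RabbitMQ exchange or a Kafka topic is decided by
# whichever binder is on the classpath -- the code doesn't change.
spring.cloud.stream.bindings.output.destination=learning-spring-boot-chat
spring.cloud.stream.bindings.input.destination=learning-spring-boot-chat

# A consumer group makes multiple instances SHARE messages instead of
# each receiving a duplicate copy.
spring.cloud.stream.bindings.input.group=comments-service
```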

And I was already doing that!

Now I’m no genius. This is stuff that, had I just spoken with Rossen and Marius (lead for Spring Cloud Stream), they would have pointed out in no time. But there is that thrill of discovering something for yourself that is unbeatable.

So I hammer away at a service that listens for WebSocket messages and pipes them into a broker via Spring Cloud Stream. (Thanks, Artem, for showing me how to do THAT with Reactor!) I code another Spring service that listens for a stream of messages coming from the broker and pipes them out to a WebSocket stream.

And I’m done. In maybe 20 minutes. (I did goof up by not pointing these two servers at the same AMQP exchange).

I fire up my system, and the chat service is working. Flawlessly. A message written in one tab is shipped over a WebSocket to the server. The server pipes it into RabbitMQ. The other service scoops up the message, and pipes it out to the WebSocket client. And this is happening for every WebSocket session that has registered. This thing is a knockout punch, because I know I architected it right.
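Reduced to a toy, the architecture looks like this: a BlockingQueue standing in for RabbitMQ, and plain lists standing in for WebSocket sessions. (The real services use Spring Cloud Stream channels and Reactor; every name here is made up for illustration.)

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Toy model: one side pipes inbound WebSocket messages into a "broker";
// the other side drains the broker and fans each message out to EVERY
// registered session -- unlike the EchoServer, which only answers its own.
final class ChatPipe {
    private final BlockingQueue<String> broker = new LinkedBlockingQueue<>();
    private final List<List<String>> sessions = new CopyOnWriteArrayList<>();

    List<String> connect() {        // a "WebSocket session" is just an outbox here
        List<String> outbox = new CopyOnWriteArrayList<>();
        sessions.add(outbox);
        return outbox;
    }

    void inbound(String message) {  // client -> server -> broker
        broker.add(message);
    }

    void drainOnce() {              // broker -> all sessions
        String message;
        while ((message = broker.poll()) != null) {
            for (List<String> outbox : sessions) {
                outbox.add(message);
            }
        }
    }
}
```

The key difference from the echo server is that delivery is driven by the broker, not by the session that happened to receive the message.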

Poof! (Plus, the concept feels totally righteous. Streams of messages, flowing through the system.)

That’s when another realization hits me. When I previously drafted this chapter, I attempted to use Spring WebSocket’s RabbitMQ option, but never could get it to properly bind to my RabbitMQ instance. Sadly, I switched to their in-memory broker. This meant the solution wouldn’t work if you ran multiple instances in the cloud.

Not so this time. Because it’s piping stuff right into RabbitMQ, it will work perfectly. (Partly thanks to Consumer Groups.)

To wrap up the chapter, I even went so far as to show off SimpMessagingTemplate’s sendToUser API. It nicely lets you send a message to a single user. I coded a little “@bob Did you get this?” magic, where it would parse the @’d user, and then convertAndSendToUser(parsedUser). Well, I had none of that API, remember?

How can I pull this off? Must be too much, right? Wrong! Since every message is traveling to everyone, and it’s using the Message&lt;T&gt; construct, it takes no effort to add a header containing the original user’s name.

The broker-to-client service can simply parse the message, compare it against the @’d user (or the sender), and decide whether or not to let it through. (Send a copy to both parties!) It’s basically an extra filter layered on top, which is exactly the type of thing Reactor makes easy to apply.

Anywho, with a solid day of work, I manage to code the entire thing, top to bottom, using RabbitMQ, WebSockets, and Project Reactor, and even have user-to-user messaging. Freakin’ wicked it was to put it all together.

And I know this is rock solid not because of what I coded, but because of the powerful underlying concepts orchestrated by Rossen, Mark Fisher, Artem, Gary, and Marius.

Can’t wait to apply security filtering in the next chapter to these WebSocket streams.