
Streams of messages are the way to go

I have been diligently getting all my code up to date for the release of Learning Spring Boot 2nd Edition this September. Digging into Chapter 8, WebSockets with Spring Boot, I realized I had a bigger challenge than expected.

You see, I’d chatted with Rossen, the lead developer on Spring Web. His job used to be overseeing Spring MVC, but in the past year, that has expanded to our new Reactive Streams story and the module Spring WebFlux. In short, Rossen informed me that there was no messaging available with WebSockets in WebFlux.

I’m not sure you’re aware of what this means. “Message” was a paradigm invented by Mark Fisher in the early days of Spring Integration, with a nice little container class called Message<T> that included a payload and optional headers. This was handy when it came to enterprise service buses, but people spotted the paradigm as more universally applicable.

Hence, they put a messaging layer on top of WebSockets, making it super easy to pipe messages from clients to the server and back. Anywhere in the server side code, you can get your hands on a SimpMessagingTemplate and publish a message, targeted for either a server side or client side endpoint. The runtime would handle it all.
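In Spring 4 terms, that looked something like this minimal sketch (the destination, payload, and class names here are my own placeholders, not code from the book):

    import org.springframework.messaging.simp.SimpMessagingTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class ChatPublisher {

        private final SimpMessagingTemplate template;

        public ChatPublisher(SimpMessagingTemplate template) {
            this.template = template;
        }

        public void publish(String message) {
            // Broadcast to every subscriber of /topic/chat...
            template.convertAndSend("/topic/chat", message);
            // ...or target a single user's queue:
            // template.convertAndSendToUser("bob", "/queue/chat", message);
        }
    }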

And none of this was available with Spring 5’s WebFlux-based WebSocket solution. As Rossen said, “think of it as streams of WebSocket messages.”

That was tricky.

So I dug in and started learning the API. In the reference docs, they show how to register a WebSocketHandler. You tie that handler to a URL, and inside it, you’re handed a WebSocketSession. I kind of stared at this API for five minutes before noodling around with it.
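To give a feel for the API, here’s a minimal registration sketch, assuming a chatHandler bean and a /chat URL of my own invention:

    import java.util.Collections;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.reactive.HandlerMapping;
    import org.springframework.web.reactive.handler.SimpleUrlHandlerMapping;
    import org.springframework.web.reactive.socket.WebSocketHandler;
    import org.springframework.web.reactive.socket.server.support.WebSocketHandlerAdapter;

    @Configuration
    public class WebSocketConfig {

        // Tie a URL to a WebSocketHandler bean defined elsewhere.
        @Bean
        HandlerMapping webSocketMapping(WebSocketHandler chatHandler) {
            SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
            mapping.setUrlMap(Collections.singletonMap("/chat", chatHandler));
            mapping.setOrder(10); // evaluated before annotated controllers
            return mapping;
        }

        // Adapter that lets WebFlux invoke WebSocketHandler beans.
        @Bean
        WebSocketHandlerAdapter handlerAdapter() {
            return new WebSocketHandlerAdapter();
        }
    }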

Unsure about what to do, I cracked open the Spring Framework source code and started reading their unit tests. Since my chapter had a “chat” server (the de facto demo for WebSocket technology), I was ecstatic to see something similar right there in WebFlux’s unit tests. It was an EchoServer.

I copied and pasted the code into my book’s project. Tweaked the JavaScript to hook up. Fired things up, and started sending in messages. And it worked.

Until I opened a second tab sporting a different WebSocket session. That’s when I noticed that the messages posted by one tab didn’t appear in the other.

And then I KNEW what they meant by “EchoServer”. Receiving the messages on a session and sending them right back ONLY WORKS ON THE SAME SESSION.
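A minimal reconstruction of such a handler (my own sketch, not the framework’s exact test code) makes the limitation plain; the only stream it ever sends is the same session’s own receive stream:

    import org.springframework.web.reactive.socket.WebSocketHandler;
    import org.springframework.web.reactive.socket.WebSocketSession;
    import reactor.core.publisher.Mono;

    public class EchoHandler implements WebSocketHandler {

        @Override
        public Mono<Void> handle(WebSocketSession session) {
            // Everything received on THIS session goes back out on THIS session.
            // A second browser tab has its own session and never sees a thing.
            return session.send(
                session.receive()
                       .map(msg -> session.textMessage(msg.getPayloadAsText())));
        }
    }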

I shook my head, remembering the fact that Spring 4’s WebSocket configuration SHOWS you configuring a broker. That’s what is needed!

I needed a broker and I didn’t have one. At this point, I had gotten barebones WebSockets registered on the server and my client connected. But to noodle this thing out, I stopped to get a coffee. As the java was brewing, so was my noggin. That’s when the light bulb went off.

I already had a broker. It was right there.

You see, I had gambled on putting Spring Cloud Stream in my book when it was in its infancy. In Chapter 7, AMQP Messaging with Spring Boot, I kick things off with Spring AMQP’s RabbitTemplate, which is great for small stuff. As is often the case, Spring’s template approach makes the AMQP APIs very pliable. They also adopted the Message paradigm from Spring Integration, so you can either send your own POJOs, or you can do it Message<Pojo> style, which moves your abstraction up a level, making things simpler.

But Spring Cloud Stream is even better. It moves things up even higher. You aren’t thinking in terms of message brokers. Instead, you are thinking about chaining together streams of messages. (Which, by the way, dovetails PERFECTLY with Reactor).

Whether you are using RabbitMQ or Kafka (or whatever) is simply an implementation detail. With a few property settings, you can put together any sort of messaging you want.
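For example, wiring a producer and a consumer to the same destination, complete with a consumer group, is just a handful of properties (illustrative names, not the book’s exact settings):

    spring.cloud.stream.bindings.output.destination=learning-spring-boot-chat
    spring.cloud.stream.bindings.input.destination=learning-spring-boot-chat
    spring.cloud.stream.bindings.input.group=chat-service

Swap the RabbitMQ binder for the Kafka one, and none of this changes.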

And I was already doing that!

Now I’m no genius. This is stuff that, had I just spoken with Rossen and Marius (lead for Spring Cloud Stream), they would have pointed out in no time. But there is that thrill of discovering something for yourself that is unbeatable.

So I hammer away at a service that listens for WebSocket messages and pipes them into a broker via Spring Cloud Stream. (Thanks Artem for showing me how to do THAT with Reactor!) I code another Spring service that listens for a stream of messages coming from the broker and pipes them out to a WebSocket stream.
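The inbound half boiled down to something like this sketch (simplified, with illustrative names, and not the exact code that landed in the book):

    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.messaging.Source;
    import org.springframework.messaging.support.MessageBuilder;
    import org.springframework.stereotype.Service;
    import org.springframework.web.reactive.socket.WebSocketHandler;
    import org.springframework.web.reactive.socket.WebSocketMessage;
    import org.springframework.web.reactive.socket.WebSocketSession;
    import reactor.core.publisher.Mono;

    @Service
    @EnableBinding(Source.class)
    public class InboundChatService implements WebSocketHandler {

        private final Source source;

        public InboundChatService(Source source) {
            this.source = source;
        }

        @Override
        public Mono<Void> handle(WebSocketSession session) {
            // Forward every inbound WebSocket frame to the broker as a Message.
            return session.receive()
                .map(WebSocketMessage::getPayloadAsText)
                .doOnNext(text -> source.output().send(
                    MessageBuilder.withPayload(text).build()))
                .then();
        }
    }

The outbound service is the mirror image: subscribe to the broker’s channel and pipe each message into session.send().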

And I’m done. In maybe 20 minutes. (I did goof up by not pointing these two servers at the same AMQP exchange).

I fire up my system, and the chat service is working. Flawlessly. A message written in one tab is shipped over a WebSocket to the server. The server pipes it into RabbitMQ. The other service scoops up the message, and pipes it out to the WebSocket client. And this is happening for every WebSocket session that has registered. This thing is a knockout punch, because I knew I’d architected it right.

Poof! (Plus, the concept feels totally righteous. Streams of messages, flowing through the system.)

That’s when another realization hits me. When I previously drafted this chapter, I attempted to use Spring WebSocket’s RabbitMQ option, but never could get it to properly bind to my RabbitMQ instance. Sadly, I switched to their in-memory broker. This meant the solution wouldn’t work if you ran multiple instances in the cloud.

THIS SOLUTION WILL!

Because it’s piping stuff right into RabbitMQ, it will work perfectly. (Partly thanks to Consumer Groups).

To wrap up the chapter, I even went so far as to show off SimpMessagingTemplate’s sendToUser API. It nicely lets you send a message to a single user. I coded a little “@bob Did you get this?” magic, where it would parse the @’d user, and then convertAndSendToUser(parsedUser). Well, I had none of that API, remember?

How can I pull this off? Must be too much, right? Wrong! Since every message is traveling to everyone, and it’s using the Message<T> construct, it takes no effort to add a header including the original user’s name.

The broker-to-client service can simply parse the message, compare it against the targeted user or the original sender, and decide whether or not to let it on through. (Send a copy to both parties!) It’s basically an extra filter layered on top, which is why Reactor makes this type of thing so easy to apply.
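As a sketch (with a made-up header name and naive parsing), the filter is little more than this:

    import org.springframework.messaging.Message;

    public class UserFilter {

        // Let a message through if it's a broadcast, or if this session's user
        // is either the @'d target or the original sender.
        public static boolean shouldSend(Message<String> message, String currentUser) {
            String text = message.getPayload();
            if (!text.startsWith("@")) {
                return true; // untargeted messages go to everyone
            }
            String target = text.substring(1).split(" ", 2)[0];
            Object sender = message.getHeaders().get("originalUser"); // assumed header
            return currentUser.equals(target) || currentUser.equals(sender);
        }
    }

Applied with Reactor, it’s just incoming.filter(m -> UserFilter.shouldSend(m, currentUser)).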

Anywho, with a solid day of work, I manage to code the entire thing, top to bottom, using RabbitMQ, WebSockets, Project Reactor, and even have user-to-user messaging. Freakin’ wicked it was to put it all together.

And I know this is rock solid not because of what I coded. But because of the powerful underlying concepts orchestrated by Rossen, Mark Fisher, Artem, Gary, and Marius.

Can’t wait to apply security filtering in the next chapter to these WebSocket streams.

Learning Spring Boot requiring more tweaks to get off the ground

With things bearing down on my September due date for Learning Spring Boot 2nd Edition, I’ve gotten in gear and started hammering at the codebase.

For one thing, I have strived to catch up ALL the code to Spring WebFlux, no longer using the servlet-based Spring MVC. In the process of doing that, I discovered a breaking change in Spring Framework that conflicts with the latest milestone of Thymeleaf. I switched to snapshots, and moved forward.

The big change coming down the pike has been WebSocket support. In Spring 4, there is incredibly detailed support for various messaging protocols, messaging brokers, and all sorts of goodies involving Spring’s WebSocket support.

With Spring WebFlux, they had to start from scratch and build things up. Suffice it to say, the API for streaming WebSocket messages in and out is very basic. Hence, I’m rewriting the code in Chapter 8 to leverage this, chucking aside things like STOMP and SockJS. I rewrote the server-side code thanks to the project lead of Spring Integration, and revved up to test it out, only to get stymied by something else.

Somehow, Reactive Web was switched off in favor of Servlet Web.

Huh???

Spending a couple hours on Slack with some super sharp teammates, I uncovered a dependency from Spring Cloud Netflix that pulled in Tomcat along with another transitive dependency on Spring MVC. Excluding them got things humming again on Netty, only to find a bug in Thymeleaf caused by Spring WebFlux + Spring Cloud Stream.

Sigh. I updated the issue filed with Thymeleaf asking about another release to see how to move forward. Until then, this book is paused. Yet again.

When that is fixed, hopefully I can push forward and get the last bits in line.

Cheers!

Facebook marketing ain’t what it used to be

I have recently been learning a bunch about marketing. My wife has started marketing her fourth novel. Along the way, we have also started boosting Facebook posts about the freebie opportunities. And in my quest to better understand these types of campaigns, I have stumbled into a concept called “organic reach”. That is cute language for “reaching people without spending a nickel.”

“Organic reach on Facebook has been in decline for a few years now and has almost hit zero. If you want to break through now, Facebook is all but a pay-to-play network.” —The Complete Guide to Facebook Advertising

In the olden days, people would “like” your group/page and then see anything you posted there. Today, not so. Facebook has algorithms that “decide” what shows up in people’s feeds. Sponsored postings can crowd out a newly minted post on a page a friend has liked, causing them to miss content.

Yup. Someone that you catered to, who decided they liked your content, isn’t guaranteed to see everything posted on that page. Suffice it to say, many are up in arms over this. It backs up a rule I teach at any of my Blogging & Marketing sessions – build your own platform.

Yes. Build your own platform. Craft a site. Curate a mailing list. It will take a while, but in the long run, you will be running things your way. Sure, you will continue posting things to Facebook, Twitter, and whatnot.

But don’t presume that the rules of yesterday and today won’t be different tomorrow. Such are the woes of using other people’s platforms.

The “spice” of Pre-Edits

This blog post is coming to you late because I’m neck deep in pre-edits. Since I signed a contract for Darklight, I have had a checklist of things to get done for my publisher, Clean Reads. The most exhausting: pre-edits.

Pre-edits are an opportunity to clear away the proverbial brush. There are LOTS of things we authors do when banging out a first draft.

  • Start three sentences in a row with the same word
  • Use adverbs ALL OVER THE PLACE
  • And use that insidious word “very” WAY too much. In fact, that word is so overused (without conveying any extra meaning) that my publisher makes no promises about what will happen if she sees it in a manuscript.
  • Point of View violations
  • Commas, commas, and commas

I can tell you right now: I LOATHE PRE-EDITS! I am slogging my way through the manuscript AGAIN. Trying to polish it up. (That’s what I’ve done multiple times over the past year, if not years).

Taking a calming breath, it’s important to understand that my publisher isn’t out to get me. Instead, she wants to get the stuff that can be easily sifted out of the way. That way, my editors can focus on deeper, more important stuff. Like…

  • Does the story evolve in a way that holds the reader’s interest?
  • Are there too many points of view in the story? Not enough?
  • Does the dialog balance the prose well?

Stuff like this helps take a “neat idea” and catapult it. Readers may not “know” all this writing craft, or how to name it. But trust me, readers can tell a good story from a great one. (And a badly written one as well).

So for the umpteenth time, I am walking through Darklight, scene by scene, trying to clean out obvious junk and give it a final buff before I ship it off. And considering it’s due in 48 hours, I even took this week off to focus!

As Baron Vladimir Harkonnen likes to remind us, he who controls the spice controls the universe. Well, we are the ones in charge of the spice of our novel, and having good writing without clunky junk in the way is the path toward a universe of excited readers.

Happy writing!

Why do we need experts?

In this day and age, the DIY (Do It Yourself) movement is strong. The internet has made it super simple to start reading about something, buy a few things from Amazon, and get going! On one hand, that’s really good. I’ve seen countless businesses launched in this fashion. Many show up on Shark Tank. But sometimes, we need experts. Sometimes we need people with a very focused sense of knowledge, and if we don’t hire them, we’ll end up either paying too much, or losing out on opportunity.

Real Estate Experts

I have a handful of rental properties. In fact, it’s the principal part of my retirement portfolio, given 401K funds don’t work. Every month, my tenants send in a check to my property manager. The property manager collects a 10% fee and sends the rest to me. And I then send in a payment to my lender, with extra rent used to pay down the debt faster.

See how that works? It’s not hard. But it’s something in which you need the right experts in the right places.

My property management company are experts at writing leases, enforcing leases, changing locks between tenants, maintaining the property, doing background checks, and maintaining the quality of that whole process.

But their motivation is to keep the property rented, so they resist raising rents. I recently got notice of a tenant wanting to vacate, and my property manager showed me the proposed rent rate. Within five minutes, I ALSO got an email from my other agent.

Other agent? Who’s that? I have a real estate agent on retainer whose job is to find me new tenants when my properties are available. She gets paid a month of rent for this service, meaning she is motivated to keep the rates up.

Her email tipped me off that the market rent rate for these properties had risen about 12% above what my property manager was proposing. This is called “opportunity cost”, something only the right experts in the right places can alert you to.

Other Experts

But this isn’t confined to real estate. There are many areas in life where we should listen to experts. I have seen people slap together “RESTful” interfaces in many places that had nothing to do with REST. For example, I tinkered with the API for my house thermostat, only to discover it was NOTHING like REST.

curl -s -H 'Content-Type: text/json' -H 'Authorization: Bearer ACCESS_TOKEN' 'https://api.ecobee.com/1/thermostat?format=json&body=\{"selection":\{"selectionType":"registered","selectionMatch":"","includeRuntime":true\}\}'

This thing barely scratches the surface of an API. There is no hypermedia. The URIs aren’t very “pretty” (which isn’t REALLY a REST thing). It’s more of a query language than anything, which as I’ve said before, isn’t really REST.

My impression, after struggling with their API, is that they are experts at building thermostats, not APIs.

Spring Experts

So if you’re seeking to work with a certain technology, a certain business, or other, it pays to go and learn what the most experienced people are doing. It’s part of why I love going to SpringOne Platform every year. I meet people that are ALSO using Spring in incredible detail against some of the largest systems.

It’s an opportunity to see how they have conquered many problems, and an opportunity to share things they may not know, making us all better Spring experts.

I signed a contract!

Something I’ve been working on for seven years now has taken a big turn. I signed a contract with Clean Reads to publish my novel, DARKLIGHT!

I’m pretty stoked about this. Of course it will be lots of work. Not even sure if it’s coming out this year or next. Either way, it’s going to be fun.

In the meantime, I invite you to go grab my short story prequel FOR FREE.

The Power of REST – Part 3

Last week in The Power of REST – Part 2, I talked about how, by investing effort in backward compatibility and flexible settings, it’s possible to avoid a lot of “API versioning”, something Roy Fielding has decried. In this article, we’ll look more closely at the depth and power of hypermedia.

How does the web do it?

Let’s remember that REST is about bringing the scalable power of the web to APIs. One of those classic features is “Hey, go to XYZ.com, click on ‘Books’, then click on ‘Fiction’, then click on ‘The Job’ and you can get a free copy.”

Recognize this pattern? No one can remember long URLs, but we all know about following a chain of menu items based on the shorthand text.

Hypermedia is providing links all along the way. We sometimes refer to this as a “discoverable” API. I once likened it to the classic ZORK text adventure, where you have a confined set of moves, yet can explore a very complex world.

When clients hard code URLs, updates become brittle. Everyone knows this, otherwise there wouldn’t be things like a CORBA Naming Service subspec. So when I see tools that not only support but advocate full focus on URIs, I cringe. This isn’t what REST is about.

It really is aimed at following a chain of links. Because that grants us an incredible power to migrate services.

Migrating a service

Imagine we had started things out with a service used to manage supervisors. It was basic, perhaps a tad crude. Maybe we were wet behind the ears and didn’t totally grok REST. Not a lot of hypermedia. But eventually, we migrated to a newer “manager” service.

The good news with hypermedia is that we can continue to serve data at the old URI:
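Picture a fragment like this (illustrative data, not pulled from a real system):

    {
      "name" : "Frodo Baggins",
      "_links" : {
        "self" : { "href" : "/supervisors/1" },
        "manager" : { "href" : "/managers/1" }
      }
    }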

If this fragment was served under the old URI, /supervisors/1, old clients could access the data found there. But with a push toward hypermedia consumption, they could navigate onto the newer version of things.

This legacy representation can be put together by the newer “manager” service to support the old clients.
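A sketch of such a route (the Manager and ManagerRepository types are assumed to come from the newer service; all names are illustrative):

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class SupervisorController {

        private final ManagerRepository repository; // assumed Spring Data repository

        public SupervisorController(ManagerRepository repository) {
            this.repository = repository;
        }

        // Keep answering the OLD URI, backed by the NEW data.
        @GetMapping("/supervisors/{id}")
        public ResponseEntity<Supervisor> findOne(@PathVariable Long id) {
            return repository.findById(id)
                .map(Supervisor::new) // repackage a Manager as a legacy Supervisor
                .map(ResponseEntity::ok)
                .orElse(ResponseEntity.notFound().build());
        }
    }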

This route, served to the old URI by the “manager” service, can actually retrieve information in the newer format, and repackage it as a legacy “Supervisor” record. But by supplying up-to-date links, we provide an on-ramp for clients to migrate.

Assuming we had shut down the old “supervisor” service, the following DTO could nicely sustain the old clients until they were ready to move on up.
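Something along these lines (a sketch; the getters are what Jackson serializes):

    public class Supervisor {

        private final Long id;
        private final String name;

        // Project the newer Manager onto the old Supervisor fields.
        public Supervisor(Manager manager) {
            this.id = manager.getId();
            this.name = manager.getName();
        }

        public Long getId() { return id; }
        public String getName() { return name; }
    }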

This immutable value type can take the new-and-improved “manager” object and extract the older representation smoothly.

Don’t argue the wrong end of things

I’ve seen a particular argument pop up in favor of REST many times. It brags about how links can change without impacting clients. Frankly, that argument comes across as lame and weak. That’s because it’s NEVER presented under the guise of backward compatibility. It’s kind of posed in a vacuum.

Building REST resources with full focus on backward compatibility, supporting old URIs and old formats, is a much stronger message in my book.

SOAP and CORBA weren’t designed to let you slip in an extra field here and there. At least, not without massive effort. It’s why no one ever did. Can you imagine going into an IDL/WSDL file and attempting to “slip in” an extra field to ease migration? It’s almost incomprehensible.

But with Jackson and robust principles applied, it’s easy to update a given route with additional data that can easily support two different clients. One REST resource supporting two clients: THAT is a strong argument. Not “links can change”.

Don’t preach it, use it

It’s why I’m giving you a sneak peek at something I’ve been wanting to write for about three months. I find myself attempting to explain the same concepts about REST on Twitter and at meetups. In Berlin, I ran into a dear friend who pinned me down and asked me hard questions on REST for over an hour. Bring it on! That affirmed my gut decision to create this:

Spring HATEOAS Examples

This repository contains several REST-based scenarios and how to implement them with Spring HATEOAS. It’s not complete. It hasn’t even been “publicly” announced, but it gives me a concentrated place to show how REST works compared to RPC-over-HTTP.

Having watched Oliver defend REST numerous times, I too am picking up the banner and striving to help spread the word that if we can adopt HATEOAS and link-driven flows, we can build more sustainable systems.

For a little something to chew on, I’ll close this article with the following tweet.

https://twitter.com/olivergierke/status/458578827342794752

I strongly encourage you to click on it, and read the follow-on conversation as people blink in shock: “What’s wrong with Swagger?”

Python Testing Cookbook 2nd Edition is coming!

I had totally forgotten about this, but back in February, Packt Publishing approached me about writing a 2nd Edition to Python Testing Cookbook. This is a title I wrote back in 2011, and from which I still get royalty checks! It’s not huge. In fact, it’s more symbolic than anything. I get a real warm fuzzy knowing people are still using this book today to test their Python apps.

I declined the offer back in February, because I felt neither qualified to write it, nor motivated since I’ve shifted to writing Spring-based books. An email showed up two days ago that had been stuck in an acquisition editor’s drafts for months. They have found another author (congrats, Packt!), but since he or she can potentially use my material, they are dealing me in for a slice of the royalty pie for this new work.

Yippee!

I wish this author the best of luck. And I appreciate that I’m being rewarded for past effort by being a part of this new one as well. It just goes to show that writing a book can open doors you never saw coming.

Happy writing!

The Power of REST – Part 2

Last week, in The Power of REST – Part 1, I challenged someone’s proposal that their client-side query language could supplant the power of REST. It seemed to attack strawman arguments about REST. In this article, I wanted to delve a little more into what REST does and why it does it.

Basis of REST

REST was created to take the concepts that made the web successful into API development. The success of the web, which some people don’t realize, can be summarized like this: “if the web page is updated, does the browser need an update?”

No.

When we use RPC-oriented toolkits with high specificity, one change can trigger a forced update to all parties. The clinical term for this is “brittle”. And people hate brittleness. When updates are being made, resources are no longer available.

Let’s take a quick peek at the banking industry. Despite what you might think, the banking industry isn’t built up on transactions and ACID (Atomicity/Consistency/Isolation/Durability). Nope. It’s built on BASE (Basic Availability/Soft-state/Eventual consistency). An ATM can be disconnected from the home office, yet it will dispense cash. You can go over your limit, and still get money. But when things are made consistent, it’s you that will be paying the cost of overdrawing, not the bank.

When it comes to e-commerce, downtimes of hours/minutes/seconds can translate into millions or billions of lost dollars. (Hello, Amazon!)

Updating ALL the clients because you want to split your API’s “name” field into “firstName” and “lastName” will be nixed by management until you A) show that it doesn’t impact business or B) prove the downtime to upgrade won’t cause any loss of revenue.

Evolving an API

https://twitter.com/olivergierke/status/867819089879846914

To evolve an API, we need to reduce breakages. We need to be able to make changes to the API that WILL NOT IMPACT existing clients. Changes that allow existing clients to keep right on humming as if nothing has happened.

Eventually, they can catch up and take advantage of the new features. But only when they’re good and ready. And SOAP and CORBA were not built for this. Their definition languages (WSDL and IDL) don’t know how to be “flexible”.

But REST can. How? Let’s start with that example mentioned earlier, a resource that serves up a name. Perhaps a small piece of some e-commerce platform. You design this:
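Perhaps something like this (a representative sketch, not the post’s exact code):

    public class Customer {

        private String name;

        public String getName() { return name; }

        public void setName(String name) { this.name = name; }
    }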

When an instance of this domain object is turned into a hypermedia-based JSON structure (through a controller we can imagine), it looks like this:
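(With illustrative data, something like:)

    {
      "name" : "Frodo Baggins",
      "_links" : {
        "self" : { "href" : "/customers/1" }
      }
    }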

This nice bit of JSON flies around the system. You build powerful clients leveraging its vast data. Customer growth is exponential.

But suddenly, we’re victims of our own success. Your initial take on a customer representation was kind of scratched together. And now your manager darkens your door, saying, “We need first and last name. Can you do that?”

You nod and get cracking. Just need to replace name with firstName and lastName, and update all the clients. Except what you just said will incur downtime. No, you need something a little smoother. Something that can roll out and not impact the existing clients. Instead of “versioning” your API, why not ADD TO your existing resource?
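Here’s a sketch of the evolved domain object (the class name is illustrative, but the trick is exactly what’s described next):

    import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

    @JsonIgnoreProperties(ignoreUnknown = true)
    public class Customer {

        private String firstName;
        private String lastName;

        public String getFirstName() { return firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }

        public String getLastName() { return lastName; }
        public void setLastName(String lastName) { this.lastName = lastName; }

        // Generate the legacy "name" field from the new data...
        public String getName() {
            return firstName + " " + lastName;
        }

        // ...and parse an incoming legacy "name" into the new fields.
        public void setName(String wholeName) {
            String[] parts = wholeName.split(" ", 2);
            this.firstName = parts[0];
            this.lastName = parts.length > 1 ? parts[1] : "";
        }
    }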

Ta-dah! Your updated domain object moves from having a single name to a first and last name, as requested. But that is not all. It also can generate that old name field, using the new data. And it can parse an incoming “name” field, turning it into your new ones.

What does this look like in JSON?
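(Again with illustrative data:)

    {
      "firstName" : "Frodo",
      "lastName" : "Baggins",
      "name" : "Frodo Baggins",
      "_links" : {
        "self" : { "href" : "/customers/1" }
      }
    }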

You are now sporting both the new fields AND the original one. Old clients, if they follow the conventions of REST, will simply ignore the new stuff and use the old stuff. If they need to POST an update, they know where to send it and can send just the fields they know. Your code can handle this!

Of course, you’ll have to migrate your data store before rolling this out. But again, you can maintain availability with just a LITTLE extra effort.

Honor the Robustness Principle

Be conservative in what you do, be liberal in what you accept from others.

This statement is known as the Robustness Principle. It’s what we accomplish using @JsonIgnoreProperties(ignoreUnknown = true). That annotation tells Jackson to NOT blow up when we receive an unrecognized property. Old servers can accept the new format, until they have a chance to catch up. And old clients can also talk to new servers thanks to our customized setName(wholeName) method.

By carrying around this extra sliver of information (two copies of a person’s name instead of one), we can save millions of dollars when used at scale.

And this is a core piece of REST. By avoiding antiquated concepts like versioned APIs (a requirement for CORBA and SOAP), and instead making our REST resources backwards compatible, we not only reduce downtime, we can make maintenance easier on ourselves.

If you enjoyed this, stay tuned for next week’s post, The Power of REST – Part 3, where we will dig into hypermedia.

Hidden Figures – A Really Cool Movie

Last night, I watched Hidden Figures with my wife and a friend. The story had me pinned to my seat the entire time.

This is set back in the days of the Mercury space program. Back then, before the days of digital computers, there were human computers to tally up columns of figures. And in those days, such brutal work was menial and deemed secretarial. To put it in a sentence, engineers and scientists decided what formulas to use, and this pool of women was tasked with carrying them out.

In the movie, one of these human computers, Katherine Goble, an African-American woman who finished high school at 14 and graduated college in mathematics, gets picked to crunch the numbers for the team calculating launch and landing windows for astronauts.

What quickly escalates is the fact that not only can she do the math, she can spot what formulas to use, and find the solution. She can do the job of both a human computer AND the engineer that would be giving out the task assignments.

That’s not all that escalates. Back in the 1960s, circumstances for minorities were horrendous. And this movie doesn’t gloss over the challenges she and her friends face during the height of the Civil Rights Movement while engaged in the great Space Race.

Bottom line: it’s a great movie that weaves an entertaining story that really happened. The drama is top notch. And it’s not preachy. Instead, it makes you appreciate all that was accomplished while rooting for the home team.