Learning Spring Boot 2nd Edition delayed – Find out how to get a FREE advanced copy

Having taken a week off after turning in the last chapter, I geared up and started reading the whole thing, top-to-bottom. Some bits were written before Spring Boot 2.0 was even available on GitHub, so there is a lot of code that needs updating.

That’s when I got word from my publisher that, due to Boot’s schedule, they are delaying the release. They didn’t say when; we haven’t hammered that out. But this makes me feel good. It encourages me to really polish things up and ensure I can get the right message out there.

In the meantime, I’m putting together my Launch Team. If you’d like to:

  • get an advanced reader e-copy of my book
  • be a part of a secret Facebook page and get sneak peeks
  • learn more about my writing journey as it happens

Then the Turnquist Techies is for you!! I will send out FREE advanced reader e-copies to the members of my launch team. There will be a secret Facebook page and a private newsletter just for this team.

What do I ask?

I am encouraging the members of the team to participate in this journey with me by sharing their honest opinion in reviews, tweeting and posting about what they are reading, and the like. For the most part, being a part of the Turnquist Techies is about having fun, reading, connecting, and learning more about the technical writing process.

How do I sign up?

Just click on join The Turnquist Techies and fill out this form.

What do you do when you’re traveling to Germany in two weeks?

At Pivotal I work on the Spring Data team, and our fearless leader is having us all converge in Berlin in just two weeks.

Short of being über awesome, what do you do? Well, considering I’ve studied German off and on since high school, I thought it time to get back into my tools to freshen up my speaking.

For starters, I need to use this cue card to remember when to say “bitte”! Heh, that’s a joke. You throw it out about every 3-4 sentences, because it means EVERYTHING.

Anki

But seriously, my favorite app is Anki. To call it a “flashcard app” is a gross understatement. This app uses “spaced repetition”, a concept going back at least to the 1930s from Professor C. A. Mace.

Spaced repetition takes a deck of cards, and as you answer each one, it asks whether you did or didn’t know it. Cards you knew are delayed before they show up again. Cards you didn’t are reviewed sooner.
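In code terms, the scheduling idea is tiny. Here is a toy sketch (nothing like Anki’s real algorithm, just the gist): cards you knew get pushed further out, cards you missed come back soon.

function review(card, knewIt) {
	// Double the interval for cards you knew; reset missed cards to one day.
	card.intervalDays = knewIt ? Math.max(1, card.intervalDays * 2) : 1;
	card.dueDate = Date.now() + card.intervalDays * 24 * 60 * 60 * 1000;
	return card;
}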

There are all kinds of studies backing up the efficacy of such study techniques. In fact, you can spend hundreds of dollars on language learning courses based on this. Or you can do what I did (cue dramatic music leading into the next section).

Duolingo

I learned of DuoLingo.com back in 2013, when one of my teammates started tweeting about learning Spanish. Digging in, I found a really cool website/iPhone app that had free, 5-minute lessons on many languages including German.

So I dug in!

It was nothing short of awesome. Each lesson is concentrated. I enjoyed how many verbs I could still conjugate even after being away from German for twenty years.

Then a major breakthrough – after learning of Anki, I discovered someone had built a deck out of DuoLingo and shared it with the community.

Slack + German

If that wasn’t enough, just this week someone launched a German JVM Slack group. Duh! Who wouldn’t sign up for that?

To wrap things up, I find myself roaming around doing chores or work, mumbling little phrases in German. I have written my high school German teacher. And I’ve heckled my teammates.

It’s going to be awesome!


How to write a tech book, or how I stopped worrying and learned to love writing

I just sent in the last chapter of Learning Spring Boot 2nd Edition’s 1st draft. And my brain has collapsed. I’ve been working for several months on this 10-chapter book that embraces Spring Boot 2.0 and reactive programming. There are several books out there on reactive programming, but I believe this will be the first to hit the market about Project Reactor.

I’m not done, not by a long shot. I told my publisher that we’d need at least one big round to make updates to ALL the code, because I started writing when not everything was in place. And it’s still true. But editing, polishing, and updating an existing repository of code and manuscript is easier than creating one out of thin air.

I wanted to write just a little bit about how I approach writing something like this. Maybe you have been thinking about writing a book yourself, and you’re curious what goes on. This isn’t the only way, but it’s the way that works for me.

Tools


To write a book, you need a mechanism to capture prose and code. For fiction, I use Scrivener, but when it comes to technical writing, where the code, screenshots, and text are tightly integrated, I use Asciidoctor. With Asciidoctor, the overhead of a word processor is removed, and instead I can focus on pure content.

Also, using Asciidoctor lets me pull in the code to generate the manuscript sent in to my publisher. This way, I have Sublime Text in one window viewing the source prose and IntelliJ open in another viewing the source code. To top it off, I have a Ruby Guardfile configured to constantly regenerate an HTML proof of the chapter I’m writing, refreshing via LiveReload in my browser.

This combination gives me a quick feedback loop as I write.

What to write

This may be the biggest hurdle for some. When you’ve picked the technology, set up your tools, and finally have the editor opened up, what do you type into that blank, black screen?

Waiting for magical words to arrive? Or perhaps you hope elves will scurry in and leave something? Nope. This is where the rubber hits the proverbial road and you have to push yourself to start typing.

What do I do? I actually start earlier than that. From time to time, I have a crazy idea about something I want to show to an audience at a conference. Some demo I want to give with a few pieces of the Spring portfolio. I begin to noodle out code to make that happen. Once, I asked “can I snap a picture of the audience and upload it from my phone to a webpage the audience is watching on the overhead?” Thus was born my Spring-a-Gram demo.

That demo has morphed many times to the point that I have built a full blown, cloud native, microservice-based system. And guess what. It’s the system we get to explore in Learning Spring Boot 2nd Edition.

So when I sit down to write a chapter, I first start writing the code I want to walk through. Once it’s humming inside my IDE, I start to typeset it in Asciidoctor. And from pages of code fragments, I begin to tell a story.

Weaving a story

When writing technical articles, getting started guides, and books, everything is still a story. Even if this isn’t a novel, it’s still a story. People that grant you the honor of reading your work want to be entertained. When it comes to tech work, they want the ooh’s and ahh’s. They want to walk away saying, “That was cool. I could use that right now.”

At least, that’s how I read things. So my goal when I write is to make it fun for me. If it’s fun for me, I trust it will be fun for others.

If I sift through a chapter, and it’s just a boring dump of code, then it’s sad. And that’s not what I want. I can’t promise that all my writing has upheld this lofty goal. But it’s my goal nonetheless.

So oftentimes, I will typeset the code, hang some descriptive details around it, then read it again, top to bottom, and add extra stuff. Paragraphs talking about why we’re doing this. Mentions of tradeoffs. Issues that may exist today and where we have to make a choice. Ultimately, the reader should understand not just what the code does but why it does it this way, and what other options are out there.

Letting go

At some point, after all the writing and polishing and fine tuning, you have to turn in your work. I don’t know if there is ever a time where I’m 100% satisfied. I’m kind of picky. But the truth is – you’ll never find every typo, every bug.

My code fidelity is much higher ever since I started using Asciidoctor. But stuff happens. And you have to be happy turning in your work.

You see, if you’ve acquired enough skill to sit down and write a book without someone leaning over your shoulder and coaching you, you probably have a lot of the value other developers seek. Eager coders will be able to read what you wrote, look past small mistakes, and most importantly, grok the points you make. That’s what is key.

And one thing is for certain – writing makes you better. I have found that any gaps in my own understanding of certain parts of code lead me to chase down and grasp those bits. And then I want to share them with others. Which is what writing books is all about.

Happy writing!

Layering in new behavior with React

I’ve talked in the past about how much I like the approach React leads me to when it comes to building apps. How does such grandiose talk play out when it’s time to add a new, unexpected feature? Let’s check it out. I’ve been building an installation app for Spinnaker, and one of our top notch developer advocates gave it a spin.

Results? Not good. Too many presumptions were built into the UI, meaning he had no clue where to go. Message to me? Fix the flow so it’s obvious what must be done and what’s optional.

So I started coding in a feature to flag certain fields REQUIRED and not allow the user to reach the installation screen without filling them out. Sounds easy enough in concept. But how do you do that?

With React, what we’re describing is an enhancement to the state model. Essentially, keep filling out fields, but earmark certain fields as required, and adjust the layout of things to show that, while barring other aspects of the interface in the event those same fields aren’t populated.

So I started with a little bit of code to gather a list of these so-called required fields, and it looked like this:
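(A sketch; the api field names and the way the list is derived are illustrative.)

constructor(props) {
	super(props);

	// Build the initial state model first...
	this.state = {
		api: {targetUrl: '', username: '', password: ''},
		installing: false
	};

	// ...then gather the list of required fields from it. Note the direct
	// assignment rather than this.setState(...) -- more on that below.
	this.state.required = Object.keys(this.state.api);
}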

If you have done any React programming, assigning something via this.state[…] = foo should set off bells in your head. You always, always, ALWAYS use this.setState(…). So what’s up?

Rules are rules until they aren’t. This is a situation that defies the convention. I don’t WANT to set the state in a way that triggers a ripple through the DOM. Instead, this code happens right after the initial state model is initialized. And I’m setting it using values populated in the previous line, because you can’t initialize required, pointing at this.state.api, in the same call that initializes this.state.api itself!

With this list of required fields set up, we can start marking up the fields on the UI to alert the user. I have a handful of React components that encapsulate different HTML inputs. One of them is dedicated to plain old text inputs. Using the newly minted required list, I can adjust the rendering like this:
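(A sketch, with the surrounding markup simplified.)

render() {
	return (
		<label>
			{this.props.settings.required.includes(this.props.name) ?
				<span className="required">{this.props.label} (required)</span> :
				<span>{this.props.label}</span>}
			<input type="text" name={this.props.name} onChange={this.props.handleChange} />
		</label>
	);
}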

Notice the little clause where it checks this.props.settings.required.includes(this.props.name)? That is a JavaScript ternary operation that, if true, returns the label with extra, highlighted text. Otherwise, it just renders the same label as always.

By applying this same tactic to the other React components I have for rendering each selection on the UI, I don’t have to go to each component and slap on some new property. Instead, the designation for what’s required and what’s not is kept up top in the state model, making it easier to maintain and reason about.

At the top of the screen is a tab the user clicks on to actually install things and track their progress. To ensure no one clicks on that until all required fields are populated, I updated that tab like this:
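(A sketch; handleInstallTab stands in for the real click handler.)

{this.requiredFieldsFilledOut() ?
	<li onClick={this.handleInstallTab}>Installation</li> :
	<li>Installation</li>}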

A little function that detects whether or not all required fields have been filled out is checked, and if so, renders the HTML LI with its onClick property filled out with the handler. If NOT, then it renders the same component, but NO SUCH onClick property is present, meaning it just won’t respond.

This is the nature of React. Instead of dynamically adjusting the DOM model, you declare variant layouts based on the state of the model. This keeps pushing you to put all such changes up into the model, and writing ancillary functions to check the state. In this case, let’s peek at requiredFieldsFilledOut:
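(A sketch, assuming the required list of field names built earlier.)

requiredFieldsFilledOut() {
	// Count the required fields that currently hold a truthy value in the
	// state model and compare against the total number of required fields.
	return this.state.required
		.filter(field => this.state.api[field])
		.length === this.state.required.length;
}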

This tiny function checks this.state.required, counts how many are “truthy”, and if the count matches the size of this.state.required itself, we’re good to go.

In case you didn’t know it, React encourages you to keep moving state-based functions up, closer to the state model itself. It’s super simple to pass along a handle to the function to lower level components, so they can still be invoked at lower levels. But anytime one component is trying to invoke another one in a separate part of the hierarchy, that’s a React smell hinting that the functions should be higher up, where they can meet. And then the results can trickle down to lower level components.

I encountered such a function stuffed down in a lower level component. The need to trigger it sooner, based on a change in the state, had me move it up. Now, when the Installation tab is clicked, that function is run, instead of waiting for the user to click some button down below.

Suffice it to say, I rehabbed the UI quite nicely. The thing is, with React it’s not hard to effect such a change in a couple of days, compared to the days or weeks of effort plus testing it might have taken with classic manipulate-the-DOM apps, where you have to hunt down gobs of wired event handlers and find every little nuanced operation you coded to make things operate correctly.

HTTP + REST + OAuth = say what???

In the past couple of weeks, things have really gotten hopping on the Nashville Java community’s #java Slack channel. A recent topic of interest is: how do we take something like REST, which talks about clean URIs and stateless services, and stir in this crazy OAuth stuff?

I threw in my own $0.02 given I’ve done a bit of work on projects like Spring HATEOAS and Spring Data REST. And my $0.02 is this:

HTTP + REST + OAuth should work hunky dory without compromising any of these principles.

What does this even mean?

HTTP is a spec that goes back twenty years. Or should I say, HTTP is an amalgamation of specs that goes back that long. HTTP includes the request/response protocol, the verbs (GET, POST, etc.), the concept of media types, content negotiation, and more. At the heart of MANY of these specs is none other than Roy Fielding. Yes, the man that wrote his doctoral dissertation on REST had his fingers in a dozen specs that govern how the Internet operates.

REST is a doctoral dissertation that was published in 2000. REST is not a spec, not an API, not a standard. It’s an idea. An idea that if constraints similar to those that shaped the web, plus a few others, were adopted when building Internet-based services, then those services could enjoy the same scalability and fault tolerance the web does.

OAuth is an 11-year-old security protocol driven by the explosion of 3rd party social media applications that required users to enter their credentials so the apps could log into social media networks on their behalf. It introduces a flow whereby the app, instead of gathering credentials, takes the user over to the source site, has them log in there, gets a token issued, and thereafter uses the token in lieu of actual credentials.

All of these concepts reign supreme in applications today. Enterprise shops want big apps to support millions of customers, and they want them yesterday. Hence the desire for scale.

We’re all aware of how the web won the war of the UI. I remember coding Swing apps in data centers and sidestepping the growth of the web, but the web has beaten thick client apps hands down. The only REAL thick client apps left are the apps found on mobile devices. The standards of web apps are front and center.

EVERYONE wants OAuth. I’ve often stated “it isn’t real until it’s secured”, meaning that frameworks without a security solution will get passed over by production shops. That’s because no customer will talk to you unless your solution can be secured. And with the rise of mobile apps and multiple clients talking to a backend, the need for OAuth is gigantic.

So how does this mishmash of ideas all fit together without tragic compromise? Let’s take things apart.

Fundamentals of REST

One of the most fundamental concepts behind REST is to include not just data in the payload, but links to resources from which you can DO SOMETHING with the payload. Or links to a state you can transition toward. For example, a Coffee Shop API that lets you place orders should include a link to cancel the order WHEN IT’S POSSIBLE. By pushing the link to the human user, they can see when they can/can’t do that. Hence the rise of media types that support including links. For example, the JSON document below shows HAL, with both data and links.

{
  "firstName" : "Frodo",
  "lastName" : "Baggins",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/persons/1"
    },
    "address" : {
      "href" : "http://localhost:8080/persons/1/address"
    }
  }
}

That’s nice, but what do we do when this data needs to be wrapped with OAuth?

OAuth, as briefly stated, is a flow where the website redirects the client to a login page, potentially at a different site altogether. A token is issued, and then we go back to the original site. Now we have a token in hand.

This token nicely fits into an HTTP Authorization header like this:

Authorization: Bearer 0b79bab50daca910b000d4f1a2b675d604257e42

If you’ll notice, I said header. HTTP headers don’t infringe upon payloads designed to support REST. HAL doesn’t have to know that the service is protected by OAuth + whatever other security protocols are layered on. Security, as they say, is orthogonal.

That’s because HTTP includes options to extend itself. HTTP has room for new media types (application/hal+json and application/hal+xml), new headers (Authorization), and new payloads.

And why shouldn’t we expect this? The man involved in designing the web also designed REST. Someone trying to take lessons learned from the web and apply them to APIs would surely work to fit them into the nature of the web.

To approach a secured service, I make the call, and if I get a 302 redirect to a login page, my client can come back to the human user and indicate that there is security required. At this stage, it would be completely normal to SHOW the human the login page. Let the user supply what’s needed, and my client should be able to harvest the token, and press on.
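Sketched out in browser JavaScript, that client-side dance might look something like this (redirectUserToLoginAndGetToken is a hypothetical helper standing in for whatever login hand-off your app uses):

async function getOrder(url, token) {
	const headers = token ? {'Authorization': 'Bearer ' + token} : {};
	const response = await fetch(url, {headers: headers, redirect: 'manual'});

	if (response.status === 401 || response.type === 'opaqueredirect') {
		// Security required: show the human the login page, harvest the
		// issued token, and press on with the original request.
		const issuedToken = await redirectUserToLoginAndGetToken();
		return getOrder(url, issuedToken);
	}
	return response.json();
}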

Should security be a part of the REST representation?

Yes and no. Security protocols don’t need to be spelled out in the representation; that would clutter up the data and the navigation. And clients shouldn’t be giving away any secrets, whether that is a bearer token or credentials.

But hypermedia links should be VERY sensitive to the nature of the client and serve different sets of links based on what the client is allowed to do. If the client will be declined attempting to perform a DELETE, then don’t show them the link to do it.

REST and stateless services

People like to regurgitate this REST-ism about “no state no state no state”. Yet when we get into the subject of security and the STATE of the user’s logged-in nature, some people go nuts. Being stateless means that my next REST request should be able to get routed to any server on the backend. In a cloud native environment, there could be twenty copies right now, fifty copies five minutes from now, and two copies ten minutes after that.

The state of my next REST operation shouldn’t require that I go back to the same server. If the server is feverishly trying to push session details around between gobs of servers, that is crazy. That’s why toolkits like Spring Session offer the option to use either cookies or headers. It’s possible to log into a system, get some form of session, offload the session state to a 3rd party service, like Redis, and then bounce off of that using a header.

As Rob Winch states in the video below, a little bit of state is alright. When done like this, where your security state is managed properly in a data store and NOT on the server itself, you’ll do fine.

So to wrap things up: serving up things over the web, securing them with modern practices, while also having scalable APIs, should all fit together nicely. The web was built for flexibility and extensibility, if we just embrace all of its paradigms.

Learning Spring Boot 2nd Edition 80% complete w/ Reactive Web

This weekend I sent in the first draft for Chapter 2 – Reactive Web with Spring Boot. Even though this is Chapter 2, turning it in puts the book at 80% complete. That’s because I’m writing Chapters 2, 3, and 4 last, due to the amount they depend on Reactive Spring.

This may sound rather awkward given Spring Boot has yet to release any tags for 2.0. But take note: there is a lot of action in Spring Framework 5.0.0, which has already had several milestones. A big piece of this book is getting a hold of those reactive bits and leveraging them to build scalable apps. The other part is how Spring Boot will autoconfigure such stuff.

Thanks to Spring guru Brian Clozel, there is an experimental project that autoconfigures Spring Boot for Reactive Spring, and will eventually get folded into the Spring Framework. Bottom line: Reactive Spring is available for coding today, albeit not with every feature needed. But since the target release date is May, there will be time for spit and polish against the book’s code base.

And now, an excerpt from Chapter 2, for your reading pleasure:


Learning the tenets of reactive programming

To launch things, we are going to take advantage of one of Spring Boot’s hottest new features: Spring 5’s reactive support. The entire Spring portfolio is embracing the paradigm of reactive applications, and we’ll focus on what this means and how we can cash in without breaking the bank.

Before we can do that, the question arises: what is a reactive application?

In simplest terms, reactive applications embrace the concept of non-blocking, asynchronous operations. Asynchronous means that the answer is coming later, whether by polling or by an event pushed back to us. Non-blocking means not waiting for a response, implying we may have to poll for the results. Either way, while the result is being formed, we aren’t holding up the thread, allowing it to service other calls.

The side effect of these two characteristics is that applications are able to accomplish more with existing resources.

There are several flavors of reactive applications going back to the 1970s, but the current one gaining resonance is reactive streams, due to its introduction of backpressure.

Backpressure is another way of saying volume control. The consumer controls how much data is sent by using a pull-based mechanism instead of a traditional push-based solution. For example, imagine requesting a collection of images from the system. You could receive one or a hundred thousand. To prevent the risk of running out of memory in the latter case, people often code page-based solutions. This ripples across the code base, causing a change in the API. And it introduces another layer of handling.

For example, instead of having a solution return a risky collection like this:

public interface MyRepository {
	List findAll();
}

We would instead switch to something like this:

public interface MyRepository {
	Page findAll(Pageable p);
}

The first solution is simple. We know how to iterate over it. The second solution is also iterable (Spring Data Commons’s Page type implements Java’s Iterable interface), but requires passing in a parameter to our API specifying how big a page is and which page we want. While not hard, it introduces a fundamental change in our API.

Reactive streams is much simpler – return a container that lets the client choose how many items to take. Whether there is one or thousands, the client can use the exact same mechanism and take however many it’s ready for.

public interface MyRepository {
	Flux findAll();
}

A Flux, which we’ll explore in greater detail in the next section, is very similar to a Java 8 Stream. We can take just as many items as we want, and it lazily waits until we subscribe to it before yielding anything. There is no need to put together a PageRequest, making it seamless to chain together controllers, services, and even remote calls.


Hopefully this has whetted your appetite to code against Reactive Spring.

Happy coding!


Do you have real life struggles in SW development? Lessons learned in Ops? Come share them @NashvilleJUG

Ever battled a NoSQL data store for six hours straight? Installed an upgrade that destroyed a database? Spent two weeks learning a new library, language, or build tool, only to chuck it out the window? We’d love to hear about it at the Nashville Java Users Group.

We meet on the first Tuesday of every month in downtown Nashville. Beer and pizza are provided gratis.

We’re looking for people just like you, willing to share their most tragic or most exciting tale. Can you chat for 20 minutes? That’s all we ask.

Whether it’s a tale about beating your brains out over a Maven plugin or kicking a Redis server into oblivion, we’re excited to hear about it! Whether you scrapped some infernal JavaScript library, or just finished a new book that has changed your development perspective forever, let us know.

And you’ll find a bunch of others nodding along, saying “I know what you mean!” The Java community in Nashville is strong. We founded the group back in 2010, and it has been growing ever since. But we can’t operate without guests coming in and pouring out their hearts and experiences. We need you!

How to reach us:

The beauty of coding frontends with React

This industry can be quite brutal. Tools come and go. Programming styles invented fifty years ago suddenly become relevant. But I really enjoy when a certain toolkit nicely presents itself over and over as the way to go. I’m talking about React. Ever wonder what it is that has made coding frontends with React so dang popular? Let’s take a peek.

What’s so good about React?

React innovates frontend development by moving the focus off of cobbling together DOM elements. Instead, it shifts things toward laying out a declarative UI and driving everything by a consolidated state model. Update the state and the layout changes automatically.

In traditional JavaScript toolkits, you find yourself writing DOM-finagling code bits inside event handlers strewn throughout the code base. (jQuery, I’m looking at you!) Managing, organizing, and maintaining order in this code is a chore that’s easy to get wrong. It’s easy to NOT clean up properly and let your app leak.

Get on with the example already!

With React, you lay out a series of HTML elements inside the code (and using ES6 makes your eyes stop bleeding!) based on properties and state.
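For example, a tiny component driven purely by properties might look like this (the component and property names are made up for illustration):

class TargetSummary extends React.Component {
	render() {
		// No event handlers and no local state; everything rendered here
		// comes in through read-only props supplied by the parent.
		return (
			<ul>
				<li>Org: {this.props.org}</li>
				<li>Space: {this.props.space}</li>
			</ul>
		);
	}
}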

FYI: Properties are read-only attributes; state attributes are updateable. In this component, there are NO event handlers. Everything shown is passed through the constructor call and accessed via this.props.

Some people balk at how React mixes HTML with JavaScript in the same file. Frankly, I find keeping things small and cohesive like this to be the right level of mixture.

It’s possible to have optional components, and they can be based on the centralized state model. Flip a toggle or trigger off some other thing (RESTful payload?) and see components appear/disappear. (NOTE: React smoothly updates the DOM for you.)

Check out the fragment below:
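(A sketch; Spinner and Dropdown stand in for the real components.)

{this.state.orgsAndSpacesLoading ?
	<Spinner /> :
	<span>
		<Dropdown label="Organization" items={this.state.orgs} />
		<Dropdown label="Space" items={this.state.spaces} />
	</span>}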

In this fragment, orgsAndSpacesLoading is used as a stateful flag to indicate some data is loading. Using JavaScript’s ternary boolean check, it’s easy to display a Spinner. When the code fetching the data completes, it merely needs to flip this flag to false, and React will redraw the UI to show the <span> with two dropdowns.

Piecing together event handlers and DOM elements by hand puts you in the mindset of updating the screen you’re looking at. You start to think about hunting down elements with selectors, changing attributes, and monkeying around with low level constructs.

When working with React, you update the state and imagine React redrawing everything for you. The UI is redrawn constantly to catch up to the new state. Everything is about the state, meaning it’s best to invest effort designing the right state model. This pulls your focus up a distinct level, letting you think more about the big picture.

The state must flow

Another neat habit you pick up is pushing bits of state down into lower level components as read-only properties. You also push down functions as invocable properties. You may start with functions in the lower level components, but many of them work their way back to manipulating the state. And often the state works best when pulled toward the top. Hence, functions tend to move up, making lower level components more easily driven by properties.
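Here is a sketch of one such lower level component (markup details simplified):

class Checkbox extends React.Component {
	render() {
		// Driven entirely by read-only props; flipping the box simply invokes
		// the handleChange function passed down from the owner of the state.
		return (
			<label>
				<input type="checkbox"
					name={this.props.name}
					onChange={this.props.handleChange} />
				{this.props.label}
			</label>
		);
	}
}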

This component is a reusable HTML checkbox with a label. You feed it the name of a state attribute and it allows flipping the state attribute on or off. Changes are invoked by the passed in property function, handleChange. This function is actually passed into a lot of various components in this application. You can see how this component is invoked below:
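(A sketch of the invocation, matching the Checkbox sketch above.)

<Checkbox label="OAuth?"
	name="settings.oauthEnabled"
	handleChange={this.handleChange}
	settings={this.state} />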

  • The label text is provided – “OAuth?”
  • The name is connected to a property known as settings.oauthEnabled.
  • The function to respond to clicks is this.handleChange.
  • The raw state is passed down as a bit of a free for all.

The point is, nice little components are easy to put together. Bits of state and needed functions are easy to hand off. And we don’t waste time frivolously building the DOM or thinking about triggering an update in one part of the UI from some other remote corner of the UI.

We simply update the relevant bits of state and let the system redraw itself as needed. Once you get warmed up to this style of building frontends, it’s hard to put it down.

Happy coding!

Check out my @SpringData and @SpinnakerIO talks from SpringOne Platform @S1P

Recently, my latest conference presentations have been released. You are free to check them out:

In the Introduction to Spring Data talk, I live code a project from scratch, using start.spring.io, Spring Data, and other handy Spring tools.

In the Spinnaker: Land of a 1000 Builds talk, I present the CI/CD (continuous integration/continuous delivery) multi-cloud tool Spinnaker:

Enjoy!

Tuning Reactor Flows

I previously wrote a post about reactively talking to Cloud Foundry with Groovy. In this post, I want to discuss something of keen interest: tuning Reactor flows.

When you use Project Reactor to build an application, is the style a bit new? Are you just trying to keep your head above water? Perhaps you haven’t even thought about performance. Well, at some point you will. Because something big will happen. Like a 20,000 req/hour rate limit getting dropped on your head.

Yup. My development system mysteriously stopped working two weeks ago. I spotted some message about “rate limit exceeded” and rang up one of my friends in the Ops department to discover my app was making 43,000 req/hour. Yikes!

As I pored over the code (big thanks to the Ops team for giving me a spreadsheet showing the biggest-to-smallest calls), I started to spot patterns that seemed like things I had seen before.

Reactor tuning a lot like SQL tuning

Long long ago, I learned SQL. As the saying goes, SQL isn’t rocket science. But understanding what is REALLY happening is the difference between a query taking twenty minutes vs. sub-second time to run.

So let’s back up and refresh things. In SQL, when you join two tables, it produces a cartesian product. Essentially, a table with n rows and a table with m rows will produce a table with n x m rows, combining every possible pair. From there, you slim it down based on either relationships or filtering of the data. What DBMS engines have had decades to learn is how to read your query and figure out the BEST order to do all these operations. For example, many queries will apply filtering BEFORE building the cartesian product.

In Reactor, when you generate a flux of data and then flatmap it to another flux, you’re doing the same thing. My reactor flow, meant to cache up a list of apps for Spinnaker, would scan a list of eighty existing apps and then perform a domain lookup…eighty times! Funny thing is, they were looking up the same domain EIGHTY TIMES! (SQL engines have caching…Reactor doesn’t…yet).

So I rang up my most experienced Reactor geek, and he told me that it’s more performant to simply fetch all the domains in one call, first, and THEN do the flatmap against that in-memory data structure.

Indexing vs. full table scans

When I learned how to do EXPLAIN PLANs in SQL, I was ecstatic. That tool showed me exactly what was happening in what order. And I would be SHOCKED at how many of my queries performed full table scans. FYI: they’re expensive. Sometimes it’s the right thing to do, but often it isn’t. Usually, searching every book in the library is NOT as effective as looking in the card catalog.

So I yanked the code that did a flatmap way at the end of my flow. Instead, I looked up ALL domains in a CF space up front and passed along this little nugget of data hop-to-hop. Then, when it came time to deploy this knowledge, I just flatmapped against this collection of in-memory data. Gone were all those individual calls to find each domain.

.then(apps ->
	apps.stream()
		.findFirst()
		.map(function((org, app, environments) -> Mono.when(
			Mono.just(apps),
			CloudFoundryJavaClientUtils.getAllDomains(client, org))))
		.orElse(Mono.when(Mono.just(apps), Mono.empty())))

This code block, done right after fetching application details, pauses to getAllDomains(). Since that lookup should only be done once, we only need one instance from our passed-along data structure. The collection is gathered, wrapped up in a nice Mono, and passed along with the original apps. Optionally, if there are no domains to fetch, an empty Mono is passed along.

(NOTE: Pay it no mind that after all this tweaking, the Ops guy pointed out that routes were ALREADY included in the original application details call, eliminating the need for this. The lesson on fetching a whole collection up front is still useful.)

To filter or not to filter, that is the question

Filtering is an art form. Simply put, a filter is a function to reduce rows. Being a part of both Java 8’s Stream API as well as Reactor’s Flux API, it’s pretty well known.

The thing to watch out for is whether the filter operation is expensive and whether it sits inside a tight loop.

Loop? Reactor flows don’t use loops, right? Actually, that’s what flatmaps really are. When you flatmap something, you are embedding a loop to go over every incoming entry and possibly generate a totally different collection. If this internal operation inside the flatmap involves a filter that makes an expensive call, you might be repeating that call too many times.

I used to gather application details and THEN apply a filter to find out whether or not this was a Spinnaker application vs. someone else’s non-Spinnaker app in the same space. Turns out, finding all those details was expensive. So I moved the filter inward so that it would be applied BEFORE looking up the expensive details.

Look at the following code from getApplications(client, space, apps):

return requestApplications(cloudFoundryClient, apps, spaceId)
	.filter(applicationResource ->
		applicationResource.getEntity().getEnvironmentJsons() != null &&
		applicationResource.getEntity().getEnvironmentJsons().containsKey(CloudFoundryConstants.getLOAD_BALANCERS())
	)
	.map(resource -> Tuples.of(cloudFoundryClient, resource))
	.switchIfEmpty(t -> ExceptionUtils.illegalArgument("Applications %s do not exist", apps));

The code above is right AFTER fetching application information, but BEFORE going to related tables to find things such as usage, statistics, etc. That way, we only go for the ones we need.

Sometimes it’s better to fetch all the data, fetch all the potential filter criteria, and merge the two together. It requires a little more handling to gather this together, but again this is what we must do to tailor such flows.

Individual vs. collective fetching

Something I discovered was that several of the Cloud Foundry APIs have an “IN” clause. This means you can feed them a collection of values to look up. Up until that point, I had been flatmapping my way into these queries, meaning that for each application name in my flux, a separate REST call was being made.

Peeking at the lower level APIs, I spotted where I could give it a list of application ids vs. a single one. To do that, I had to rewrite my flow. Again. By putting together a collection of ids, by NOT flatmapping against them (which would unpack them), but instead using collectList, I was able to fetch the next hop of data in one REST call (not eighty), shown below:

return PaginationUtils
	.requestClientV2Resources(page -> client.spaces()
		.listApplications(ListSpaceApplicationsRequest.builder()
			.names(applications)
			.spaceId(spaceId)
			.page(page)
			.build()))
	.map(OperationUtils.<ApplicationResource, AbstractApplicationResource>cast());

cf-java-client has a handy utility to wrap paged result sets, iterating and gathering the results…reactively. Wrapped inside is the gold: client.spaces().listApplications(). There is a higher level API, the operations API, but its focus is replicating the CF CLI experience. The CF CLI isn’t built to do bulk operations, but instead operates on one application at a time.

While nice, it doesn’t scale. At some point, it can be a jump to move to the lower level APIs, but the payoff is HUGE. Anyhoo, by altering this invocation to pass in a list of application names, and following all the mods up the stack, I was able to collapse eighty calls into one. (Well, two, since the page size is fifty.)

You reap what you sow

By spending about two weeks working on this, I was able to replace a polling cycle that performed over seven hundred REST calls with one making less than fifty. That’s basically a 95% reduction in network traffic, and it nicely put my app in the safe zone for the newly imposed rate limit.

I remember the Ops guy peeking at the new state of things and commenting, “I’m having a hard time spotting a polling cycle” to which the lead for Cloud Foundry Java Client replied, “sounds like a good thing.”

Yes it was. A VERY good thing.