REST, SOAP, and CORBA, i.e. how we got here

I keep running into ideas, thoughts, and decisions swirling around REST. So many things keep popping up that make me want to scream, “Just read the history, and you’ll understand it!!!”

So I thought I might pull out the good ole Wayback Machine, travel back to an early part of my career, and discuss a little bit about how we all got here.

In the good ole days of CORBA

This is an ironic expression, given computer science can easily trace its roots back to WWII and Alan Turing, which is way before I was born. But let’s step back to somewhere around 1999-2000 when CORBA was all the rage. This is even more ironic, because the CORBA spec goes back to 1991. Let’s just say, this is where I come in.

First of all, do you even know what CORBA is? It is the Common Object Request Broker Architecture. To simplify, it was an RPC protocol based on the Proxy pattern. You define a language-neutral interface, and CORBA tools generate client and server code in your language of choice.

The gold in all this? Clients and servers could be completely different languages. C++ clients talking to Java servers. Ada clients talking to Python servers. Everything from the interface definition language to the wire protocol was covered. You get the idea.

Up until this point, clients and servers spoke binary protocols bound up in the language. Let us also not forget, that open source wasn’t as prevalent as it is today. Hessian RPC 1.0 came out in 2004. If you’re thinking of Java RMI, too bad. CORBA preceded RMI. Two systems talking to each other were plagued by a lack of open mechanisms and tech agreements. C++ just didn’t talk to Java.

CORBA is a cooking!

With the rise of CORBA, things started cooking. I loved it! In fact, I was once known as Captain Corba at my old job, because I was really up to snuff on its ins and outs. In a rare fit of nerd nirvana, I purchased Steve Vinoski’s book Advanced CORBA Programming with C++, and had it autographed by the man himself when he came onsite for a talk.

Having written a mixture of Ada and C++ at the beginning of my career, it was super cool watching another team build a separate subsystem on a different stack. Some parts were legacy Ada code, wrapped with an Ada-Java-CORBA bridge. Fresh systems were built in Java. All systems spoke smoothly.

The cost of CORBA

This was nevertheless RPC. Talking to each other required meeting and agreeing on interfaces. Updates to interfaces required updates on both sides. The process to make updates was costly, since it involved multiple people meeting in a room and hammering out these changes.

The high specificity of these interfaces also made the interface brittle. Rolling out a new version required ALL clients upgrade at once. It was an all or nothing proposition.

At the time, I was involved with perhaps half a dozen teams, and the actual user base was quite small. So the cost wasn’t huge compared to today’s web-scale problems.

Anybody need a little SOAP?

After moving off that project, I worked on another system that required integrating remote systems. I rubbed my hands together, ready to put my polished CORBA talents to good use again, but our chief software engineer duly informed me of a new technology being evaluated: SOAP.

“Huh?”

The thought of chucking all this CORBA talent did not excite me. A couple of factors transpired FAST that allowed SOAP to break onto the scene.

First of all, this was Microsoft’s response to the widely popular CORBA standard. Fight standards with standards, ehh? In that day and age, Microsoft fought valiantly to own any stack, end-to-end (and they aren’t today???? Wow!). It was built up around XML (another new acronym to me). At the time of its emergence, you could argue it was functionally equivalent to CORBA. Define your interface, generate client-side and server-side code, and it’s off to the races, right?

But another issue was brewing in CORBA land. The OMG, the consortium responsible for the CORBA spec, had left gaps in the spec’s coverage. Kind of like trying to ONLY write SQL queries with ANSI SQL. Simply not good enough. To cover these gaps, every vendor had proprietary extensions. The biggest vendor was Iona, an Irish company that at one time held 80% of the CORBA marketshare. We knew them as “I-own-ya'” given their steep price.

CORBA was supposed to be cross-vendor supported, but it wasn’t. You bought all your middleware from the same vendor. Something clicked, and LOTS of customers dropped Iona. This galvanized the rise of SOAP.

But there was a problem

SOAP took off and CORBA stumbled. To this day, we have enterprise customers avidly using Spring Web Services, our SOAP integration library. I haven’t seen a CORBA client in years. Doesn’t mean CORBA is dead. But SOAP moved into the strong position.

Yet SOAP still had the same fundamental issue: fixed, brittle interfaces that required agreement between all parties. Slight changes required upgrading everyone.

When you build interfaces designed for machines, you usually need a high degree of specification. Precise types, fields, all that. Change one tiny piece of that contract, and clients and servers are no longer talking. Things were highly brittle. But people had to chug along, so they started working around the specs any way they could.

I worked with a CORBA-based, off-the-shelf ticketing system. It had four versions of its CORBA API to talk to. A clear symptom of the problem with pure RPC (CORBA or SOAP).

Cue the rise of the web

While “rise of the web” sounds like some fancy Terminator sequel, the rampant increase in the web becoming the platform of choice for e-commerce, email, and so many other things caught the attention of many, including Roy Fielding.

Roy Fielding was a computer scientist who had been involved in more than a dozen RFC specs that governed how the web operated, the biggest arguably being the HTTP spec. He understood how the web worked.

The web had responded to what I like to call brute economy. If literally millions of e-commerce sites had been based on the paradigm of brittle RPC interfaces, the web would never have succeeded. Instead, the web was built up on lots of tiny standards: exchanging information and requests via HTTP, formatting data with media types, a strict set of operations known as the HTTP verbs, hypermedia links, and more.

But there was something else in the web that was quite different. Flexibility. By constraining the actual HTML elements and operations that were available, browsers and web servers became points of communication that didn’t require coordination when a website was updated. Moving HTML forms around on a page didn’t break consumers. Changing the text of a button didn’t break anything. If the backend moved, it was fine as long as the link in the page’s HTML button was updated.

The REST of the story

In his doctoral dissertation published in 2000, Roy Fielding attempted to take the lessons learned from building a resilient web and apply them to APIs. He dubbed this Representational State Transfer, or REST.

Until then, things like CORBA, SOAP, and other RPC protocols were based on the faulty premise of defining, with high precision, the bits of data sent over the wire and back. Things that are highly precise are the easiest to break.

REST is based on the idea that you should send not just data but also information on how to consume that data. And by adopting some basic constraints, clients and servers can work out a lot of details through a more symbiotic set of machine + user interactions.

For example, sending a record for an order is valuable, but it’s even handier to send over related links, like the customer that ordered it, links to the catalog for each item, and links to the delivery tracking system.

Clients don’t have to use all of this extra data, but by providing enough self discovery, clients can adapt without suffering brittle updates.
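To make that concrete, here is a minimal sketch in Python, with plain dicts standing in for a JSON payload. The field names and URIs are invented for illustration:

```python
# A HAL-flavored order representation (hypothetical fields and URIs):
# the data itself, plus links describing related resources a client
# may choose to follow.
def order_representation(order_id, customer_id, items):
    """Build an order record that carries navigation links, not just raw data."""
    return {
        "status": "SHIPPED",
        "items": items,
        "_links": {
            "self": {"href": f"/orders/{order_id}"},
            "customer": {"href": f"/customers/{customer_id}"},
            "tracking": {"href": f"/orders/{order_id}/tracking"},
            # one catalog link per line item
            "catalog": [{"href": f"/catalog/{item}"} for item in items],
        },
    }

rep = order_representation(523, 42, ["widget-a", "widget-b"])
```

A client that cares about delivery follows the tracking link; one that doesn’t simply ignores it.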

The format of data can be dictated by media types, something that made it easy for browsers to handle HTML, image files, PDFs, etc. Browsers were coded once, long ago, to render a PDF document inline including a button to optionally save. Done and done. HTML pages are run through a different parser. Image files are similarly rendered without needing more and more upgrades to the browser. With a rich suite of standardized media types, web sites can evolve rapidly without requiring an update to the browser.

Did I mention machine + user interaction? Instead of requiring the client to consume links, it can instead display the links to the end user and let them actually click. We call this well-known technique hypermedia.

To version or not to version, that is the question!

A question I get anytime I discuss Spring Data REST or Spring HATEOAS is how to version APIs. To quote Roy Fielding: don’t do it! People don’t version websites. Instead, they add new elements, and gradually implement the means to redirect old links to new pages. A better summary can be found in this interview with Roy Fielding on InfoQ.

When working on REST APIs and hypermedia, your probing question should be, “if this were a website viewed by a browser, would I handle it the same way?” If it sounds crazy in that context, then you’re probably going down the wrong path.

Imagine a record that includes both firstName and lastName, but you want to add fullName. Don’t rip out the old fields. Simply add new ones. You might have to implement some conversions and handlers to help older clients not yet using fullName, but that is worth the cost of avoiding brittle changes to existing clients. It reduces the friction.
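That additive evolution can be sketched in a few lines of Python (the record shape is hypothetical): derive the new field, keep serving the old ones, and existing clients never notice.

```python
# Additive API evolution (hypothetical fields): add fullName without
# removing firstName/lastName, so older clients keep working unchanged.
def person_with_full_name(person):
    rep = dict(person)  # firstName and lastName stay put for old clients
    rep["fullName"] = f"{person['firstName']} {person['lastName']}"
    return rep

rep = person_with_full_name({"firstName": "Ada", "lastName": "Lovelace"})
# old clients still read firstName/lastName; new clients read fullName
```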

In the event you need to REALLY make a big change to things, a simple version number doesn’t cut it. On the web, it’s called a new website. So release a new API at a new path and move on.

People clamor HARD for getting the super secret “id” field from a data record instead of using the “self” link. HINT: If you are pasting together URIs to talk to a REST service, something is wrong. It’s either your approach to consuming the API, or the service itself isn’t giving you any/enough links to navigate it.

When you get a URI, THAT is what you put into your web page, so the user can see the control and pick it. Your code doesn’t have to click it. Links are for users.
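The difference in code looks roughly like this (hypothetical payload shape, Python for illustration):

```python
# Brittle: the client rebuilds the URI from an id and a hard-coded template.
def brittle_uri(record):
    return "http://example.com/orders/" + str(record["orderId"])

# Resilient: the client follows whatever link the server handed out.
def follow(record, rel):
    return record["_links"][rel]["href"]

# Suppose the server has since moved orders under /v2:
record = {
    "orderId": 523,
    "_links": {"self": {"href": "http://example.com/v2/orders/523"}},
}
# brittle_uri(record) now points at a dead address, while
# follow(record, "self") still works without any client change.
```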

Fighting REST

To this day, people are still fighting the concept of REST. Some have fallen in love with URIs that look like http://example.com/orders/523/lineitem/14 and http://example.com/orders/124/customer, thinking that these pretty URLs are the be-all/end-all of REST. Yet they code with RPC patterns.

In truth, formatting URLs this way, instead of as http://example.com/orders?q=523&li=14 or http://example.com/orders?q=124&mode=customer is to take advantage of HTTP caching when possible. A Good Idea(tm), but not a core tenet.

As a side effect, handing out JSON records like {orderId: 523} has forced clients to paste together links by hand. These links, not formed by the server, are brittle and just as bad as SOAP and CORBA, violating the whole reason REST was created. Does Amazon hand you the ISBN code for a book and expect you to type it into the “Buy It Now” button? No.

Many JavaScript frameworks have arisen, some quite popular. They claim to have REST support, yet people come onto chat channels asking how to get the “id” for a record so they can parse or assemble a URI.

BAD DESIGN SMELL! URIs are built by the server along with application state. If you clone state in the UI, you may end up replicating functionality and hence coupling things you never intended to.

Hopefully, I’ve laid out some of the history and reasons that REST is what it is, and why learning what it’s meant to solve can help us all not reinvent the wheel of RPC.

#opensource is not a charity

Logging onto my laptop this morning, I have already seen two tickets opened by different people clamoring for SOMEONE to address their stackoverflow question. They appeared to want an answer to their question NOW. The humor in all this is that the issue itself is only seven hours old, with the person begging for a response when their question is barely three hours old. Sorry, but open source is not a charity.

If you have a critical issue, perhaps you should think about paying for support. It’s what customers do when they need a priority channel. It definitely isn’t free as in no-cost. Something that doesn’t work is opening a ticket with nothing more than a link to your question.

Open source has swept the world. If you don’t get onboard to using it, you risk being left in the dust. But too many think that open source is free, free, FREE. That is not the case. Open source means you can access the source code. Optimally, you have the ability to tweak, edit, refine, and possibly send back patches. But nowhere in there is no-cost support.

In a company committed to open source, we focus on building relationships with various communities. The Spring Framework has grown hand over fist in adoption and driven much of how the Java community builds apps today. Pivotal Cloud Foundry frequently has other companies sending in people to pair with us. It’s a balancing act when trying to coach users to not assume their question will be answered instantly.

I frequent twitter, github, stackoverflow, and other forums to try and interact with the community. If at all possible, I shoot to push something through. Many times, if we’re talking about a one-line change, it’s even easier. But at the end of the day, I have to draw a line and focus on priorities. This can irk some members not aware of everything I’m working on. That is a natural consequence.

Hopefully, as open source continues to grow, we can also mature people’s expectations between paid and un-paid support. Cheers!

P.S. For a little while longer, there is a coupon code to Learning Spring Boot for 50% off (Python Testing Cookbook as well!)

Coupon code: LSPT50

Spring Boot is still a gem…waiting to be discovered

Last week, I had the good fortune of speaking twice at the DevNexus conference, the 2nd largest Java conference in North America. It was awesome! Apart from being a total geek fest with international attendance, it was a great place to get a bigger picture of the state of the Java community.

A bunch of people turned up for my Intro to Spring Data, where we coded up an employee management system from scratch inside of 45 minutes. You can see about 2/3 of the audience right here.

It was a LOT of fun. It was like pair programming on steroids when people helped me handle typos, etc. It was really fun illustrating how you don’t have to type a single query to get things off the ground.

What I found interesting was how a couple of people paused me to ask questions about Spring Boot! I wasn’t expecting this, so it caught me off guard when I was asked “how big is the JAR file you’re building?” “How much code do you add to make that work?” “How much code is added to support your embedded container?”

Something I tackled in Learning Spring Boot was showing people the shortest path to get up and running with a meaningful example. I didn’t shoot for contrived examples. What use are those? People often take code samples and use them as the basis for a real system. That’s exactly the audience I wrote for.

People want to write simple apps with simple pages leveraging simple data persistence. That is Spring Boot + Spring Data out of the starting gate. Visit http://start.spring.io and get off the ground! (Incidentally, THIS is what my demo was about.)

I was happy to point out that the JAR file I built contained a handful of libraries along with a little “glue code” to read the JAR-within-a-JAR structure plus the autoconfiguration stuff. I also clarified that the bulk of the code is actually your application + Tomcat + Hibernate. The size of Boot’s autoconfiguration is nothing compared to all that. Compare that to the time and effort to write deployment scripts, maintenance scripts, and whatever other glue you hand-write to deploy to an independent container. Spring Boot is a life saver in getting from concept to market.

It was fun to see at least one person in the audience jump to an answer before I could. Many in the audience were already enjoying Spring Boot, but it was fun to see someone else (who by the way came up to ask more questions afterward) discovering the gem of Spring Boot for the first time.

To see the glint in someone’s eye when they realize Java is actually cool? Well, that’s nothing short of amazing.

LVM + RAID1 = Perfect solution to upgrade woes

As I said before, I’m rebuilding an old system and have run into sneaky issues. While trying to upgrade from Ubuntu 12.04 to 14.04, it ran out of disk space at the last minute and retreated to the old install. Unfortunately, this broke its ability to boot.

Digging in, it looks like Grub 2 (the newer version) can’t install itself properly due to insufficient space at the beginning of the partition. Booting up from the other disk drives (from a different computer), I have dug in to solve the problem.

How do you repartition a disk drive used to build a RAID 1 array, that itself is hosting a Linux Volume Group?

It’s not as hard as you think!

A mirror RAID array means you ALWAYS have double the disk space needed. So…I failed half of my RAID array and removed one of the drives from the assembly. Then I wiped the partition table and built a new one…starting at a later cylinder.

POOF! More disk space instantly available at the beginning of the disk for GRUB2.

Now what?!? I create a new RAID array partition in degraded mode and add it to the LVM volume group as a new physical volume.

Then I launch LVM’s handy pvmove command, which moves everything off the old RAID array and onto the new one.

Several hours later, I can reduce the volume group and remove the older RAID array. With everything moved onto the newly resized partition, I can then destroy and rebuild the old disk with the same geometry as the new one, pair it up, and BAM! The RAID array is back in action; just let it sync back up.

This should line things up to do a GRUB2 chroot installation.
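For reference, here is roughly how that whole dance looks as shell commands. The device and volume names (/dev/md0, /dev/sdb1, vg0, and so on) are made up for illustration, and every one of these commands is destructive, so treat this as an outline rather than a script:

```shell
# Fail and remove one half of the mirror
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# (Repartition /dev/sdb here, starting the partition at a later cylinder
# to leave room for GRUB2.)

# Build a new, degraded RAID1 array on the repartitioned disk
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing

# Add the new array to the LVM volume group as a physical volume
pvcreate /dev/md1
vgextend vg0 /dev/md1

# Migrate every extent off the old array (this is the hours-long step)
pvmove /dev/md0 /dev/md1

# Drop the old array from the volume group and tear it down
vgreduce vg0 /dev/md0
mdadm --stop /dev/md0

# (Repartition the old disk to match the new geometry, then re-add it
# and let the mirror sync back up.)
mdadm /dev/md1 --add /dev/sda1
```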

With LVM it’s easy to shrink and expand partitions, reclaim some spare space, and move stuff around. And you are nicely decoupled from the physical drives.

With RAID1, you have high reliability through mirroring. And as a side effect, you always have a spare disk on hand if you need to move data around. I once moved live MythTV video data off the system to reformat my video partition into xfs.

Out with old and in with the new

I have been waiting a long time to resurrect an old friend of mine: my MythTV box. I built that machine ten years ago. (I’d show you the specs, but they’re locked away ON the machine in an antique mediawiki web server). It runs Root-on-LVM-on-Raid top to bottom (which, BTW, requires LILO).

It was a great project to build my own homebrew DVR. But with the advent of digital cable and EVERYTHING getting scrambled, those days are gone. So it’s sat in the corner for four years. FOUR YEARS. I’m reminded of this through periodic reports from CrashPlan.

I started to get this idea in my head that I could rebuild it with Mac OSX and make it a seamless backup server. Until I learned it was too old to support OSX. So it continued to sit, until I learned that I could install the right bits for it to speak “Apple” and hence become a Time Machine capsule.

So here we go! I discovered that I needed a new VGA cable and power cord to hook up the monitor. After that came in, I booted it up…and almost cried. Almost.

As I logged in, I uncovered neat stuff and some old commands I hadn’t typed in years. But suffice it to say, it is now beginning its first distro upgrade (probably more to come after that), and when done, I’ll migrate it off of being a Mythbuntu distro and instead pick mainline Ubuntu (based on GNOME).

Once that is done, I hope to install Docker so I can spin up needed services (like netatalk) much faster, and tap its ability to provide an additional layer of home support for both my and my wife’s Mac laptops.

Book Report: Area 51 by Bob Mayer

As indicated before, I started reading breakaway or debut novels by prominent authors last year. And here I am to deliver another book report!

Area 51 – Bob Mayer

Bob Mayer was one of the speakers at last year’s Clarksville Writer’s Conference. He was hilarious, gung ho, maybe a tad bombastic (retired Green Beret), and a best-selling author with no hesitation about bragging that he makes about $1000/day with his trove of published novels.

Like or hate his personality, he has succeeded, so I wanted to read one of his first works. It turns out this novel was released under the pen name “Robert Doherty” through classic channels. He has since gotten the IP rights for all these past novels reverted back to him, a business move worthy of respect, and moved on to e-books.

Back to the story. It really is pretty neat. The writing is crisp, the dialog cool. I kept turning page after page, wanting to know what happens. I also had an inbuilt curiosity as to what this author would do. I have seen TV shows set in Area 51 like Seven Days, Stargate: SG-1 (based near Area 51 and steeped in similar military conspiracy), and other movies.

There was a bit of investigative journalism gone wrong combined with other historical legends. I must admit that part (won’t give it away!) really whetted my appetite.

Bob Mayer indeed knows how to write. He knows how to make you turn the pages. I think I spent 3-4 days tops reading this book. I’ll confess it didn’t match my hunger in reading the debut Jack Reacher novel KILLING FLOOR. But then again, I’m finding it hard to spot the next novel that will compete on that level.

I’ll shoot straight with you on this: it wasn’t as hard to move on to another novel by another author when I finished as it has been with certain other novels. There were other series novels I read last year that made it hard to stop and move on instead of continuing the series. This one wasn’t the same. Will I ever go back and read more of Bob Mayer’s books?

Maybe/maybe not. I have read some of his other non-fiction books on writing craft, so in a sense, the man has already scored additional sales. It takes a top notch story with top notch characters and top notch writing to score that with me, and Jack Reacher has made me picky. Don’t take it as a knock.

If you like SciFi and military conspiracies, you’ll find this book most entertaining.

Happy reading!

Book Report: The Andromeda Strain by Michael Crichton

Over the past year, I have been on a bit of a reading binge. I got this idea at the 2015 Clarksville Writer’s Conference to read the debut novel of top notch authors. Instead of reading a series or stack of novels by one author, I’ve been jumping from author to author, looking for a cross section of writing styles, views on things, and varied tastes.

This is my first of many book reports, so without further ado….

The Andromeda Strain – Michael Crichton

There was a movie by the same name released in 1971. As a kid, I had seen it a dozen times. Okay, maybe not that much, but anytime I spotted it, I had to stop what I was doing and watch it. It’s so cool, despite its dated look. When I learned, years later, that this was the breakaway novel (not debut) of the famous Harvard doctor Michael Crichton, it blew me away. I finally bit the bullet and read it last year.

A team of scientists battle a strange disease that threatens all of mankind. But instead of being loaded with cliches, the scientists battle it with real science. And they have real, believable issues that hamper their pursuit of a cure.

One scientist spots a key symptom early on that would result in a solution, but a strange, unexplainable incident causes him to forget this epiphany. Having seen the movie, I knew what happened. I won’t spoil it for you and tell you what it is, but suffice it to say that I have suffered the same in the past, and this connected with me on a personal level.

Michael Crichton has a strong basis in biological science with his medical education. He clearly shows preferences for the hard sciences as did Isaac Asimov. He takes things into the realm of “this may not exist today, but I believe it could in the future.”

The novel isn’t as dated as the movie. The scenes with the military sound realistic. I can visualize the parts in the labs where experiments are conducted. I may not be on top of medical research, so perhaps some of the stuff mentioned is ancient. But it gripped me. And it doesn’t slow down and bore you with research, but instead makes things exciting.

I was pleasantly surprised to learn that little was changed from novel to movie. The novel has an all-male cast of characters, whereas the movie changed a key doctor to a woman, for the better. Kate Reid delivers a superior performance as a sassy, knows-what-she-knows microbiologist. But the core story and the big wrinkles are all there. Makes me want to go and watch the movie, again.

The whole thing is cutely wrapped up as a government memo you are reading implying this event DID happen. I always enjoy little bits like that, and I hope you do as well.

Happy reading until my next book report!

.@ScrivenerApp – The Ultimate #NoSQL Database

Over the past year, I have dived head first into using Scrivener for my writing efforts. The thing is amazing!

Scrivener is a writer’s tool, built by writers for writers. It costs about $30. I couldn’t put my finger on what was so cool about it until I read this.

tl;dr – Scrivener puts your story/case/project into a database, Microsoft Word puts what you’re doing into a typesetter. Typesetters optimize for printout, databases optimize for reading/writing/updating information.

NoSQL Database

NoSQL data stores have gained big popularity over the past ten years. Why? Their charm is being schema-less.

schema-less – data not required to adhere to a certain structure

For years, people have adhered to SQL, the codified and accepted standard for binding data to a strong structure. SQL comes loaded with lots of promises, which it indeed delivers. What are they?

If you define the structure of your data upfront, and observe other related practices (like 3rd Normal Form, i.e. 3NF; ACID, …), your data store will…

  • have maximum efficiency in storage by not duplicating data
  • have maximum efficiency in maintenance by not accidentally updating data in one place but forgetting to update in another place
  • get ALL the results when you query the database
  • ensure ALL inputs are committed to the database or none

These sound great, until you reach the era we have entered. People have discovered that all the guarantees of schema-driven data have costs. And costs that are proportional to your volume of data can catch up and cripple you.

We have discovered that not ALL data needs this amount of guarantee. Different data stores optimize in other ways, solving different problems. And thus was born the schema-less data store revolution.

Scrivener as a NoSQL data store

(Screenshot: left – binder of folders with leaves; right – one leaf)

How does Scrivener work? Out of the box, it has a hierarchical nature. You can create folders within folders within folders. Each folder can have metadata about the folder itself, and it can contain leaves as well.

Click on a folder and you can view/edit all its leaves at once. Click on a leaf, view and edit a single leaf.

Folders and leaves can be converted from one to the other. The only difference is that folders are also containers, able to hold more folders/leaves.

The content can be text (our primary medium as writers) or other types (PDF, images, videos, …), meaning folders don’t have to just contain your story. Use it to capture your research, character notes, whatever!
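That binder structure can be pictured as a nested tree, something like this rough Python sketch (the titles and field names are invented):

```python
# Rough model of a Scrivener binder: folders contain folders and leaves,
# every node can carry metadata, and leaves hold the actual content.
binder = {
    "type": "folder", "title": "Manuscript", "meta": {"status": "draft"},
    "children": [
        {"type": "folder", "title": "Chapter 1", "children": [
            {"type": "leaf", "title": "Scene 1", "content": "text..."},
            {"type": "leaf", "title": "Scene 2", "content": "text..."},
        ]},
        {"type": "leaf", "title": "Character notes", "content": "notes..."},
    ],
}

def all_leaves(node):
    """Walk the tree, yielding every leaf no matter how deeply nested."""
    if node["type"] == "leaf":
        yield node
    else:
        for child in node.get("children", []):
            yield from all_leaves(child)

titles = [leaf["title"] for leaf in all_leaves(binder)]
```

Clicking a folder to view all its leaves at once is, in this picture, just walking the subtree.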

Breaking out of the box

When you first install Scrivener, it comes with a pre-written manuscript and a tutorial. You walk through it, learning how to use the tool. It’s really quite clever and brought me up to speed, fast!

But at some point, you need to break out of the conventions and bend the tool to your needs. I first did so when I needed to sift through an extensive critique from my editor.

Last year, in the span of a 2-hour phone call, I wrote down two pages of notes in a LibreOffice document. Some points were high level; some were specific to a single sentence. I imported that document into Scrivener and took it apart, using another structure.

I split up the collection of notes into individual leaves, all contained in a single folder. This way, as I addressed each comment, I could flag it as complete inside Scrivener (I used a checkbox icon to indicate this).

I put the scene-specific notes in a sub-sub-folder. To tackle the fact that my editor had a different version of my manuscript, I dug that copy out of email and put it in there as well. Using that, I tracked down every page-specific comment and found its current scene.

Scrivener lets you put links between scenes (kind of like a MongoDB DBRef).

In a nutshell, I laid out my own structure, and then bent it as needed. Instead of bumping into it, like one often does with schema-based data stores, Scrivener accommodated my needs.

Spring Data Scrivener?

As a member of the Spring Data team, I’m truly amazed at how this release train of projects has leaped over the balkanized landscape of query languages. Use them when needed, but offloading typical queries to a framework is great!

I may have to keep my eye on the potential for writing apps that can query Scrivener manuscripts. They could lean on exactly the structure people are already putting into their projects.

Until then, I hope you poke your nose into Scrivener and see how it’s perhaps the most user friendly NoSQL data store put out there to solve a very popular problem.

Why software development is not for everyone

Have you ever had gobs of fun hacking away on a computer? Noodled with a piece of code that you discovered in the afternoon, and here it is, 2:00 a.m.? That’s a sign you may be a computer geek. That’s all and good, but the question that may come before you is, do you want to make this your dream job? Your career? Watch out, though. Software development is not for everyone.

I fear that some people may be getting into software development because computers are now hip and cool. And the money is good! I remember a time, several years ago, when I saw the corner turn: computer geeks were no longer the goofballs found in WarGames, but instead, totally righteous dudes.

What was the tip off? A TV ad for liquor which showed “Jim from IT” partying just like everyone else. TV ads are a bellwether for trends, because the people who write them are constantly polling groups of people to see what will resonate. When IT people had entered the main fray of the party-going crowd and were seen as a force to be reckoned with (err…a force to be advertised with), then game over.

The side effect of anything going mainstream is that others, who have no deep seated desire to submit and review pull requests on vacation (for the record, I’ve actually done both at Disney World. Top that!), will still flock to the field since it’s cool, and there’s money to be made. Am I saying that to get ahead, you must be a workaholic and eschew your family? Not at all. What I’m saying is that computer geeks who are in it for the challenge, and not just because it’s cool, can’t help but do what I just described.

So far, I’ve talked about hacking on bits of code at odd hours in odd places. But here is the real test, the true gambit to see if deep down you really are a totally righteous hacker dude: do you go after the most boring, challenging, mind numbingly difficult problems until the code surrenders its secrets to you?

Do you go after the most boring, challenging, mind-numbingly difficult problems until the code surrenders its secrets to you? –a sign of a righteous hacker

I have been working on something for several months that is coming to light in less than two weeks. (Stay tuned!) For MANY hours/days/weeks, every time I tweaked one line of code, I had to restart a microservice. That microservice would take, on average, two minutes to load up its data set from a remote system. Next, I went to another tab and clicked on a button to launch a process, and wait to see if my system detected it and processed it correctly. Got that? Edit one line of code, wait five minutes for the results, then debug the outcome. Rinse and repeat.

Aaaarrgggghhh!!!! <— that was my reaction on many days. And my follow-up was, “thou shalt not defeat me!” My street cred was at stake, and my own internal desire to beat this system into submission was there as well.

Suffice it to say, trudging through this task is not for the faint of heart. Software hobbyists need not apply.

I’ve always hated the expression “9-to-5 coder.” It’s meant to describe people who only work on the clock and never put in late hours. In general, I do that as well. At times, I have to alert my family that I’ll be putting in extra hours (especially the month before SpringOne each year), but in general, I have found that working smarter, not harder, and thinking out solutions before slinging code makes things more efficient.

So I prefer to think of developers not based on the hours they work, but rather by asking: are you a hobbyist or a diehard coder? Diehard coders will take the most ruthless problems and dig, dig, dig, dig, and dig until the answer is found. They are willing to take many hits, submit faulty patches, and screw things up big time, because the end game is important. Version control is our tool to save us from screwing things up, not timidity. Are you a hobbyist who fears making a big change? Or are you a diehard hacker ready to show your handiwork to others?

I remember this topic coming up when a website popped up offering bounties for people to solve problems. It was rightly criticized for encouraging people to go after easy-to-tackle issues, or to select issues that carried much public praise, like UI mods. But serious, professional hackers will take some of the most invisible things (like a 300ms slowdown) and go to extreme ends to unearth what’s wrong. Imagine trying to get ANYONE to pick up that issue for a bounty.

If such tasks sound outrageous or tedious, no problem. I understand. You are a hobbyist. Nothing wrong there. But if you’re ready to dive into the slimiest mud bath of code, smile. You are rare.

Darklight critique by best selling author @JerryBJenkins

Nothing beats getting solid, concrete feedback from a bestselling author. You can see my Darklight critique by Jerry Jenkins below.

I have the webinar keyed up to where he digs into my story and pulls no punches. The points he makes are amazing.

The blind leading the blind

If you meet up with a handful of wannabe authors, the odds of getting solid feedback aren’t stellar. Your chances begin to rise when you meet with published authors. Those who have been through the wringer of editors, publishers, and proofreaders may have more usable stuff to chew on. Find an author with 21 bestsellers, and you’ll no longer hear “your story is wonderful, dear,” from your spouse or your mother.

Okay, enough glee on my own story. I really recommend you go back and watch the whole thing from the beginning. There is one other 1-pager that gets picked apart like mine. The points he makes are great.

  • Avoid on-the-nose writing (telling us about stuff we all know).
  • Focus on nouns and verbs to keep it snappy and tight.
  • Don’t explain everything to us. Give the reader credit, a.k.a. give the audience 2+2. Let them figure out 4.