
canonical-ci-engineering team mailing list archive

Re: objections to uservice_utils project on pypi?


Hi Evan *et al*,

On Fri, Apr 3, 2015 at 12:08 AM, Evan Dandrea <evan.dandrea@xxxxxxxxxxxxx>
wrote:

> I found this to be a compelling and concise defence:
>
>
> http://www.simplicityitself.com/our-team/sharing-code-between-microservices/
>

Thanks for the link. I read the article last Thursday and let it sit in my
head for the long weekend. This will almost certainly become a blog post at
some point, but I thought I'd try out my reply on y'all first.

First, I think the entire article is *technically correct* (which, as
Hubert Farnsworth would point out, "is the best kind of correct"), but
misses the point in a few places.

The paragraph I have the most trouble with is:

> When you build a new service and share nothing, you feel like you have
> ‘lost’ the functionality that is available in the other services. In fact,
> you’ve traded it away to give yourself more isolation, by sharing more, you
> lose more isolation.  *You gain the convenience of using existing code,
> at the cost of coupling more tightly to the other services in the system.*


(the emphasis is mine). Again, this is technically correct, but kind of
misses the point: the Python standard library is a library, so do we think
all our services that use it are "coupled more tightly" to other services?
I'd suggest not. What about kombu? swiftclient & friends? Again, I'd
suggest not.

What makes it OK for us to rely on any of these libraries across all our
services? I think it's because we have some reasonable assurance that these
libraries won't update from beneath us and break backwards compatibility. I
strongly suspect that the author is talking about sharing business-logic-related
classes, rather than infrastructure components, which is what we are sharing.

The author goes on to say:

> This demonstrably does occur even when you are sharing only technical
> libraries.  I’m sure that any Netflix guys (or other larger microservices
> implementations) reading can attest to “build the world” CI storms if one
> of the base libraries are changed. This is witnessing the infamous ripple
> effect in full force. Those seemingly innocuous, merely technical libraries
> will start to gain a whiff of ‘scary’.  This is because altering them will
> cause a large scale redeploy of services unrelated the one being developed.
> Developers will start to avoid them if possible, for fear of the unknown
> effect that you may create.  There lies legacy …


If his hypothetical Netflix case is accurate, that suggests that Netflix is
either:

A) Not maintaining backwards compatibility between libraries and the
services that use them (thereby necessitating a full rebuild of the world
in order to find out what broke and fix it before deploying)

B) ... or always deploying services with the latest version of every
dependent library, as opposed to deploying with well-known versions of
those libraries.

C) ... or both :D

Why else would you need to rebuild anything when releasing a new version of
a library?
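
For example (the package names and version numbers below are purely
illustrative, not our actual pins), each service could pin its copy of the
shared library in its own requirements file:

    # service A's requirements.txt (names and versions are made up)
    uservice-utils==1.2.0
    kombu==3.0.24
    python-swiftclient==2.3.1

A new release of uservice-utils then changes nothing for service A until its
owners decide to bump that pin and redeploy, on their own schedule; there's
no "build the world" storm.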

I think that, by pinning the libraries we depend on, and by being good
library authors who follow reasonable backwards-compatibility & deprecation
practices, we will be in a significantly better place than the above Netflix
scenario. Another way to think about this is: we should pretend that
uservice-utils is being used by thousands of other people whom we want to
look after. Don't release backwards-incompatible code. Do write good,
well-tested code. Do deprecate broken code sensibly. Pretend you're writing
a "real" library, and don't treat it as a dumping ground for everything
that doesn't fit anywhere else.
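
As a rough sketch of what "deprecate broken code sensibly" could look like
(the function names below are hypothetical, not anything that actually
exists in uservice-utils), the library keeps the old entry point working
while pointing callers at its replacement:

    import warnings

    def send_message(payload, queue_name="default"):
        """New, preferred API (hypothetical)."""
        # In a real library this would publish the payload to the named queue.
        return {"queue": queue_name, "payload": payload}

    def publish(payload):
        """Old API: still works, but warns callers to migrate."""
        warnings.warn(
            "publish() is deprecated; use send_message() instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return send_message(payload)

Services that pin an older release never see the warning; services that
upgrade get a clear nudge, and the old function is only removed after a
sensible deprecation period.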

One final bit in that article piqued my interest:

> This is a fairly stark message, but one that I can’t avoid. If you want
> convenience, build a monolith. They are significantly quicker to start a
> new project with, quicker to be able to alter the service boundaries as
> desired. To be able to get that initial jolt of primary development, they
> are the right answer.
> For more sustained innovation and ability to change anything as desired,
> you must work to reduce the sharing between areas of the system to allow
> them to move independently of each other.

...

> The natural progression that we now expect in new Microservices
> implementations is :-
>
>    - Prepare the ground (the ability to break services off)
>    - Build a Monolith in the primary development phase
>    - As areas of the system stop changing, break them off as new services
>      behind their stable barrier.
>    - At this point, they should share almost nothing with the monolith it
>      has been pulled out from
>    - During ongoing/ secondary development, new areas start life as new
>      services.

This bit really resonated with me. One of the main aspects of the arguments
we keep having is that we're trying to predict what will change in the
future. We keep arguing about "what happens if service A wants to change
its data payload", or "what happens if we need to add a new type of
worker", and so on. The point is, we're *guessing* at what the future change
will be. We've already seen that we learn more about the system as we build
it. From the quoted text above, perhaps we should consider building a
monolith initially, and think about splitting it out into micro-services
when we know what's going to change, and what's going to remain constant?

Perhaps doing greenfield development by spinning up micro-services from
scratch isn't such a good idea (I'm not suggesting that it's a *bad* idea,
I just don't know either way).


Thanks again for the article - the more viewpoints on this the better. I
look forward to picking this discussion up over a beer in Austin :D
 --
Thomi Richards
thomi.richards@xxxxxxxxxxxxx
