In a microservices architecture, why is it said to be bad to share REST client libraries? [closed]

We have 15 services built with Java Spring that talk to each other over REST.
Each time we add a new service to the pool, we write all of its code from scratch, including the REST client code that talks to the other services and the POJO classes used to map the requested resources.
We end up copying and pasting from the source code of other services into the new service.
I think it would be better to put all these POJOs and the REST client code into a library that all the services consume. It would save us a lot of coding, but "they" say we should not do that with microservices.
So, why is that?
We end up copying and pasting exactly the same code over and over again, and I don't see the difference.

Solutions/Answers:

Solution 1:

The main issue is coupling. Sam Newman, author of Building Microservices, puts it well:

In general, I dislike code reuse across services, as it can easily
become a source of coupling. Having a shared library for serialisation
and de-serialisation of domain objects is a classic example of where
the driver to code reuse can be a problem. What happens when you add a
field to a domain entity? Do you have to ask all your clients to
upgrade the version of the shared library they have? If you do, you
lose independent deployability, the most important principle of
microservices (IMHO).

Code duplication does have some obvious downsides. But I think those
downsides are better than the downsides of using shared code that ends
up coupling services. If using shared libraries, be careful to monitor
their use, and if you are unsure on whether or not they are a good
idea, I’d strongly suggest you lean towards code duplication between
services instead.

https://samnewman.io/blog/2015/06/22/answering-questions-from-devoxx-on-microservices/
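One common way to keep reuse without the coupling Newman describes is a "tolerant reader": each consumer maps only the fields it actually uses and ignores the rest, so the producer can add fields without every client having to upgrade a shared POJO library. The sketch below is illustrative and uses only the JDK so it stays self-contained (with Jackson, the equivalent is `@JsonIgnoreProperties(ignoreUnknown = true)` on the DTO); the `Customer` resource and its fields are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TolerantReader {

    // Minimal flat-JSON parser for string-valued fields; illustration only.
    static Map<String, String> parseFlatJson(String json) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = Pattern.compile("\"(\\w+)\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        while (m.find()) {
            fields.put(m.group(1), m.group(2));
        }
        return fields;
    }

    // The consumer's view of the resource: only the fields it uses.
    record Customer(String id, String name) {}

    static Customer toCustomer(Map<String, String> fields) {
        // Unknown keys in the map are simply ignored.
        return new Customer(fields.get("id"), fields.get("name"));
    }

    public static void main(String[] args) {
        // v2 of the producer added "loyaltyTier"; this v1-style consumer
        // still works without a library upgrade or a redeploy.
        String v2Payload = "{\"id\":\"42\",\"name\":\"Ada\",\"loyaltyTier\":\"gold\"}";
        Customer c = toCustomer(parseFlatJson(v2Payload));
        System.out.println(c.id() + " " + c.name()); // prints "42 Ada"
    }
}
```

The point is that the consumer owns its own deserialization contract: the producer evolving its payload does not force a coordinated release.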


Solution 2:

I would say "they" are wrong and you are right. There are several problems with copying and pasting client code:

  • If there is a bug in your client code, you will have to fix the bug in 15 places instead of just 1.
  • It slows things down. You now have to test and maintain multiple copies of the same code.
  • It is common practice to create client libraries and distribute them via a standard dependency manager such as Maven. Amazon does this with the AWS SDK for Java (https://github.com/aws/aws-sdk-java), along with virtually everyone else.

In summary, you are right, and Amazon is the strongest example supporting your position. They do exactly what you are suggesting for their web services, and they are arguably the largest and most influential player in the microservices space.
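To make the "fix the bug in 1 place instead of 15" argument concrete, here is a rough sketch of what a shared client library's entry point might look like (the service name, endpoint, and class names are hypothetical, and the JDK's built-in `HttpClient` is used so the sketch stays dependency-free). A fix like the base-URL normalization below ships once as a new Maven version instead of being patched in every service:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CustomerClient {
    private final String baseUrl;
    private final HttpClient http = HttpClient.newHttpClient();

    public CustomerClient(String baseUrl) {
        // Normalizing the trailing slash here fixes it for every consumer at once.
        this.baseUrl = baseUrl.endsWith("/")
                ? baseUrl.substring(0, baseUrl.length() - 1)
                : baseUrl;
    }

    // Request construction is separated from sending so it can be tested offline.
    HttpRequest getCustomerRequest(String id) {
        return HttpRequest.newBuilder(URI.create(baseUrl + "/customers/" + id))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public String getCustomerJson(String id) throws Exception {
        return http.send(getCustomerRequest(id), HttpResponse.BodyHandlers.ofString())
                .body();
    }
}
```

Each of the 15 services would then declare this artifact as a dependency and pick up fixes by bumping one version number.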

Also, to address the tight-coupling concern raised in the other answer: good APIs are backward compatible, so a change to the API does not require upgrading all the clients, even if they share the same client library.

Solution 3:

I agree with the statements about coupling. I’m forced to use a specific set of Maven dependencies in a reuse scenario, and no one is allowed to update them. The result is that creating new services gets harder because the frameworks are out of date, and so is the documentation.

On the other hand, code reuse can save a lot of time and money, especially for the boilerplate code used in services, provided it is well constructed and properly tested.

I think there is a middle ground here that involves versioning and a certain amount of routine maintenance.
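One sketch of that middle ground: publish the shared client as its own versioned artifact, and let each service pin the version and upgrade on its own schedule (the coordinates below are hypothetical):

```xml
<!-- In each consuming service's pom.xml. The service decides when to
     move from 1.4.2 to a newer line, preserving independent deployability. -->
<dependency>
    <groupId>com.example.customers</groupId>
    <artifactId>customers-client</artifactId>
    <version>1.4.2</version>
</dependency>
```

Routine maintenance then means periodically walking services forward to newer library versions, rather than either freezing dependencies forever or forcing lockstep upgrades.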

