IPCs don’t make good APIs

In embedded systems, we tend to use a lot of threads. Not for computational efficiency, but for coping with a complex environment: threads are useful for interacting with HW parts that operate in parallel, with the different external parties (including the user) we communicate with, and so on.

To coordinate our multiple threads, we need to use a good deal of IPCs[1], e.g. mutex locks, events, semaphores, and shared memory regions.

These IPCs offer tempting division points in our system, and thereby sometimes become the interface between subsystems. However, they are in fact not well-suited to this purpose, for a number of reasons:

  1. Global namespace: All IPCs reside in the global namespace, just like global variables, so they carry the same drawbacks associated with global variables.
  2. Usage style not clear: With an interface function you can do only one thing: call it, with appropriate parameters, and it will give you a clear signal back via its return value. Compare this with a shared memory region: you can write to it, read from it, release it, get another handle to it. Moreover, you’ll probably need a protocol for accessing it: do you need to acquire a mutex lock before reading? And how do you know the result of the operation? (See the sketch after this list.)
  3. Technological dependency: The choice of IPCs is mostly dictated by the current platform/OS. Using an IPC as the API exposes this platform/OS decision outside the component, making it hard to change and causing problems if portability is required.
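
To make point 2 concrete, here is a minimal sketch of the unwritten protocol a direct consumer of a shared memory region has to follow. The names, and the use of POSIX threads, are illustrative assumptions, not part of any concrete system:

```c
#include <pthread.h>

/* Shared state exposed directly as the "interface" (all names hypothetical). */
extern pthread_mutex_t g_sensor_lock;   /* lives in the global namespace (point 1) */
extern int             g_sensor_value;
extern int             g_sensor_valid;  /* non-zero if g_sensor_value is fresh */

int read_sensor_the_hard_way(void)
{
    int value = -1;
    /* The caller must know the unwritten protocol: take the lock,
     * check validity, only then read. Nothing enforces this order. */
    pthread_mutex_lock(&g_sensor_lock);
    if (g_sensor_valid) {
        value = g_sensor_value;
    }
    pthread_mutex_unlock(&g_sensor_lock);
    /* And how is failure signalled? -1? Yet another convention to document. */
    return value;
}
```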

The first solution that comes to mind is to document this as a protocol and require every developer to adhere to it. The smarter solution is to have this mundane work done by a piece of SW, i.e. a thin layer wrapping the IPC protocol and providing an API[2], reducing the need for documentation to a minimum.
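
As a sketch of what such a thin layer might look like (a hypothetical sensor component, again assuming POSIX threads; the names are made up for illustration), the whole access protocol collapses into one function with a clear contract:

```c
/* sensor.h -- the API; no IPC details leak out of it. */
typedef enum { SENSOR_OK, SENSOR_NOT_READY } sensor_status_t;
sensor_status_t sensor_read(int *value_out);

/* sensor.c -- the thin layer; it owns the IPC protocol in one place. */
#include <pthread.h>

static pthread_mutex_t s_lock = PTHREAD_MUTEX_INITIALIZER;
static int s_value;
static int s_valid;

sensor_status_t sensor_read(int *value_out)
{
    sensor_status_t status = SENSOR_NOT_READY;
    pthread_mutex_lock(&s_lock);
    if (s_valid) {
        *value_out = s_value;
        status = SENSOR_OK;
    }
    pthread_mutex_unlock(&s_lock);
    return status;
}
```

Note that the lock and the data are now file-local statics rather than globals (point 1), the calling convention and the result signal are explicit in the signature (point 2), and a port to another OS touches only sensor.c (point 3).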

Such a design will show its immediate benefits first in the testability of the system: the APIs will provide very handy interfaces for test code and stubs/mocks. It will also limit the ripple effect of changes, improve the modifiability of the system, and provide a good basis for applying the decorator and adapter patterns when necessary. Moreover, this wrapping makes it possible to debug/break/trace/assert at a central point and to easily replace the implementation with alternatives.[3]
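
For example (still a hypothetical sketch building on the sensor component above), a test build can link a stub against the same header and exercise the callers deterministically, with no threads or locks involved:

```c
/* sensor_stub.c -- linked instead of sensor.c in test builds. */
#include "sensor.h"

static int s_stub_value;

/* Test code controls what the "sensor" reports. */
void sensor_stub_set(int value) { s_stub_value = value; }

sensor_status_t sensor_read(int *value_out)
{
    *value_out = s_stub_value;  /* deterministic: no locks, no threads */
    return SENSOR_OK;
}
```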

This discussion can be abstracted to a more general level than the technological domain of embedded systems: what are the cases where a division caused by a factor other than the functional distribution of work among the subsystems becomes a functional interface? For example, can a design decision about how a distributed system is deployed dictate the structure of the interface? Should it?

Footnotes
1 Inter-process communication; I’ll use this term to refer to all kinds of synchronization primitives in this post, whether inter-process or intra-process.
2 The word “API” might ring some alarm bells, as it implies long-term commitment and inflexibility. For those, I’d propose an unstable API or just an interface, to make it less assertive.
3 Thanks to Holger Strobel for this contribution.

2 thoughts on “IPCs don’t make good APIs”

  1. What do you think about the Multicore API (using lock-free or wait-free data structures in real-time embedded system APIs, e.g. POSIX)?

    1. Hi Umit,

      I guess you are referring to MCAPI. The IPC mechanisms offer an API themselves, e.g. MCAPI or Microsoft WEC’s synchronization API. We should distinguish these from the internal and external APIs of our own software.

      Regarding MCAPI, there is one aspect special to it: it is intended for computational efficiency. Therefore, in most designs you will not see a system divided along the lines defined by MCAPI primitives. Rather, the parts communicating over MCAPI will be closely knit together to achieve maximum computational efficiency.

      Besides this aspect, the rules are the same. If it really defines a subsystem boundary, we should hide MCAPI’s specifics inside the thin layer that exposes an API independent of the IPC used. In fact, your feedback reminded me of another drawback of using IPCs directly as APIs: the close dependency on the technology used. I’ll add this to the post.

      Thank you for your feedback, you are the first :-)
      Dogan
