The Superfluid Approach: IT ahead of its time


For excellent, business-critical applications on Apache Kafka:
  • Smart, strong, streaming, reactive
  • Robust and future-proof
  • Easy backend development
  • Easy frontend development
  • Easy deployment

  • Here is the knowledge needed to build the future, for:
  • Functional designers/analysts
  • Front-end developers
  • Back-end developers
  • System administrators
  • Purpose: to show what is needed to make modern IT incredibly simple.

    Create the most compact and robust ecosystem for Apache Kafka-based applications.




    Based on industry standards, so incredibly future-proof

    Minimal software layers, minimal dependencies, robust, with the best security possible

    Horizontally (volume) and vertically (complexity) scalable

    Frontend: excellent, simple and robust GUIs

    Interaction: superb communication between frontend and backend

    Backend: simple and strong Apache Kafka streaming applications



    ... leads to minimal design, development and maintenance effort, and to maximum results!

    What a functional designer should learn

    The knowledge you need to design applications on the Superfluid Eco System

    1. Structure your messages


    (the old data modeling)

    Ideally, Kafka messages have a defined structure.
    Today, JSON is the usual format.

    Use JSON Schema; it will make the work of UI designers, developers and maintainers simpler.

    Also study Domain-Driven Design (DDD); it is different from relational modelling.
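
    As an illustration, a minimal message schema might look like the sketch below; the "Order" message and its fields are hypothetical examples, not part of the Superfluid Eco System:

      {
        "$schema": "http://json-schema.org/draft-07/schema#",
        "title": "Order",
        "type": "object",
        "properties": {
          "orderId":  { "type": "string" },
          "customer": { "type": "string" },
          "amount":   { "type": "number" }
        }
      }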


    2. Define your Business Rules


    Using JSON Schema, you can add restrictions and computations to the structure.

    Messages are usually reasonably self-contained (see DDD), so these rules can also be reasonably complete.
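
    Continuing the hypothetical Order schema sketched above, business rules become constraints on the same structure, for example:

      {
        "title": "Order",
        "type": "object",
        "properties": {
          "orderId":  { "type": "string", "pattern": "^ORD-[0-9]{6}$" },
          "amount":   { "type": "number", "minimum": 0 },
          "currency": { "type": "string", "enum": ["EUR", "USD"] }
        },
        "required": ["orderId", "amount", "currency"]
      }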


    3. The result: an object model with behaviour


    A set of clean JSON schemas that may have relations to each other.

    You might add fields to make the application blockchain-ready, or add processing logic/documentation.

    Very important: using CQRS you can link to thousands of applications to represent your data. If your messages are well defined, you have unlimited possibilities here.


    What frontend developers should learn

    The knowledge you need to design UIs for applications on the Superfluid Eco System

    1. Understand the JSON schema


    Because the schemas are the basis of web pages, dialogs, ...


    2. Understand HTML forms based on schemas


    JSON messages are defined in JSON Schema, so a good mapping from forms to JSON is important.
    Use form-serialize (>50,000 downloads/week)

    And use json-editor (>5,000 downloads/week)

    Then generate the needed HTML forms; a sketch of the form-to-JSON mapping follows below.
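
    A minimal sketch of that mapping with form-serialize, assuming a form whose field names mirror the schema properties (the form id and the handler are hypothetical):

      import serialize from 'form-serialize';

      // Turn the submitted form into a JSON object whose keys
      // match the schema's property names.
      const form = document.querySelector('#order-form');
      form.addEventListener('submit', (event) => {
        event.preventDefault();
        const message = serialize(form, { hash: true });
        console.log(message); // e.g. { orderId: "ORD-000001", amount: "12.50" }
      });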


    3. Enrich and test your webapp


    Understand the logic in the message structure (in JSON Schema).

    Use and test ajv (Another JSON Schema Validator, >30,000,000 downloads/week) and check that your form reacts correctly; a sketch follows below.

    And: understand why you do not need a framework!!!
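
    A minimal validation sketch with ajv, reusing a cut-down version of the hypothetical Order schema from the designer section:

      import Ajv from 'ajv';

      // Cut-down version of the hypothetical Order schema.
      const orderSchema = {
        type: 'object',
        properties: { amount: { type: 'number', minimum: 0 } },
        required: ['amount'],
      };

      const ajv = new Ajv();
      const validate = ajv.compile(orderSchema);

      // Validate a message before sending it to the backend,
      // and show any violated rules next to the form fields.
      const ok = validate({ amount: -1 });
      console.log(ok, validate.errors); // false, plus the violated "minimum" rule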


    What backend developers should learn

    To start with: you have to be fluent in Apache Kafka producers, consumers and streaming apps.

    1. The Apache Kafka Backend


    Only basic Java is needed for the Superfluid Eco System!

  • Do not use Spring Kafka!
  • Even Java EE is not needed for Apache Kafka

  • Apache Kafka streaming apps are extremely simple Java apps; see the consumer sketch below
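
    For illustration, a minimal sketch of a plain-Java consumer; it needs only the kafka-clients dependency (the broker address, group id and topic name are made up):

      import java.time.Duration;
      import java.util.List;
      import java.util.Properties;
      import org.apache.kafka.clients.consumer.ConsumerRecord;
      import org.apache.kafka.clients.consumer.ConsumerRecords;
      import org.apache.kafka.clients.consumer.KafkaConsumer;

      public class OrderConsumer {
          public static void main(String[] args) {
              Properties props = new Properties();
              props.put("bootstrap.servers", "localhost:9092");
              props.put("group.id", "orders-app");
              props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
              props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

              // One dependency, no framework: subscribe and poll.
              try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                  consumer.subscribe(List.of("orders"));
                  while (true) {
                      ConsumerRecords<String, String> records =
                          consumer.poll(Duration.ofMillis(500));
                      for (ConsumerRecord<String, String> record : records) {
                          System.out.println(record.key() + " -> " + record.value());
                      }
                  }
              }
          }
      }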


    2. Communication


    Interaction between frontend and backend can be done in two ways:

  • REST services (an API interface in front of Apache Kafka).
  • WebSockets (part of HTML5)
  • We have to add one dependency to our software stack for REST services and WebSockets:

  • Jetty (as Confluent does); see the sketch after this list.
  • Or Undertow (the JBoss kernel).
  • Or embedded Tomcat.
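
    A minimal sketch with embedded Jetty (Jetty 9 Handler API); the port and JSON payload are made up, and a real bridge would produce to or consume from Kafka inside the handler:

      import java.io.IOException;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;
      import org.eclipse.jetty.server.Request;
      import org.eclipse.jetty.server.Server;
      import org.eclipse.jetty.server.handler.AbstractHandler;

      public class RestBridge {
          public static void main(String[] args) throws Exception {
              Server server = new Server(8080);  // one embedded dependency, no app server
              server.setHandler(new AbstractHandler() {
                  @Override
                  public void handle(String target, Request baseRequest,
                                     HttpServletRequest request,
                                     HttpServletResponse response) throws IOException {
                      // A real bridge would talk to Kafka here.
                      response.setContentType("application/json");
                      response.getWriter().println("{\"status\":\"ok\"}");
                      baseRequest.setHandled(true);
                  }
              });
              server.start();
              server.join();
          }
      }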

    3. The frontend


    We prefer to use only standards:
    HTML5, CSS and JavaScript, as defined by MDN.


    NO framework!!! See Adam Bien.
    So no knowledge of Angular, Vue or React is needed; plain browser APIs are enough, as the sketch below shows.
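
    For instance, subscribing to backend events over a WebSocket takes a few lines of plain JavaScript (the endpoint URL and element id are hypothetical):

      // Plain browser APIs, no framework: subscribe to backend events.
      const socket = new WebSocket('ws://localhost:8080/events');

      socket.addEventListener('message', (event) => {
        const message = JSON.parse(event.data);
        document.querySelector('#status').textContent = message.status;
      });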


    What system administrators need to know

    There are several ways to deploy Apache Kafka applications.

  • In your private cloud
  • On AWS
  • On Google Cloud
  • On Azure
  • ... etc

    Because development is done in a professional way, using WebSocket/REST communication as described above, a Kubernetes-style deployment is an easy task; a sketch follows below. We prefer Bitnami Kubernetes (also open source).
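
    A minimal sketch of such a deployment manifest, assuming the streaming app is packaged as a container image (every name and the image below are hypothetical):

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: orders-streaming-app
      spec:
        replicas: 3                    # horizontal scaling, one consumer-group member per pod
        selector:
          matchLabels:
            app: orders-streaming-app
        template:
          metadata:
            labels:
              app: orders-streaming-app
          spec:
            containers:
            - name: app
              image: registry.example.com/orders-streaming-app:1.0  # hypothetical image
              ports:
              - containerPort: 8080    # the WebSocket/REST endpoint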


    How Kafka Academy can help

    We give courses that introduce these concepts: see our courses

    At several points in the software stack, we have made improvements.

  • Some improvements are public: see the GitHub of Kafka.Academy
  • We are very enthusiastic about our new Forms Generator.
  • We have a lot of experience with streaming applications
  • We have a lot of experience with Apache Avro for more compact data
  • We can make your environment blockchain-ready.
  • We can give support on site or remotely.

    Why Apache Kafka-based IT is ahead of its time

    What is different about Apache Kafka


    1. Kafka makes IT Scalable and Available.

      - It is based on logging, which is a breakthrough compared to being based on relational databases.
      - Traditional databases can be scaled in clusters, but there is a Newtonian logical barrier; Kafka doesn't have this barrier.
      By adopting logging as the basis and breaking out of the limits of clusters, a Kafka application is scalable and runs 24/7. See Newton vs Einstein, time in IT.

    2. Kafka breaks restrictions on complexity (vertical scalable).

      Relational databases have limitations in mapping complexity onto the relational model. With Apache Kafka we use Domain-Driven Design, which is a more natural modelling technique and handles complex structures much more easily.

      See Domain-Driven Design. So the Kafka approach is comfortable handling more complex structures.

    3. Kafka is much richer on data.

      Traditionally, data in tables is seen as the foundation of IT. In Kafka, we see events/messages as the foundation. From the events we can derive the tables, but from the tables we cannot recover the events. Tables are in fact a snapshot at a certain moment; from events we can reconstruct that snapshot for any point in the past. So events are more fundamental than tables.

      So by making events the basis, we can bring any moment in the past back to life, as the sketch below illustrates.
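
      A minimal sketch of this idea in plain Java (the Event record and the event log are hypothetical): replaying the whole log derives the table, and replaying only a prefix derives the snapshot as of that moment.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class TableFromEvents {
            // Hypothetical event: an amount booked on an account.
            record Event(String account, double amount) {}

            public static void main(String[] args) {
                List<Event> eventLog = List.of(
                    new Event("A", 100.0), new Event("B", 50.0), new Event("A", -30.0));

                // Replaying the events derives the "table" (current balances);
                // stopping earlier in the log derives a past snapshot.
                Map<String, Double> balances = new HashMap<>();
                for (Event e : eventLog) {
                    balances.merge(e.account(), e.amount(), Double::sum);
                }
                System.out.println(balances); // e.g. {A=70.0, B=50.0}
            }
        }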

    4. Kafka applications are more natural and easier to build.

      In day-to-day life, we are used to a constant stream of facts/events. A stream of events is a natural thing, and we continuously process an endless stream of them. Kafka's streaming applications do just that. Analysts and developers have to digest this approach fully; at first sight it looks more complex, but with some training it eventually becomes easier.

    5. Kafka is easier for (database) administrators.

      Kafka stores events in a reliable, redundant way. So your data is backed up automatically, and if we want, we can restore any past situation. No more problems with backup/restore, usually a heavy burden with 24/7 applications.

    6. Kafka is easier for auditors.

      It is the task of an auditor to check whether an administration follows legal rules. In present-day IT, auditors have a complex task, but logging comes to the rescue. Logging has been done since Babylonian times (clay tablets). In Roman times not everyone could read, so a person had to read the logs aloud (the "auditor").

      With Kafka, we return to the age-old practice and make auditing an easy task.

    7. Kafka applications are easier to protect against fraud.

      Kafka has been used since 2016 by many banks for fraud detection in payment streams. If we also base our internal IT on Kafka, we can do this fraud detection more thoroughly. Kafka is based on a chain of events, and we can easily add blockchain logic on top of these streams.

    8. Security.

      Apache Kafka is a reasonably simple software layer on top of an operating system. The streaming apps and their UIs can also be kept very simple, with no or very few dependencies (see www.kafka.academy), and are therefore much less vulnerable than other software stacks.

    9. Kafka connects to nearly everything.

      CQRS is a driving methodology in Kafka applications. Applications can subscribe to (parts of) the event source and use third-party tools to present or digest the data: financial reports, letters to customers, and so on.

      See CQRS