When a program wants to communicate over the internet, it opens a "socket", which is like a door that others can knock on to talk to the program.
When someone knocks on the door (a client tries to connect), the program decides whether to accept the connection and let them in.
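The door analogy maps directly onto the socket API. Below is a minimal Python sketch: `bind` and `listen` open the door, `accept` answers a knock, and the accepted connection is a separate socket dedicated to that visitor. The address, port choice, and one-message handler are illustrative, not part of any real service.

```python
import socket
import threading

def serve_one(srv):
    # accept() blocks until someone "knocks" (connects), then returns
    # a brand-new socket dedicated to that client.
    conn, addr = srv.accept()
    with conn:
        data = conn.recv(1024)          # read what the visitor sent
        conn.sendall(b"hello " + data)  # reply on the same connection

# Open the "door": bind to a local address; port 0 lets the OS pick one.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]

t = threading.Thread(target=serve_one, args=(srv,))
t.start()

# A client "knocks" by connecting to the same address and port.
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"world")
    reply = cli.recv(1024)

t.join()
srv.close()
print(reply.decode())  # hello world
```

Note that the listening socket and the per-connection socket are distinct objects: the listening socket only answers knocks, while each accepted socket carries one conversation.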
Once the connection is established, data starts flowing through it. This data isn't neatly packaged like email; it's more like a continuous stream of bytes. The reader's job is to make sense of this stream by organizing it into meaningful pieces, understanding where one message ends and another starts.
TCP is the underlying protocol that ensures data gets from one place to another reliably, but it does not care about the structure of the data itself. It's like a delivery truck that transports packages: the reader's job is to unpack the packages and make sense of them, for example by identifying the individual requests inside the stream.
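Because TCP delivers an unstructured byte stream, the reader needs a framing rule to find message boundaries. A common technique, sketched below, is length-prefixing: every message is preceded by its 4-byte length, so the reader can carve messages out of the stream no matter how the bytes were grouped in transit. The format here is an illustration, not a standard wire protocol.

```python
import struct

def frame(msg: bytes) -> bytes:
    # Prefix each message with its length as a 4-byte big-endian integer.
    return struct.pack(">I", len(msg)) + msg

def parse(stream: bytes):
    # Walk the continuous byte stream and recover message boundaries.
    msgs, i = [], 0
    while i + 4 <= len(stream):
        (n,) = struct.unpack_from(">I", stream, i)
        if i + 4 + n > len(stream):
            break  # incomplete message: wait for more bytes to arrive
        msgs.append(stream[i + 4 : i + 4 + n])
        i += 4 + n
    return msgs

# TCP may deliver several writes in a single read; the reader still
# recovers each message from the concatenated bytes.
stream = frame(b"GET /a") + frame(b"GET /b")
print(parse(stream))  # [b'GET /a', b'GET /b']
```

Other framing rules (delimiters such as `\r\n\r\n` in HTTP/1.x, or fixed-size records) solve the same problem: turning a stream of bytes back into discrete messages.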
In this single-threaded architecture, one thread handles all aspects of connection management: listening for incoming connections, accepting them, and reading and processing the data streams. This approach is simple and straightforward, but it may struggle under high load because all of that work competes for the same thread.
Example: Node.js uses a single-threaded event loop model, in which one thread manages all incoming requests and responses.
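The single-threaded model can be sketched with an event loop built on `selectors`: one thread multiplexes the listening socket and every client connection, accepting and reading as each becomes ready, much as Node.js's event loop does. The echo-to-uppercase handler and the manual `loop_once` stepping are illustrative simplifications.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

def loop_once():
    # One pass of the event loop: the same single thread both accepts
    # new connections and reads data from established ones.
    for key, _ in sel.select(timeout=1):
        sock = key.fileobj
        if sock is srv:
            conn, _ = srv.accept()          # role: acceptor
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = sock.recv(1024)          # role: reader
            if data:
                sock.sendall(data.upper())  # role: processor
            else:
                sel.unregister(sock)
                sock.close()

port = srv.getsockname()[1]
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"ping")
loop_once()  # this pass accepts the connection
loop_once()  # this pass reads the request and replies
reply = cli.recv(1024)
cli.close()
sel.close()
srv.close()
print(reply)  # b'PING'
```

Everything happens on one thread, which is why a single slow handler stalls every other connection in this model.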
The Multiple Threads Single Acceptor Architecture is a design pattern used in building performant backend applications that leverage multi-threading to take advantage of all CPU cores. In this architecture, a single listener thread is responsible for accepting connections, but each accepted connection is handed over to a separate worker thread for processing.
Overall: The Multiple Threads Single Acceptor Architecture offers a balance between performance and complexity, allowing backend applications to efficiently utilize multi-core CPUs while managing incoming connections effectively.
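A minimal sketch of the single-acceptor pattern in Python: one listener thread sits in the `accept` loop and does nothing else, handing each accepted connection to a fresh worker thread that reads and processes its stream. The fixed connection count and echo handler are simplifications; a real server would loop indefinitely and likely use a thread pool.

```python
import socket
import threading

def handle(conn):
    # Worker thread: owns exactly one connection and processes its stream.
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo:" + data)

def acceptor(srv, n):
    # The single listener thread only accepts; it never reads.
    for _ in range(n):
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,)).start()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]

acc = threading.Thread(target=acceptor, args=(srv, 2))
acc.start()

replies = []
for msg in (b"a", b"b"):
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(msg)
        replies.append(cli.recv(1024))

acc.join()
srv.close()
print(replies)  # [b'echo:a', b'echo:b']
```

The trade-off is visible in the code: workers run in parallel across cores, but every connection still funnels through the one acceptor, which can become a bottleneck at very high connection rates.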
In the Multiple Threads Multiple Acceptors Architecture, a single listener thread is responsible for creating the socket and placing it in shared memory accessible to other threads. Multiple worker threads are then created, each of which calls accept on the shared socket object to accept incoming connections. In this model, each worker thread takes on the dual role of acceptor and reader, handling both the acceptance of connections and the processing of data.
Example: NGINX, a widely used web server and reverse proxy server, used this architecture by default prior to version 1.9.1. In NGINX, multiple worker processes are created, each of which handles its own set of connections independently, improving concurrency and performance.
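The multiple-acceptors pattern can be sketched with threads sharing one listening socket: every worker blocks in `accept` on the same socket, and the kernel hands each incoming connection to exactly one of them. (NGINX uses worker processes rather than threads, and older versions serialized `accept` with a mutex; this thread-based version just illustrates the shared-socket idea.)

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]

def worker():
    # Every worker calls accept() on the *shared* listening socket;
    # the kernel delivers each new connection to exactly one worker,
    # which then also acts as the reader for that connection.
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"served:" + data)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

replies = []
for msg in (b"x", b"y"):
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(msg)
        replies.append(cli.recv(1024))

for t in threads:
    t.join()
srv.close()
print(sorted(replies))
```

Which worker gets which connection is up to the kernel, which is both the appeal (no hand-off code) and a weakness of this model: load can end up unevenly distributed across workers.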
In this architecture, used by systems like RAMCloud, connections and work are divided differently. Instead of handing whole connections to different workers, a single "listener" thread not only listens for connections but also reads the streams and splits them into individual messages. Once the messages are parsed, they are dispatched to worker threads for processing.
Because work is balanced at the level of individual messages rather than whole connections, no worker sits idle while another is overloaded. However, the listener, which reads and parses every message, can itself become the bottleneck; techniques like optimized message handling can mitigate this.
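The dispatch step can be sketched with a shared queue: the listener parses a stream into messages (newline-delimited here, purely for illustration) and pushes each one onto a task queue that any idle worker can pull from. This is a simplified model of the pattern, not RAMCloud's actual implementation.

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    # Workers pull individual *messages*, not whole connections, so load
    # is balanced per message across all workers.
    while True:
        msg = tasks.get()
        if msg is None:
            break  # sentinel: shut this worker down
        results.put(msg.upper())  # stand-in for real processing

def listener(stream: bytes):
    # The listener both reads the raw stream and splits it into
    # messages, then dispatches each one to the shared queue.
    for msg in stream.split(b"\n"):
        if msg:
            tasks.put(msg)

workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()

listener(b"req1\nreq2\nreq3\n")
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()

print(sorted(results.queue))  # [b'REQ1', b'REQ2', b'REQ3']
```

The queue decouples parsing from processing, but every byte still flows through the listener, which is exactly where this architecture can saturate.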
Peer-to-Peer (P2P) architecture allows computers to communicate and share resources directly with each other without relying on a central server. Each computer, or peer, can act as both a client and a server, enabling them to exchange data and services directly.
Example: File Sharing & Streaming Media
Serverless architecture is a cloud computing model where cloud providers manage the infrastructure, allowing developers to focus on writing code without worrying about server management. In this architecture, applications are broken down into smaller functions that run in response to events and are automatically scaled and managed by the cloud provider. This model offers benefits such as reduced operational costs, improved scalability, and faster time to market.
Example: AWS Lambda, Azure Functions, Google Cloud Functions
Microservices architecture is an approach to building applications as a set of loosely coupled, independently deployable services. Each service is responsible for a specific business function and communicates with other services through APIs. This architecture promotes scalability, flexibility, and continuous delivery, enabling teams to independently develop, deploy, and scale services.
Example: Netflix, Amazon, Uber, Spotify