A distributed system is a collection of autonomous computers connected through a network and distribution middleware, which enables them to coordinate their activities and share the system's resources. The computers in a distributed system may vary in size and function, ranging from workstations to mainframes.
When talking about distributed systems, we consider the following core concepts:
Transparency: This refers to making the system appear as one cohesive unit, hiding the complexity of its distributed nature from the user.
- Location transparency: A resource’s physical location in the network is irrelevant to the user; users can interact with a remote resource as though it were local to their own system (see the sketch after this list).
- Migration transparency: This allows resources or computation to move within a system without affecting other components or operations. This is essential in load balancing or system maintenance.
- Replication transparency: This hides the fact that resources or services are replicated to provide fault tolerance and improve performance; the user sees a single logical resource.
- Concurrency transparency: This hides the fact that a resource may be used by several processes concurrently, while guaranteeing consistency.
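To make location transparency concrete, here is a minimal Python sketch (all class names are illustrative, not from any real middleware): the caller uses the same interface whether the resource lives in-process or behind a simulated network hop.

```python
import time
from abc import ABC, abstractmethod


class Resource(ABC):
    """Uniform interface: callers never see where the data lives."""

    @abstractmethod
    def read(self, key: str) -> str: ...


class LocalResource(Resource):
    """Data held in the caller's own process."""

    def __init__(self, data):
        self._data = data

    def read(self, key: str) -> str:
        return self._data[key]


class RemoteResourceProxy(Resource):
    """Stands in for a resource on another machine; the network
    round-trip is simulated here with a short sleep."""

    def __init__(self, data, latency=0.05):
        self._data = data          # pretend this lives on a remote node
        self._latency = latency

    def read(self, key: str) -> str:
        time.sleep(self._latency)  # simulated network hop
        return self._data[key]


def show(resource: Resource) -> None:
    # Identical call site for both cases: that is the transparency.
    print(resource.read("config"))


show(LocalResource({"config": "local value"}))
show(RemoteResourceProxy({"config": "remote value"}))
```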
Scalability: This is the ability of a system to handle an increasing amount of work by adding resources to the system.
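As a rough illustration of scaling out, the sketch below (names are made up) spreads work across workers by hashing keys, so adding a worker raises total capacity. Note that naive modulo sharding remaps most keys when the worker count changes, a problem the consistent-hashing scheme in the DHT section below is designed to avoid.

```python
import hashlib


def shard_for(key: str, num_workers: int) -> int:
    """Map a key to one of num_workers shards via a stable hash."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_workers


keys = [f"user-{i}" for i in range(10_000)]

for n in (2, 3):  # scale out from 2 workers to 3
    load = [0] * n
    for key in keys:
        load[shard_for(key, n)] += 1
    print(f"{n} workers -> per-worker load {load}")
```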
Concurrency: This involves several computations happening within a system at the same time, which poses challenges for data consistency and integrity; these are addressed with synchronization mechanisms like locks, semaphores, and monitors.
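The following toy demonstrates why such mechanisms matter: two threads increment a shared counter, and only the lock-protected version is guaranteed to end at the correct total (the unprotected one may lose updates, depending on the interpreter and timing).

```python
import threading

ITERATIONS = 100_000


def run(use_lock: bool) -> int:
    counter = 0
    lock = threading.Lock()

    def worker() -> None:
        nonlocal counter
        for _ in range(ITERATIONS):
            if use_lock:
                with lock:       # critical section around read-modify-write
                    counter += 1
            else:
                counter += 1     # unprotected read-modify-write

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter


print("without lock:", run(False))  # may fall short of 200000
print("with lock:   ", run(True))   # always 200000
```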
Fault Tolerance: This refers to the ability of a system to continue functioning correctly (possibly at a reduced level) despite the failure of some of its components. Fault tolerance can be achieved through techniques like redundancy, checkpointing, and heartbeating.
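Heartbeating, for example, can be sketched as follows (a toy failure detector, not a production protocol): each node periodically reports in, and the monitor suspects any node that has been silent longer than a timeout.

```python
import time

TIMEOUT = 2.0  # seconds of silence before a node is suspected


class HeartbeatMonitor:
    def __init__(self):
        self._last_seen = {}  # node id -> time of last heartbeat

    def heartbeat(self, node_id: str) -> None:
        """Called by each live node (over the network, in a real system)."""
        self._last_seen[node_id] = time.monotonic()

    def alive_nodes(self) -> set:
        now = time.monotonic()
        return {n for n, t in self._last_seen.items() if now - t < TIMEOUT}


monitor = HeartbeatMonitor()
monitor.heartbeat("node-a")
monitor.heartbeat("node-b")
time.sleep(2.5)               # node-a goes silent...
monitor.heartbeat("node-b")   # ...while node-b keeps beating
print(monitor.alive_nodes())  # {'node-b'}
```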
CAP Theorem: Proposed by Eric Brewer, this theorem states that it is impossible for a distributed data store to simultaneously provide more than two of the following three guarantees: consistency (every read sees the most recent write), availability (every request receives a response), and partition tolerance (the system keeps operating despite dropped or delayed messages between nodes).
Data Consistency: This means that all nodes in the system appear to be working with the same data. Different consistency models exist, such as eventual consistency (updates propagate asynchronously, so some nodes may serve stale data for a period) and strong consistency (every read reflects the most recent write, as though all nodes were updated simultaneously).
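One common way these models are made tunable is quorum replication, as in Dynamo-style stores. The sketch below (the parameter names N, W, R are conventional; everything else is illustrative) shows the key invariant: with N replicas, writing to W and reading from R yields strong consistency whenever R + W > N, because every read quorum must overlap every write quorum; choosing smaller W or R trades that guarantee for latency, giving eventual consistency.

```python
N, W, R = 3, 2, 2  # R + W > N: read and write quorums must intersect

replicas = [dict() for _ in range(N)]  # each maps key -> (version, value)


def write(key: str, value: str, version: int) -> None:
    for replica in replicas[:W]:       # suppose these W replicas ack the write
        replica[key] = (version, value)


def read(key: str) -> str:
    # Read a *different* set of R replicas; R + W > N forces an overlap,
    # so at least one answer carries the latest version.
    answers = [r[key] for r in replicas[-R:] if key in r]
    version, value = max(answers)      # highest version wins
    return value


write("x", "v1", version=1)
write("x", "v2", version=2)
print(read("x"))  # 'v2'
```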
Consensus Algorithms: These algorithms help networked nodes agree on a single data value. They are crucial in maintaining consistency across distributed systems. Well-known consensus algorithms include Paxos, Raft, and Zab.
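Real consensus protocols are subtle (they must handle leader failure, message loss, and competing proposals), so the fragment below is only the quorum intuition they share, not Paxos or Raft: a value counts as decided once a majority of nodes accept it, and because any two majorities intersect, two different values can never both be decided.

```python
from collections import Counter

NODES = 5
MAJORITY = NODES // 2 + 1  # 3 of 5


def decide(votes):
    """Return the decided value if some value holds a majority, else None."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count >= MAJORITY else None


print(decide(["A", "A", "B", "A", "B"]))  # 'A': 3 of 5 accepted it
print(decide(["A", "B", "A", "B", "C"]))  # None: no majority yet
```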
Synchronization: In distributed systems, it’s crucial to manage the order of operations and to coordinate system processes. Synchronization can be achieved through various algorithms such as Lamport’s timestamps and vector clocks, or mutual exclusion algorithms like the Ricart–Agrawala algorithm or Maekawa’s algorithm.
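Lamport’s timestamps, for instance, take only a few lines: each process increments its counter on every local event and message send, and on receipt sets the counter to one past the maximum of its own value and the timestamp carried by the message, so causally related events always get increasing timestamps.

```python
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self) -> int:
        """Local event or message send: advance the clock."""
        self.time += 1
        return self.time

    def receive(self, msg_time: int) -> int:
        """Message receipt: jump past the sender's timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time


p1, p2 = LamportClock(), LamportClock()

t_send = p1.tick()           # p1 sends a message stamped 1
p2.tick()                    # unrelated local event on p2
t_recv = p2.receive(t_send)
print(t_send, t_recv)        # 1 2: the send is ordered before the receive
```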
Distributed Hash Table (DHT): DHTs provide a lookup service similar to a hash table – key-value pairs are stored across the participating nodes, and any node can efficiently retrieve the value associated with a given key. Popular DHT protocols include Chord, Pastry, and Kademlia.
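A Chord-style lookup can be sketched with consistent hashing (illustrative only; real Chord uses 160-bit SHA-1 IDs and finger tables for O(log n) routing, while this version simply scans a sorted ring): node IDs and keys hash onto the same ring, and each key belongs to its successor, the first node clockwise from the key’s position.

```python
import bisect
import hashlib

RING_BITS = 16  # a tiny ring for readability


def ring_hash(name: str) -> int:
    digest = hashlib.sha1(name.encode()).hexdigest()
    return int(digest, 16) % (2 ** RING_BITS)


nodes = sorted(ring_hash(f"node-{i}") for i in range(4))


def successor(key: str) -> int:
    """The node responsible for key: first node ID at or after its hash."""
    h = ring_hash(key)
    i = bisect.bisect_left(nodes, h)
    return nodes[i % len(nodes)]  # wrap around the ring


for k in ("alice", "bob", "carol"):
    print(k, "->", successor(k))
```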