Introduction

A load balancer is a component responsible for distributing client requests, whether that client is a web app, a desktop app, a mobile app, or even a GPS sensor or a camera. Whatever the client is, the load gets distributed across processing managers.

A processing manager is any piece of software that manages the parts of your code that do the actual processing. If the client sends images, you have image processing; likewise video processing, text processing, numerical data processing, and so on.

flowchart LR
    subgraph Clients
      A1[🌐 Web Client]
      A2[📱 Mobile/Desktop App]
      A3[📡 GPS Sensor]
      A4[🎥 Camera]
    end

    LB[🧰 Load Balancer]

    subgraph Processing Managers
      PM1[🖼️ Image Processing]
      PM2[🎞️ Video Processing]
      PM3[📝 Text Processing]
      PM4[🔢 Numerical Processing]
    end

    A1 --> LB
    A2 --> LB
    A3 --> LB
    A4 --> LB

    LB -->|Distribute Requests| PM1
    LB --> PM2
    LB --> PM3
    LB --> PM4
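
To make the diagram above concrete, here is a minimal Python sketch of the same idea: a load balancer function that receives requests from different kinds of clients and hands each one to the processing manager that handles that kind of payload. The request shape ("kind", "payload") and the manager names are illustrative assumptions, not a real API.

# A minimal sketch of the diagram above: clients -> load balancer -> processing managers.
# The request shape and the manager names are hypothetical, chosen only for illustration.

def image_manager(payload):
    return f"image processed: {payload}"

def text_manager(payload):
    return f"text processed: {payload}"

def numeric_manager(payload):
    return f"numbers processed: {payload}"

# The load balancer hands each request to the manager responsible for its kind of data.
MANAGERS = {
    "image": image_manager,
    "text": text_manager,
    "numeric": numeric_manager,
}

def load_balancer(request):
    manager = MANAGERS[request["kind"]]
    return manager(request["payload"])

# Requests coming from a web client, a mobile app, a GPS sensor, a camera...
requests = [
    {"kind": "image", "payload": "photo.jpg"},
    {"kind": "text", "payload": "hello"},
    {"kind": "numeric", "payload": [1, 2, 3]},
]

for r in requests:
    print(load_balancer(r))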

Sequence Diagram

Load Balancer - Sequence Diagram

Why do we use Load Balancers?

The answer is simple: the goal is to distribute the load so that no processor is worked harder than the others. In other words, I can scale, for example by adding servers and spreading the work across them. But is scaling horizontally, i.e. adding servers, the only goal?

The answer is no, because even if you scale vertically (increasing the capabilities of the same machine), you can, for example, use multiprocessing: spawn more than one process and spread the work across them.
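
As a minimal sketch of that vertical-scaling idea, assuming a CPU-bound work function (the function and the task list here are made up for illustration), Python's multiprocessing.Pool can spread tasks across several processes on the same machine:

import multiprocessing as mp

def work(task):
    # Placeholder for a CPU-bound job (image resize, parsing, number crunching...).
    return task * task

if __name__ == "__main__":
    tasks = range(1, 11)
    # A pool of 4 worker processes on the same machine; the pool spreads tasks among them.
    with mp.Pool(processes=4) as pool:
        results = pool.map(work, tasks)
    print(results)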


🧭 Distribution Diagrams

What are the ways I can distribute the load?

  • Evenly - using Round Robin
  • Least Connection - (discussed later)
  • Workers inside the same machine - the same idea as distributing the load across multiple servers

Round Robin (Even Distribution)

sequenceDiagram
  participant C as Client
  participant LB as Load Balancer
  participant S1 as Server 1
  participant S2 as Server 2

  C->>LB: task 1
  LB->>S1: Send task 1
  C->>LB: task 2
  LB->>S2: Send task 2
  C->>LB: task 3
  LB->>S1: Send task 3
  C->>LB: task 4
  LB->>S2: Send task 4

Round Robin (Even Distribution)

  1. task 1 goes to server 1
  2. task 2 goes to server 2
  3. task 3 goes to server 1
  4. task 4 goes to server 2. In general, with two servers, every task whose number is a multiple of 2 goes to server 2 and the rest go to server 1; you can work out the pattern yourself for the case of three servers.

Code (Round Robin by Modulus)

class RoundRobin:
    def __init__(self, servers):
        self.servers = servers
        self.n = len(servers)
        self.counter = 0

    def next_server(self):
        server = self.servers[self.counter % self.n]
        self.counter += 1
        return server

# Example:
rr = RoundRobin(["server1", "server2", "server3"])
for task_id in range(1, 10):
    target = rr.next_server()
    print(f"task {task_id} -> {target}")

A quick note: in the code we used the modulus operator.

Quite simply, because of how the modulus behaves. Take 3 as an example:

  • 1 % 3 = 1, because 1 is less than 3
  • 2 % 3 = 2, also less than 3
  • 3 % 3 = 0, because they are equal
  • 4 % 3 = 1, 4 is bigger than 3 so it goes back to the difference, 1
  • 5 % 3 = 2, again back to the difference, 2
  • task 6 is a multiple of 3, which means we have completed the circle: counting from 1 it lands on the last server (the third), and counting from zero a multiple goes back to zero, simply because we reached the end and start again from the beginning of the circle (we make a full turn 😄)
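
A quick run that prints this wrap-around behaviour, so you can see the circle for yourself:

# The remainder cycles 1, 2, 0, 1, 2, 0, ... so with 3 servers the tasks go round in a circle.
for task_id in range(1, 7):
    print(f"{task_id} % 3 = {task_id % 3}")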


Least connection

We will discuss this algorithm in a separate article of its own.
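
Until then, here is a minimal sketch of the idea only, assuming we can track how many active connections each server currently holds (the counts below are hypothetical):

# Least Connection in a nutshell: send the next request to the server
# that currently has the fewest active connections.
active_connections = {"server1": 5, "server2": 2, "server3": 7}  # hypothetical counts

def least_connection(conns):
    return min(conns, key=conns.get)

target = least_connection(active_connections)
active_connections[target] += 1  # the chosen server now holds one more connection
print(target)  # -> server2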


Workers inside the same machine

It comes back to the same idea as the servers, except that the "servers" are now worker processes inside one machine.


🧪 Sample Code

That is, inside the same machine, using multiprocessing + Queue.

import multiprocessing as mp
import queue
import time

# Example processing functions
def image_processing(task):
    time.sleep(0.1)
    return f"image:{task}"

def text_processing(task):
    time.sleep(0.05)
    return f"text:{task}"

WORKERS = [image_processing, text_processing]

def worker_loop(worker_fn, in_q, out_q):
    while True:
        try:
            task = in_q.get(timeout=1)
        except queue.Empty:
            break
        try:
            result = worker_fn(task)
            out_q.put((worker_fn.__name__, task, result))
        finally:
            in_q.task_done()

if __name__ == "__main__":
    in_q, out_q = mp.JoinableQueue(), mp.Queue()

    # enqueue tasks
    for t in range(1, 11):
        in_q.put(f"task-{t}")

    # round-robin assign processes to different worker functions
    procs = []
    for i in range(4):
        fn = WORKERS[i % len(WORKERS)]
        p = mp.Process(target=worker_loop, args=(fn, in_q, out_q))
        p.start()
        procs.append(p)

    in_q.join()

    # collect results
    while not out_q.empty():
        print(out_q.get())

    for p in procs:
        p.join()

6. Examples of already implemented load balancers:

Nginx

# /etc/nginx/conf.d/app.conf
upstream app_backends {
    # Round-robin by default
    server app1:8080;
    server app2:8080;
    # For least connections:
    # least_conn;
}

server {
    listen 80;
    server_name _;

    location /health {
        return 200 'OK';
        add_header Content-Type text/plain;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://app_backends;
    }
}

Dockerfile (nginx)

FROM nginx:alpine
COPY ./nginx/app.conf /etc/nginx/conf.d/app.conf

Sample app (Python FastAPI) — two replicas

# app/main.py
from fastapi import FastAPI
import socket

app = FastAPI()

@app.get("/")
def root():
    return {"host": socket.gethostname(), "msg": "hello from app"}

@app.get("/health")
def health():
    return {"status": "ok"}

docker-compose.yml

version: "3.9"
services:
  nginx:
    build: ./nginx
    ports:
      - "8080:80"
    depends_on:
      - app1
      - app2

  app1:
    build: ./app
    command: uvicorn main:app --host 0.0.0.0 --port 8080

  app2:
    build: ./app
    command: uvicorn main:app --host 0.0.0.0 --port 8080

Dockerfile (app)

FROM python:3.11-slim
WORKDIR /app
RUN pip install fastapi uvicorn
COPY ./app /app
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]

To run:

docker compose up --build
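
Once the stack is up, one way to see the round robin in action (assuming it is exposed on localhost:8080 as in the compose file above) is to hit the root endpoint a few times and watch the "host" field alternate between the two app containers; a small Python sketch:

# Call the load-balanced endpoint repeatedly; the "host" field should alternate
# between app1 and app2 (assumes the stack above is running on localhost:8080).
import json
import urllib.request

for _ in range(4):
    with urllib.request.urlopen("http://localhost:8080/") as resp:
        print(json.loads(resp.read()))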

Code (Round Robin as a plain function)

# A simple Round Robin example as a standalone function

def assign(tasks, servers):
    n = len(servers)
    for i, task in enumerate(tasks):
        yield task, servers[i % n]

# demo
for t, s in assign(range(1, 7), ["s1", "s2", "s3"]):
    print(f"task {t} -> {s}")