Web Infrastructural Design
OSI - TCP/IP - HTTPS - DNS-record-types - downtime - High-availability-cluster - load-balancers - SPF
Network basics:
A network protocol is a set of rules and conventions that govern how data is transmitted, received, and processed over a computer network.
In the OSI (Open Systems Interconnection) model:
The physical layer is the lowest and foundational layer. It deals with the actual physical transmission of data over the physical medium, such as cables, wires, or radio signals. This layer is responsible for translating the digital bits into a format that can be transmitted and received over the physical medium.
Data-link layer in a simple way!
Think of the data-link layer as a translator that helps devices talk to each other over a local network, like a bunch of friends chatting in a room. Here's how it works:
1. **Framing:** Imagine you're writing a letter to your friend. You put your thoughts on paper and start with "Dear Friend" and end with "Sincerely, You." Similarly, in the data-link layer, your data is divided into small "frames." Each frame has a starting point and an ending point, making it easy for the receiving device to understand where the data begins and ends.
A frame is a fundamental unit of data transmission in networking. It's like a package that holds your data, along with important information needed for proper delivery. Imagine sending a letter by mail: you put your message in an envelope, write the recipient's address, and include a return address. In networking, a frame serves a similar purpose.
2. **MAC Addresses:** Just like each friend has a unique name, every device on a network has a unique identifier called a MAC address. It's like a digital nameplate. When you want to send a message to a specific friend, you write their name on the envelope. In the data-link layer, each frame is "addressed" to a device using its MAC address.
3. **Error Detection:** Sometimes letters get smudged or words get mixed up. To make sure your message is received correctly, you might add a simple code at the end of your letter. If your friend finds the code doesn't match, they'll know something went wrong during delivery. In the data-link layer, a similar process happens to detect if any bits in the frame got corrupted during transmission.
4. **Media Access Control:** Imagine if all your friends talked at the same time in the room—it would be chaos! To avoid this, you might use a talking stick to take turns speaking. In the data-link layer, devices use rules to take turns sending data over the network. This prevents collisions (when two devices talk at the same time) and ensures everyone gets a chance to "speak."
5. **Switches:** In a room full of friends, you might have a friend who helps relay messages. If you want to send a message to someone across the room, you tell your relay friend, and they pass it along. In the data-link layer, switches do something similar. They forward frames to the right device, making sure data gets to where it needs to go.
In simple terms, the data-link layer helps devices on a local network communicate effectively by creating frames, giving each device a unique address, checking for errors, managing the order of conversations, and using switches to send messages to the right places. It's like the friend who makes sure everyone gets a chance to talk and understand each other at a party!
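The framing and error-detection ideas above can be sketched in a few lines of Python. This is an illustrative toy, not a real Ethernet implementation: the MAC addresses and payload are made up, and CRC32 stands in for the real frame check sequence.

```python
# Toy data-link frame: destination MAC + source MAC + payload + CRC32.
import zlib

def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Prepend addressing, append a checksum so the receiver can detect corruption."""
    body = dst_mac + src_mac + payload
    crc = zlib.crc32(body).to_bytes(4, "big")
    return body + crc

def check_frame(frame: bytes) -> bool:
    """Recompute the checksum; a mismatch means the frame was damaged in transit."""
    body, crc = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == crc

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"hello")
print(check_frame(frame))                      # True: frame arrived intact
corrupted = frame[:-5] + b"X" + frame[-4:]     # damage one payload byte
print(check_frame(corrupted))                  # False: error detected
```

Real NICs do this check in hardware, but the principle is the same: the sender adds a checksum, and the receiver silently drops any frame whose checksum doesn't match.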
Network layer: a road trip
Imagine you're planning a road trip with your friends. You need to figure out the best routes, avoid traffic jams, and make sure everyone reaches the right destination. The network layer does something similar for data as it travels across different networks.
1. **Routing:** Just like you pick the best roads for your trip, the network layer decides the best path for your data to travel from your device to its destination. It uses a set of rules and algorithms to choose the fastest and most efficient route.
2. **IP Addresses:** Similar to how every house has a unique address, every device on a network has a unique identifier called an IP address. When you're planning your road trip, you use GPS coordinates or an address to find your destination. In the network layer, your data is "addressed" using IP addresses so routers can guide it to the right place.
3. **Packetization:** On your road trip, you might pack your stuff in separate bags to make carrying easier. In the network layer, your data is divided into small "packets." Each packet has a label saying where it's from, where it's going, and its position in the sequence.
4. **Fragmentation and Reassembly:** Sometimes the road isn't wide enough for your big suitcase. You might break it into smaller pieces and reassemble it later. In the network layer, if your data is too big for a network's capacity, it's split into smaller pieces (fragments) and then put back together at the destination.
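Fragmentation and reassembly can be sketched directly. The 8-byte "MTU" below is artificially small just to force fragmentation; real links use values like 1500 bytes.

```python
# Toy fragmentation: split a payload to fit a link's maximum size (MTU),
# tag each piece with its byte offset, then reassemble at the destination.
def fragment(payload: bytes, mtu: int):
    return [(offset, payload[offset:offset + mtu])
            for offset in range(0, len(payload), mtu)]

def reassemble(fragments):
    # Fragments may arrive out of order; the offsets restore the sequence.
    return b"".join(chunk for _, chunk in sorted(fragments))

frags = fragment(b"a payload too big for one frame", mtu=8)
frags.reverse()  # simulate out-of-order arrival
print(reassemble(frags))  # b'a payload too big for one frame'
```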
Transport layer: delivery service
The transport layer divides data into smaller units called segments and checks them for errors. It provides flow control, preventing faster hosts from overwhelming slower ones. Essentially, this layer ensures complete message delivery from end to end: it verifies successful data transmission and re-sends data in case of errors.
The Session layer (conversation manager):
The fifth layer of the OSI model. Its primary role is to establish, manage, and terminate communication sessions between two devices or systems. A session can be thought of as a logical connection between two endpoints. This layer ensures the orderly and controlled exchange of data between these endpoints.
The Presentation layer
Translates the data received from the Application layer into a format suitable for transmission over the network. It handles data compression, encryption, and character set conversion to ensure that data is properly understood by both the sender and receiver.
Data Encryption and Decryption: This layer can apply encryption to data before it's transmitted and decrypt it upon reception, ensuring the confidentiality of the information.
Data Compression: The Presentation layer can compress data to reduce the amount of information that needs to be transmitted, improving efficiency.
The Application layer is the seventh and topmost layer of the OSI model. It's the layer that interacts directly with end-user applications and provides a platform for communication services.
An IP address, or Internet Protocol address:
A numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. In simpler terms, it's like a unique address for a device on the internet or a local network, similar to how a street address identifies the location of a physical building.
IP addresses are used to identify and route data packets between devices across networks. There are two main types of IP addresses:
IPv4 (Internet Protocol version 4): This is the older and more commonly used version of IP addresses. IPv4 addresses are composed of four sets of numbers separated by periods, such as "192.168.1.1". Each set can have a value from 0 to 255, resulting in a total of around 4.3 billion unique addresses. However, due to the rapid growth of the internet, the availability of these addresses has become limited.
IPv6 (Internet Protocol version 6): This is the newer version of IP addresses designed to address the limitations of IPv4. IPv6 addresses are longer and are composed of groups of hexadecimal digits separated by colons, such as "2001:0db8:85a3:0000:0000:8a2e:0370:7334". IPv6 provides a significantly larger number of unique addresses, which helps accommodate the expanding number of devices connected to the internet.
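Python's standard `ipaddress` module can parse and inspect both versions. The addresses below are the same examples used in the text.

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.1.1")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(v4.version)       # 4
print(v6.version)       # 6
print(v4.is_private)    # True: 192.168.0.0/16 is a private range
print(v6.compressed)    # 2001:db8:85a3::8a2e:370:7334 (shorthand form)

# IPv4 offers about 4.3 billion addresses (2**32); IPv6 offers 2**128.
print(2 ** 32)          # 4294967296
```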
What is TCP/IP?
TCP/IP stands for Transmission Control Protocol/Internet Protocol. It's a set of networking protocols that form the foundation for communication on the internet and most private networks. TCP/IP defines how data is transmitted, routed, addressed, and received across networks. It's a suite of protocols, each serving a specific purpose in the networking process. Here are some key components of TCP/IP:
Transmission Control Protocol (TCP): TCP is a reliable, connection-oriented protocol. It is responsible for breaking data down into segments, ensuring they are sent and received in the correct order, and reassembling them at the destination. It also handles error checking and retransmission of lost or corrupted segments.
Internet Protocol (IP): IP is responsible for addressing and routing packets of data so they can be sent from one device to another across different networks. IP provides a logical addressing scheme using IP addresses (IPv4 or IPv6) to uniquely identify devices on a network.
Port Numbers:
In networking, a port number is a numeric identifier that helps distinguish different communication channels or endpoints within a single device on a network. Ports allow multiple applications or services to share the same IP address while maintaining separate communication streams.
Think of a port number as an extension on a telephone line – the main line (IP address) connects to the device, and the extension (port number) directs the communication to a specific application or service running on that device.
For example, a web server listens on port 80 (HTTP) or port 443 (HTTPS). Clients use these well-known port numbers when initiating communication with the server.
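These well-known mappings are recorded in the operating system's services database, which the standard `socket` module can query (this assumes a typical Unix-like system with an `/etc/services` file):

```python
# Look up the standard port for a well-known service name.
import socket

print(socket.getservbyname("http"))   # 80
print(socket.getservbyname("https"))  # 443
```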
IP versus MAC Address:
IP addresses are used for routing data across networks (internet or local), operate at the network layer, and can be dynamically assigned.
MAC addresses are used for identifying devices within a local network segment, operate at the data-link layer, and are fixed and assigned by the manufacturer. Both IP addresses and MAC addresses play important roles in enabling network communication.
Servers, web servers, and application servers:
A server is a computer program or a physical device that provides services, resources, or functionality to other computers or devices, known as clients, over a network.
A web server is a specific type of server that delivers web content to clients over the World Wide Web. When you type a URL (Uniform Resource Locator) into a web browser, such as "http://www.example.com," your browser sends a request to a web server to retrieve the web page associated with that URL. The web server processes the request and sends back the requested web page, which your browser then renders for you to see.
Key features of web servers include:
1. **HTTP Protocol**: Web servers primarily use the Hypertext Transfer Protocol (HTTP) to communicate with web browsers and other clients. HTTP defines how data is formatted and transmitted between clients and servers.
2. **Handling Requests**: When a web browser sends a request to a web server (e.g., to access a web page or download a file), the web server processes the request, retrieves the appropriate content, and sends it back to the client.
3. **Serving Web Content**: Web servers can host various types of content, including HTML files, images, videos, scripts, and more. They handle requests for these resources and deliver them to clients.
Popular web server software includes:
- **Apache HTTP Server**: One of the most widely used open-source web servers. It's highly customizable and has been in use for decades.
- **Nginx**: Known for its efficiency and ability to handle high concurrency, making it a popular choice for serving static content and as a reverse proxy.
These web server software options handle the core functionality of processing HTTP requests and delivering web content, contributing to the smooth functioning of the World Wide Web.
Web Server:
A web server is responsible for handling and serving static content, such as HTML files, images, CSS, and JavaScript, to clients (usually web browsers) that request them. Web servers are designed to efficiently deliver these static resources to users. They handle incoming HTTP requests, locate the requested files, and send them back to the client's browser. Examples of web servers include Apache HTTP Server and Nginx.
Application Server:
An application server, on the other hand, is responsible for executing dynamic application logic and processing data. It handles more complex tasks, such as retrieving data from databases, processing business logic, and generating dynamic content based on user input. Application servers are often used in web applications that require server-side processing. They interact with databases, communicate with external services, and manage the application's business logic. Examples of application servers include Apache Tomcat, JBoss (WildFly), and IBM WebSphere.
In some scenarios, a web server and an application server can be hosted on the same physical or virtual machine, but they serve distinct purposes.
The web server handles static content delivery and can also act as a reverse proxy to route requests to the appropriate application server based on the requested URLs.
The application server, on the other hand, handles the dynamic aspects of the application, such as processing forms, interacting with databases, and generating personalized responses for users.
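The routing decision a web server like Nginx makes when acting as a reverse proxy can be sketched as a simple function. The file extensions and the backend address below are illustrative assumptions, not a real configuration:

```python
# Sketch of reverse-proxy routing: serve static files directly,
# forward everything else to the application server.
STATIC_EXTENSIONS = (".html", ".css", ".js", ".png", ".jpg")
APP_SERVER = "127.0.0.1:8000"  # hypothetical backend address

def route(path: str) -> str:
    """Decide whether a request is served from disk or proxied upstream."""
    if path.endswith(STATIC_EXTENSIONS):
        return "serve from disk"
    return f"proxy to {APP_SERVER}"

print(route("/logo.png"))   # serve from disk
print(route("/api/users"))  # proxy to 127.0.0.1:8000
```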
What is a Proxy?
A proxy, in the context of computer networks and the internet, is an intermediary server or piece of software that sits between a client (such as a user's computer) and a destination server, often providing additional functions or benefits. Proxies can be used for various purposes, including security, performance optimization, content filtering, and anonymity.
Here are some common types and uses of proxies:
Forward Proxy: A forward proxy, also known as an HTTP proxy or simply a proxy server, sits between clients (such as web browsers) and destination servers (websites). It acts on behalf of clients to request resources from the internet. This type of proxy can be used for various purposes, including:
Caching: Storing frequently requested content to reduce bandwidth usage and improve response times.
Anonymity: Hiding the client's IP address from the destination server, providing a level of user privacy.
Content Filtering: Blocking or filtering specific websites or content based on predefined rules.
Access Control: Restricting access to certain websites or resources based on user roles or policies.
Reverse Proxy: A reverse proxy is placed in front of one or more servers and acts as a gateway for client requests. It distributes incoming requests to different backend servers based on various criteria, such as load balancing, caching, and security. Key benefits of reverse proxies include:
Load Balancing: Distributing incoming traffic across multiple backend servers to improve performance and availability.
SSL Termination: Handling SSL encryption and decryption on behalf of backend servers, offloading the processing burden from them.
Caching: Storing static content to reduce load on backend servers and speed up content delivery to clients.
Security: Acting as a barrier between clients and backend servers, protecting the servers from direct exposure to the internet and potential attacks.
DNS
DNS, which stands for Domain Name System, is like the phonebook of the internet. It's a system that helps your computer find the right website when you type in a web address, like www.example.com. Here's how it works simply:
1. **You type a web address**: When you enter a website's name (like "www.example.com") into your browser, your computer doesn't know where that website is located on the internet.
2. **Request to DNS server**: Your computer sends a request to a DNS server. This server is like a directory that knows the phone numbers (IP addresses) of websites.
3. **DNS server search**: The DNS server looks up the web address you typed and finds the matching IP address. An IP address is a unique set of numbers that identifies a website's location on the internet.
4. **IP address found**: Once the DNS server finds the IP address linked to the web address, it sends it back to your computer.
5. **Connecting to the website**: With the IP address in hand, your computer can now connect to the website's server using that unique address. This is like dialing the phone number you got from the directory.
6. **Website appears**: Your computer contacts the website's server, and the server sends back the webpage you wanted to see. This is displayed in your web browser.
In short, DNS helps you find websites by translating human-friendly web addresses into computer-friendly IP addresses, allowing your devices to connect to the right web servers.
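The whole lookup chain above is exposed in a single standard-library call: the resolver turns a name into an IP address. `localhost` is used below because it resolves locally without network access; a real domain would be resolved the same way via your configured DNS server.

```python
# Name-to-address resolution through the system resolver.
import socket

print(socket.gethostbyname("localhost"))  # 127.0.0.1
# A real lookup would work the same way, but needs network access:
# socket.gethostbyname("www.example.com")
```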
Main types of DNS records:
1. **A Record (Address Record)**: An A record is used to map a domain name to an IP address. For example, if you have the domain "www.example.com" and you want it to point to the IP address "192.168.1.1", you would create an A record for this purpose.
2. **CNAME Record (Canonical Name Record)**: A CNAME record is used to create an alias for a domain name. Instead of pointing directly to an IP address, a CNAME points to another domain name. This is often used for creating subdomains or redirecting one domain to another.
For instance, if you have a CNAME record that makes "blog.example.com" point to "www.example.com", both addresses lead to the same content.
3. **MX Record (Mail Exchange Record)**: MX records are used for email. They specify the mail servers responsible for receiving email on behalf of a domain. When you send an email to someone@domain.com, the recipient's email server is determined by the MX records of the recipient's domain.
4. **TXT Record**: A TXT record allows you to attach human-readable text to a domain. It's often used for adding extra information, such as SPF (Sender Policy Framework) records for email authentication, DKIM (DomainKeys Identified Mail) records, and other purposes like verifying domain ownership for services like Google Workspace.
These DNS record types are crucial for managing various aspects of domain names, such as website hosting, email routing, and ensuring the security and authenticity of your domain.
Round Robin DNS is a technique used to distribute incoming internet traffic across multiple servers or IP addresses in a rotational manner. It's a simple load balancing method that helps distribute the load among multiple servers to prevent any single server from becoming overwhelmed with traffic.
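Round Robin DNS in miniature: the authoritative server returns the same set of A records but rotates their order, so successive clients tend to connect to different servers. The IPs below are from the reserved documentation range and purely illustrative.

```python
# Rotate the order of returned A records on each query.
from itertools import cycle

records = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
rotation = cycle(range(len(records)))

def answer():
    """Return the record set, starting from a different server each time."""
    start = next(rotation)
    return records[start:] + records[:start]

print(answer())  # ['192.0.2.1', '192.0.2.2', '192.0.2.3']
print(answer())  # ['192.0.2.2', '192.0.2.3', '192.0.2.1']
```

Clients typically use the first address in the answer, so the rotation spreads new connections across all three servers.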
What’s the point in having “www” in a URL?
1. **Hosting a Big Website with a CDN**: Imagine you have a large website, and to ensure fast and efficient delivery of content, you decide to use a Content Distribution Network (CDN) like Akamai. CDNs help deliver content from servers located close to the user, improving speed and reliability.
2. **Setting Up DNS with a CNAME**: To set up your website to work with the CDN, you configure your DNS records. Instead of using an A record (which directly points to an IP address), you use a CNAME record. A CNAME points your domain to another domain (in this case, the CDN's domain, like akamai.com). This allows the CDN to provide an IP address that's optimal based on the user's location.
3. **DNS Quirk - CNAME and Other Records**: Here's where the quirk comes in. According to DNS rules, if you set a CNAME record for a particular hostname (e.g., "www"), you can't have any other records (like A, MX, etc.) for the same hostname. However, your main domain (example.com) must have important records like NS (Name Server) and SOA (Start of Authority) records for proper functioning.
4. **Using www Subdomain to Overcome Quirk**: To work around this quirk and still use the benefits of a CDN, you use the "www" subdomain. You set up a CNAME record for "www" to point to your CDN's domain. This allows you to have the necessary NS and SOA records for the main domain (example.com), while still benefiting from the CDN for your website's content delivery.
5. **Redirecting to www**: Since it's common for users to type in "example.com" without the "www," you can set up an A record for "example.com" that points to a server. This server can then send an HTTP redirect to "www.example.com," ensuring that users who type in the domain without the "www" are automatically redirected to the version that benefits from the CDN.
In essence, this setup lets you take advantage of a CDN's benefits while dealing with the DNS limitations that arise from using CNAME records for the root domain. It also ensures that essential DNS records are properly maintained while optimizing content delivery.
Server monitoring:
The practice of continuously observing and tracking the performance, health, and various metrics of computer servers, whether they are physical machines or virtual instances. This process helps ensure that servers run efficiently, securely, and reliably. Here's a breakdown of server monitoring:
Metrics Tracking: Server monitoring involves collecting and analyzing various metrics, such as CPU usage, memory utilization, disk space, network traffic, server response times, and more. These metrics give insight into the server's resource consumption and overall performance.
Availability and Uptime: Monitoring checks if the server is up and running, ensuring it's accessible to users and services. Monitoring tools often send periodic requests to servers and notify administrators if the server becomes unresponsive.
Load-balancing
A load balancer is like a traffic director for incoming requests to a website or application. Its job is to distribute these requests across multiple servers in a way that optimizes performance, prevents overload, and ensures high availability.
Software Load Balancer:
Software load balancers are implemented as software applications or services running on general-purpose servers.
They are more flexible and easier to set up than hardware load balancers.
Examples include Nginx, HAProxy, and Amazon Elastic Load Balancer (ELB).
Hardware Load Balancer:
Hardware load balancers are specialized physical devices designed solely for distributing network traffic.
They are often more powerful and efficient than software load balancers but can be costlier and less flexible.
Examples include F5 BIG-IP, Citrix ADC, and Barracuda Load Balancer.
Load balancing algorithms used by load balancers:
Random: Traffic is sent randomly to servers. Simple to implement, but not very efficient.
Round Robin: Each server gets a turn to handle a request, evenly distributing the load.
Weighted Round Robin: Some servers get more requests based on specified ratios.
Let's say server A is more powerful and can handle more traffic compared to servers B and C. With Weighted Round Robin, you assign weights to each server: A (weight 3), B (weight 2), and C (weight 1).
Dynamic Round Robin: Like Weighted Round Robin, but ratios change based on server performance.
Fastest: Traffic goes to the server with the quickest response time.
Least Connections: Traffic goes to the server with the fewest active connections.
Observed: Combines Least Connections and Fastest algorithms.
Predictive: Similar to Observed, but predicts server performance trends.
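Two of the algorithms above can be sketched in a few lines. The weights match the Weighted Round Robin example (A handles three times the traffic of C); server names and connection counts are illustrative.

```python
from itertools import cycle

# Weighted Round Robin: expand the rotation by each server's weight.
weights = {"A": 3, "B": 2, "C": 1}
schedule = cycle([name for name, w in weights.items() for _ in range(w)])
first_six = [next(schedule) for _ in range(6)]
print(first_six)  # ['A', 'A', 'A', 'B', 'B', 'C']

# Least Connections: pick the server with the fewest active connections.
active = {"A": 12, "B": 4, "C": 7}
print(min(active, key=active.get))  # B
```

Production load balancers implement these far more efficiently, but the selection logic is the same idea.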
Single point of failure
A SPOF is a weak spot in a system's design, setup, or operation that can lead to the entire system failing if that spot malfunctions.
If not addressed, SPOFs can lead to reduced productivity, business disruption, and even security risks.
High-availability systems, such as networks and critical software, are designed to avoid SPOFs. SPOFs can occur in both software and hardware, and even in cloud setups.
Eliminating Single Points of Failure:
To get rid of SPOFs, first identify risks in hardware, software/services, and people.
Redundancy is key: Have backup components at different levels to replace failed ones.
For instance, a server can have multiple hard drives, and a data center can be replicated at another location.
Avi's platform offers a solution to eliminate Single Points of Failure (SPOFs) by providing robust load balancing capabilities. Here's how it works in simpler terms:
By distributing traffic across multiple servers, the platform reduces the risk of a single server failing and causing the entire system to go down.
If one server starts to struggle or fails, Avi's platform automatically redirects incoming traffic to other healthy servers. This ensures that users don't experience disruptions even if one server is having issues.
**Self-Healing Virtual Services**: When a service (like a website or application) hosted on a server fails, the platform can automatically create a new instance of that service on another healthy server. This self-healing process keeps the service operational.
How do you update your production codebase/database without causing downtime?
1. **Initial Setup**:
- Your website is accessed through a load balancer.
- There are two web servers (A and B) that handle user requests.
- You have two database servers (M and N) that store data.
2. **Preparing for Update**:
- You want to update web server A, but you don't want it to handle incoming traffic during the update.
- You pause the process that keeps the database servers synchronized (log shipping).
3. **Updating Web Server A**:
- You update web server A with new changes.
- You configure web server A to use database server M for data.
4. **Testing Web Server A**:
- You check if the update on web server A worked by directly accessing it (not through the load balancer).
5. **Load Balancer Change**:
- You tell the load balancer to send new user requests to web server A, but existing sessions continue on web server B.
6. **Waiting for Sessions to Finish**:
- You wait for the existing user sessions on web server B to complete, which could take around half an hour.
7. **Updating Web Server B and Database Server N**:
- Now you update web server B (since its sessions are done) and database server N.
8. **Testing Web Server B and Database Server N**:
- You test web server B and database server N to make sure the updates are successful.
9. **Resuming Database Synchronization**:
- You restart the process that synchronizes the databases (log shipping) between servers M and N.
10. **Load Balancer Normal Operation**:
- The load balancer is set back to its normal mode, distributing traffic between both web servers.
In more complex situations, these steps can take a longer time, and there might be detailed plans and schedules to ensure that updates don't disrupt the website's availability. Updates are first tested in a controlled environment (QA), and when successful, they're applied to the live website during specific time frames (maintenance windows) to minimize any negative impact on users.
What is HTTPS?
HTTP (Hypertext Transfer Protocol) and HTTPS (Hypertext Transfer Protocol Secure) are both protocols used for transmitting data over the internet. However, they have some key differences in terms of security and data protection.
**HTTP (Hypertext Transfer Protocol):**
- HTTP is the basic protocol used for transmitting data between a web browser and a web server.
- Data transmitted over HTTP is not encrypted or secured in any way.
- This means that any information sent using HTTP, such as login credentials or personal data, can be intercepted and read by malicious actors.
- Websites using only HTTP are not considered secure, and modern browsers often display warnings when users access such sites.
**HTTPS (Hypertext Transfer Protocol Secure):**
- HTTPS is a secure version of HTTP.
- It adds an extra layer of security by using encryption to protect the data being transmitted.
- HTTPS uses TLS (Transport Layer Security), the successor to the older SSL (Secure Sockets Layer), to encrypt data, making it much harder for unauthorized parties to intercept and decipher.
- To use HTTPS, websites need an SSL/TLS certificate, which confirms their authenticity and enables the encryption process.
- Browsers display a padlock symbol or "Secure" label in the address bar to indicate that a website is using HTTPS.
- Many websites, especially those handling sensitive data like login information or payment details, require HTTPS to ensure user privacy and security.
While both HTTP and HTTPS allow data to be transferred between browsers and servers, HTTPS adds encryption and security measures to protect the data from interception and unauthorized access. As a best practice, it's recommended to use HTTPS for any website that handles user data or login information to ensure the privacy and security of users.
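What "HTTPS = HTTP + encryption" means in code: before any HTTP bytes are sent, the client builds a TLS context that verifies the server's certificate and encrypts the connection. Python's standard `ssl` module shows the secure defaults; the commented-out fetch is a sketch of how the context would wrap a TCP socket.

```python
import ssl

# A default client context enforces certificate checks out of the box.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: certificates are verified
print(context.check_hostname)                    # True: hostname must match the cert

# An HTTPS request would wrap a TCP socket with this context, e.g.:
# with socket.create_connection(("www.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
#         ...  # send the HTTP request over the encrypted channel
```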
High availability cluster (active-active/active-passive)
A high availability cluster is like having a backup plan for your computer systems to make sure they stay up and running even if something goes wrong. There are two main types: active-active and active-passive.
**Active-Active High Availability Cluster:**
In an active-active cluster, all the computers or servers are actively working and sharing the load. If one server fails, the others immediately take over its tasks, so there's no interruption in service. It's like having a team of people working together, and if one person needs a break, someone else can step in without stopping the work.
**Active-Passive High Availability Cluster:**
In an active-passive cluster, one computer is actively doing the work while another is standing by in case of trouble. The active one handles all the tasks, and if it fails, the passive one quickly steps in. It's like having a main performer on stage, and if they can't continue, an understudy comes in to keep the show going.
Both types of high availability clusters help make sure that even if something breaks or stops working, your systems can keep running smoothly. This is essential for critical applications or websites that need to be available all the time.
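The active-passive failover logic can be sketched as a tiny state machine. Node names and the simulated crash are illustrative; real clusters use heartbeats and dedicated cluster managers for this.

```python
# Minimal active-passive failover sketch.
class Cluster:
    def __init__(self, active: str, passive: str):
        self.active, self.passive = active, passive
        self.healthy = {active: True, passive: True}

    def handle(self, request: str) -> str:
        """Route to the active node; promote the standby if it has failed."""
        if not self.healthy[self.active]:
            self.active, self.passive = self.passive, self.active
        return f"{self.active} served {request}"

cluster = Cluster("node-1", "node-2")
print(cluster.handle("req-1"))        # node-1 served req-1
cluster.healthy["node-1"] = False     # simulate a crash of the active node
print(cluster.handle("req-2"))        # node-2 served req-2 (failover)
```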
Example:
Design and Whiteboarding: How Web Communication Works
Let's explore how websites work by drawing a simple diagram on a whiteboard. Imagine you're drawing on the board as we explain each step:
1. **Main Server:** Start by drawing a central box to represent the main server. This is where all the important parts come together.
2. **Web Server (Nginx):** Draw a box labeled "Web Server (Nginx)" near the main server box. This is like a gatekeeper that handles requests from users.
3. **Application Server:** Add another box for the "Application Server." This is where the special functions of the website happen.
4. **Database (MySQL):** Draw a box labeled "Database (MySQL)" to represent where data is stored.
5. **Application Files:** Create a box for "Application Files (Your Code Base)." This contains all the instructions for the website.
Now, let's connect the dots with arrows:
- **User's Request:** Draw an arrow from the user towards the Web Server box. This shows the user's request reaching the server.
- **Domain Connection:** Draw another arrow from the server to the text "www.google.com." This arrow means that the website's name connects to the server's address.
Let's talk about the steps:
1. **User Starts:** When a user enters a website link, clicks on something, or fills out a form, they send a request.
2. **Web Server Gets the Request:** The Web Server (Nginx) gets this request and is like a receptionist taking a message.
3. **Web Server Decides:** Depending on what's asked, the Web Server decides what to do:
- **Static Stuff:** If it's simple stuff like pictures, the Web Server shows it directly.
- **Dynamic Stuff:** If it's more complicated, like a special program, it sends the request to the Application Server.
4. **Application Server Does the Job:** The Application Server gets the request and figures out what's needed.
5. **Database Might Be Involved:** If there's a need for data (like showing a user's info), the Application Server talks to the Database (MySQL).
6. **Database Handles Data:** The Database does the work and gets the data ready.
7. **Application Server Creates Content:** The Application Server takes everything, processes it, and makes what's needed.
8. **Application Server Sends Response:** The Application Server makes a complete answer and sends it back.
9. **Web Server Sends Back:** The Web Server gets the answer and sends it back to the user.
10. **User Sees the Result:** The user's computer gets the response and starts showing the website.
11. **User and Website Talk More:** The user might click things or ask for more, starting the process again.
Remember, this is like a conversation between the user and the website's servers. Each one has a role, and they work together to make everything happen smoothly.
Primary Node: The primary node, often called the "master" or "primary server," is the main instance of the database that handles read and write operations from applications. It serves as the authoritative source of data and is responsible for processing and storing changes to the database. When data is modified, the primary node receives the updates and ensures data integrity and consistency.
Replica Node: A replica node, also known as a "secondary," "standby," or "replication node," is a copy of the primary node's data. It is used primarily for read operations, serving as a backup source of data. The purpose of replica nodes is to improve scalability, performance, and fault tolerance. They can help distribute the read workload and provide redundancy in case the primary node becomes unavailable.
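How an application splits traffic between a primary and its replicas can be sketched as a routing function: writes must go to the primary, while reads can be spread across replicas. The hostnames and the simple query classification are illustrative assumptions.

```python
# Route writes to the primary node, reads to a random replica.
import random

PRIMARY = "db-primary.internal"                       # hypothetical hostnames
REPLICAS = ["db-replica-1.internal", "db-replica-2.internal"]

def pick_node(query: str) -> str:
    """Naive read/write split based on the SQL verb."""
    is_write = query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE"))
    return PRIMARY if is_write else random.choice(REPLICAS)

print(pick_node("UPDATE users SET name = 'x'"))  # db-primary.internal
print(pick_node("SELECT * FROM users"))          # one of the replicas
```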
See part 2.
Resources:
https://oa-angel26.medium.com/web-infrastructure-design-4634a2e1b27c
Web Server and Application Server | Explained (video by Hussein Nasser)