The Backend Engineering Show with Hussein Nasser


By Hussein Nasser

Welcome to the Backend Engineering Show podcast with your host Hussein Nasser. If you like software engineering you’ve come to the right place. I discuss all sorts of software engineering technologies and news with specific focus on the backend. All opinions are my own.

Most of the content in this podcast is an audio version of videos I post on my YouTube channel here: www.youtube.com/c/HusseinNasser-software-engineering

Buy me a coffee
www.buymeacoffee.com/hnasr

🧑‍🏫 Courses I Teach
husseinnasser.com/courses
The beauty of the CPU


If you are bored of contemporary topics of AI and need a breather, I invite you to join me to explore a mundane, fundamental and earthy topic.


The CPU.

A reading of my substack article https://hnasr.substack.com/p/the-beauty-of-the-cpu

May 09, 2025 · 09:39
Sequential Scans in Postgres just got faster


This new PostgreSQL 17 feature is a game changer: Postgres can now combine I/Os when performing a sequential scan.

Grab my database course

https://courses.husseinnasser.com


Apr 18, 2025 · 27:37
Does discipline work?


No technical video today, just talking about the idea of discipline and consistency.

Apr 11, 2025 · 10:07
Socket management and Kernel Data structures


Fundamentals of Operating Systems Course

This video is an overview of how the operating system kernel does socket management and the different data structures it utilizes to achieve that.

timestamps

0:00 Intro

1:38 Socket vs Connections

7:50 SYN and Accept Queue

18:56 Socket Sharding

23:14 Receive and Send buffers

27:00 Summary
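As a rough illustration of the socket sharding idea covered in the episode, here is a minimal Python sketch (not the kernel's data structures themselves): with SO_REUSEPORT, several listening sockets can bind the same port and the kernel load-balances incoming connections across their accept queues.

```python
import socket

def make_listener(port, backlog=128):
    # SO_REUSEPORT lets several sockets bind the same address; the
    # kernel then spreads new connections across their accept queues
    # (socket "sharding"). The backlog bounds the accept queue size.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(backlog)
    return s

a = make_listener(0)        # port 0: the kernel assigns a free port
pa = a.getsockname()[1]
b = make_listener(pa)       # a second "shard" listening on the same port
pb = b.getsockname()[1]
a.close(); b.close()
```

This works on Linux when both sockets are created by the same user; without SO_REUSEPORT the second bind would fail with "address already in use".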

Apr 04, 2025 · 31:26
The genius of long polling


Polling is the ability to interrogate a backend to see if a piece of information is ready. It can make the system chatty, and as a result long polling was born. In this video I explain the beauty of this design pattern and how we can push it to its limit. 0:00 Intro 0:45 Polling 2:30 Problem with Polling 3:50 Long Polling 8:18 Timeouts 10:00 Long Polling Benefits 12:00 Make requests into Long Polling 17:36 Request Resumption 21:40 Summary
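A minimal sketch of the pattern in Python (hypothetical names, not a real framework): instead of answering "not ready" immediately, the backend parks the request until data arrives or a timeout fires.

```python
import queue
import threading

events = queue.Queue()

def long_poll(timeout=5.0):
    """Block until an event arrives or the timeout fires.

    A plain poll would return immediately, ready or not; long polling
    parks the request on the backend until there is something to say,
    then the client issues the next poll.
    """
    try:
        return {"status": 200, "data": events.get(timeout=timeout)}
    except queue.Empty:
        return {"status": 204, "data": None}  # timed out: client re-polls

# Simulate the backend producing data 50 ms into the client's wait.
threading.Timer(0.05, events.put, args=("order shipped",)).start()
print(long_poll(timeout=2.0))  # returns as soon as the event lands
```

The timeout is what keeps parked requests from pinning resources forever, which is why long-poll APIs always pair the wait with a deadline.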

Dec 06, 2024 · 28:58
Six stages of a good software engineer



You get better as a software engineer when you go through these stages.


0:00 Intro 

1:15 Understand a technology

7:07 Articulate how it works

15:30 Understand its limitations

19:48 Try to build something better

27:45 Realize what you built also has limitations

32:48 Appreciate the original tech as is




  1. Understand a technology 

We use technologies all the time without knowing how they work. And it is OK not to know how things work if the interest isn't there. But when there is interest in understanding how something works, pursue it. It feels good when you understand how something works, because you work better with it; you swim with the tide instead of against it.


When I learned how TCP/IP works, I started to appreciate every connection request and how requests are read. You start asking questions:


What is my code doing here?

When exactly am I creating connections?

When am I reading from the connection? 

Is it safe to share connections?



  2. Articulate how it works

This one is not easy; you might think you understand something until you try to explain how it works. If you find yourself using jargon, you probably don't understand it and are just trying to impress others. Have you seen people who talk about something just to show they understand it? It's the opposite. Try to truly articulate how it works and you will really understand it; if not, go back to stage 1.


I thought I understood how a backend reads requests until I tried to talk about it.


  3. Understand the technology's limitations


Once stages 1 and 2 are done you will truly understand the tech. Now you are confident, you are excited about the tech, and you will see when you can use it to its full potential; you will also know the weak points where it breaks. This happens a lot with TCP/IP; we know TCP's limitations.


  4. Try to build something better

This one is optional and can be skipped, but attempting to design or build something better than the tech, because you know its limitations, will truly reveal how much better you have become. The challenge here is ego: you might understand the limitations, but your problem is thinking that what you build will be flawless. Proceed with this step with caution.


  5. Realize what you build also has limitations

The dust settles. This step hurts, and it may take you a while to get there, but whatever you build will have flaws, and realizing this is when you get better as an engineer.


  6. Appreciate the tech as is

This is when you come full circle, back to the first stage: look at the technology and understand it, but don't judge it. Just know its limitations and its strengths and flow with it. Stop fighting and instead build around the tech. Does that mean you shouldn't build anything new? Of course not. Go build, but don't stress about making something better to defeat the existing tech; build it for the sake of building it.


Nov 01, 2024 · 39:28
This new Linux patch can speed up Reading Requests


Fundamentals of Operating Systems Course https://oscourse.win Very clever! We often call the read/recv system call to read requests from a connection; this copies data from the kernel receive buffer to user space, which has a cost. This new patch allows zero copy with a notification: “Reading data out of a socket instead becomes a notification mechanism, where the kernel tells userspace where the data is.” This kernel patch enables zero copy from the receive queue. https://lore.kernel.org/io-uring/ZwW7_cRr_UpbEC-X@LQ3V64L9R2/T/ 0:00 Intro 1:30 patch summary 7:00 Normal Connection Read (Kernel Copy) 12:40 Zero copy Read 15:30 Performance

Oct 25, 2024 · 18:12
Cloudflare's 150ms global cache purge | Deep Dive


Cloudflare built a global cache purge system that runs under 150 ms.


This is how they did it.


Using RocksDB to maintain a local CDN cache, a peer-to-peer data-center distribution system, and clever engineering, they went from a 1.5-second purge down to 150 ms.


However, this isn't the full picture, because that 150 ms is actually just the P50. In this video I explore how the Cloudflare CDN works and how the old core-based, centralized Quicksilver lazy purge compares to the new coreless, decentralized active purge. I explore the pros and cons of both systems and give you my thoughts on the design.


0:00 Intro

4:25 From Core Base Lazy Purge to Coreless Active

12:50 CDN Basics

16:00 TTL Freshness

17:50 Purge

20:00 Core-Based Purge

24:00 Flexible Purges

26:36 Lazy Purge

30:00 Old Purge System Limitations

36:00 Coreless / Active Purge

39:00 LSM vs BTree

45:30 LSM Performance issues

48:00 How Active Purge Works

50:30 My thoughts about the new system

58:30 Summary


Cloudflare blog

https://blog.cloudflare.com/instant-purge/



Mentioned Videos



Percentile Tail Latency Explained (95%, 99%) Monitor Backend performance with this metric

https://www.youtube.com/watch?v=3JdQOExKtUY


How Discord Stores Trillions of Messages | Deep Dive

https://www.youtube.com/watch?v=xynXjChKkJc


Fundamentals of Operating Systems Course

https://os.husseinnasser.com


Backend Troubleshooting Course

https://performance.husseinnasser.com

Oct 18, 2024 · 01:02:22
MySQL is having a bumpy journey


Fundamentals of Database Engineering udemy course https://databases.win MySQL has had a bumpy journey since 2018 with the release of version 8.0: critical crashes that made it into the final product, significant performance regressions, and tons of stability issues and bugs. In this video I explore what happened to MySQL. Are these issues getting fixed? And what is the state of MySQL at the end of 2024? 0:00 Intro 2:00 MySQL 8.0 vs 5.7 Performance 11:00 Critical Crash in 8.0.38, 8.4.1 and 9.0.0 15:40 Is 8.4 better than 8.0.36? 16:30 More Features = More Bugs 22:30 Summary and my thoughts resources https://x.com/MarkCallaghanDB/status/1786428909376164263 https://www.percona.com/blog/do-not-upgrade-to-any-version-of-mysql-after-8-0-37/ http://smalldatum.blogspot.com/2024/09/mysql-innodb-vs-sysbench-on-large-server.html https://www.percona.com/blog/mysql-8-0-vs-5-7-are-the-newer-versions-more-problematic/

Sep 28, 2024 · 28:34
How many kernel calls in NodeJS vs Bun vs Python vs native C


Fundamentals of Operating Systems Course https://oscourse.win In this video I use strace, a tool that measures how many system calls a process makes. We compare the simple task of reading from a file across different runtimes, namely NodeJS, Bun, Python, and native C, and discuss the cost of kernel mode switches and system calls. 0:00 Intro 5:00 Code Explanation 6:30 Python 9:30 NodeJS 12:30 BunJS 13:12 C 16:00 Summary

Sep 20, 2024 · 20:42
When do you use threads?


Fundamentals of Operating Systems Course https://os.husseinnasser.com

When do you use threads? I would say in scenarios where the task is 1) an IO-blocking task, 2) CPU heavy, or 3) a large volume of small tasks. In any of these cases it is favorable to offload the task to a thread.

1) IO-blocking task. When you read from or write to disk, depending on how you do it and the kernel interface you use, the call might block. This means the process executing the IO is not allowed to run any more code until the write/read completes. That is why most logging operations are done on a secondary thread (like libuv, which Node uses); this way the thread is blocked but the main process/thread can resume its work. If you can do file reads/writes asynchronously, with say io_uring, then you technically don't need threading. Notice I said file IO, because it is different from socket IO, which is always done asynchronously with epoll/select etc.

2) CPU heavy. The second use case is when the task requires lots of CPU time, which starves/blocks the rest of the process from doing its normal job. Offloading that task to a thread, so it runs on a different core, allows the main process to continue running on its original core.

3) Large volume of small tasks. The third use case is when you have a large number of small tasks and a single process can't deliver enough throughput. An example is accepting connections: a single process can only accept connections so fast. To increase throughput when a massive number of clients is connecting, you spin up multiple threads to accept those connections, and of course read and process requests. Perhaps you would also enable port reuse to avoid accept-mutex locking.

Keep in mind threads come with their own challenges and problems, so avoid them when they are not required.

0:00 Intro 1:40 What are threads? 7:10 IO blocking Tasks 17:30 CPU Intensive Tasks 22:00 Large volume of small tasks
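Case 1 above can be sketched in a few lines of Python (hypothetical names; `time.sleep` stands in for a blocking read or write): the blocking work is offloaded to a thread so the main thread stays free.

```python
import threading
import time

results = []

def blocking_io():
    # Stand-in for a blocking disk read/write: the *thread* blocks here,
    # but the main thread is not held up.
    time.sleep(0.1)
    results.append("io done")

t = threading.Thread(target=blocking_io)
t.start()                           # offload the blocking task
results.append("main kept working") # main thread continues immediately
t.join()                            # wait for the IO thread to finish
print(results)
```

Because the main thread appends its entry while the IO thread is still sleeping, its entry lands first, which is exactly the responsiveness the offload buys you.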

Sep 13, 2024 · 31:09
Frontend and Backends Timeouts



I am fascinated by how timeouts affect backend and frontend programming.

When a party is waiting on something, you can place a timeout to break the wait. This is useful for freeing resources for more critical processes, detecting slow operations, and even avoiding DoS attacks.

Contrary to common belief, timeouts are not exclusive to request processing; they can be applied to other parts of frontend-backend communication. Let us explore this briefly.


0:00 Intro

2:30 Connection Timeout

5:00 Request Read timeout

10:00 Wait Timeout 

12:00 Usage Timeout

14:00 Response Timeout

16:00 Canceling a request

19:50 Proxies and timeouts
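Two of the timeouts above, sketched with a plain Python TCP socket (a toy setup, not a production client): the connect timeout bounds connection establishment, and a read timeout breaks the wait for a response that never comes.

```python
import socket

# A listening server that accepts connections but never writes a byte.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: kernel picks a free port
srv.listen(1)

# Connect timeout: bounds how long connection establishment may take.
cli = socket.create_connection(srv.getsockname(), timeout=1.0)

# Response/read timeout: bounds how long we wait for data.
cli.settimeout(0.05)
try:
    cli.recv(1024)           # the server never writes, so this waits...
    timed_out = False
except socket.timeout:
    timed_out = True         # ...until the timeout breaks the wait
print(timed_out)

cli.close()
srv.close()
```

Without `settimeout`, that `recv` would block indefinitely, tying up the caller's resources, which is the core argument of the episode.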

Sep 07, 2024 · 25:23
Postgres is combining IO in version 17



Learn more about database and OS internals, check out my courses 

Fundamentals of database engineering https://databases.win 

Fundamentals of operating systems https://oscourse.win



This new PostgreSQL 17 feature is a game changer.


You see, Postgres, like most databases, works with fixed-size pages. Pretty much everything is in this format: indexes, table data, etc. Those pages are 8K in size; each page has the rows or index tuples and a fixed header. The pages are just bytes in files, and they are read and cached in the buffer pool.


To read page 0, for example, you would call read on offset 0 for 8192 bytes. To read page 1, that is another read system call from offset 8192 for 8192 bytes; page 7 is offset 57344 for 8192, and so on.


If a table is 100 pages stored in a file, a full table scan would make 100 system calls, and each system call has an overhead (I talk about all of that in my OS course).
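The offset arithmetic above can be sketched in Python (a toy model, not Postgres code): one pread per page at offset `page_no * 8192`, so a 100-page scan is 100 system calls.

```python
import os
import tempfile

PAGE_SIZE = 8192  # Postgres default page size

def read_page(fd, page_no):
    # One read system call per page: pread of 8192 bytes
    # at offset page_no * 8192.
    return os.pread(fd, PAGE_SIZE, page_no * PAGE_SIZE)

# Build a fake 100-page "table" file; byte value n marks page n.
fd, path = tempfile.mkstemp()
try:
    for n in range(100):
        os.pwrite(fd, bytes([n]) * PAGE_SIZE, n * PAGE_SIZE)
    page7 = read_page(fd, 7)  # offset 7 * 8192 = 57344
finally:
    os.close(fd)
    os.unlink(path)
```

Scanning all 100 pages this way means 100 user/kernel round trips, which is exactly the overhead the Postgres 17 change attacks.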


The enhancement in Postgres 17 is to combine I/Os: you can specify how much IO to combine, so while technically you could scan that entire table in one system call, that doesn't mean it's always a good idea, and I'll talk about that.


This also seems to include vectorized I/O with the preadv system call, which takes a starting offset and an array of buffers to fill (a scatter read).
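A small sketch of that vectorized read using Python's `os.preadv` (a toy model of the idea, not Postgres internals): one system call fills several page-sized buffers starting at a single offset, so four consecutive pages of a scan cost one kernel crossing instead of four.

```python
import os
import tempfile

PAGE_SIZE = 8192

# Write four consecutive "pages"; byte value n marks page n.
fd, path = tempfile.mkstemp()
try:
    for n in range(4):
        os.pwrite(fd, bytes([n]) * PAGE_SIZE, n * PAGE_SIZE)

    # One preadv call scatters the read across four page-sized buffers,
    # starting at offset 0: four pages combined into a single system call.
    bufs = [bytearray(PAGE_SIZE) for _ in range(4)]
    nread = os.preadv(fd, bufs, 0)
finally:
    os.close(fd)
    os.unlink(path)
```

`os.preadv` is available on Linux (Python 3.7+); each buffer in the list comes back holding one page's bytes.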


The challenge becomes how not to read too much. Say I'm doing a seq scan to find something: I read page 0, find what I need, and quit; I don't need to read any more pages. With this feature I might read 10 pages in one I/O, pull all their content, and put it in shared buffers, only to find my result in the first page (essentially wasting disk bandwidth, memory, etc.).


It is going to be interesting to balance this out. 


Sep 02, 2024 · 27:39
Windows vs Linux Kernel


Fundamentals of Operating Systems Course https://os.husseinnasser.com Why the Windows kernel connects slower than Linux: I explore the behavior of the TCP/IP stack in the Windows kernel when it receives a RST from the backend server, especially when the host is available but the port we are trying to connect to is not. This behavior is exacerbated by having both IPv6 and IPv4, and by the happy eyeballs protocol, where IPv6 is favored. 0:00 Intro 0:30 Fundamentals TCP/IP 3:00 Unreachable Port Behavior 6:00 Client Kernel Behavior (Linux vs Windows) 11:40 Slow TCP Connect on Windows 15:00 localhost, IPv6 and IPv4 20:00 Happy Eyeballs 28:00 Registry keys to change the behavior 31:00 Port Unreachable vs Host Unreachable https://daniel.haxx.se/blog/2024/08/14/slow-tcp-connect-on-windows/
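The "host up, port closed" case from the episode is easy to reproduce in Python (a toy probe, run against loopback): on Linux the kernel answers the SYN with a RST, so connect() fails immediately rather than retrying the way Windows does.

```python
import socket

# Find a port with no listener: bind to port 0, note the assigned
# port, then close the socket so nothing is listening there.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

# The host is reachable but the port is closed, so the kernel answers
# the SYN with a RST and connect() fails right away (on Linux).
try:
    socket.create_connection(("127.0.0.1", closed_port), timeout=2.0)
    refused = False
except ConnectionRefusedError:
    refused = True
```

On Windows, per the episode, the stack retransmits the SYN before giving up, which is what makes the same failed connect noticeably slower there.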

Aug 30, 2024 · 37:23
Running out of TCP ephemeral source ports



In this episode of the backend engineering show I describe an interesting bug I ran into where the web server ran out of ephemeral ports causing the system to halt. 


0:00 Intro

0:30 System architecture 

2:20 The behavior of the bug

4:00 Backend Troubleshooting

7:00 The cause

15:30 Ephemeral ports on loopback
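To see the resource this bug exhausted, here is a tiny Python sketch: when a client connects without binding, the kernel assigns an ephemeral source port from a finite range (on Linux, see `/proc/sys/net/ipv4/ip_local_port_range`), and each outbound connection to the same destination consumes one.

```python
import socket

# A local server to connect to.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# The client never calls bind(), so the kernel picks an ephemeral
# source port for the connection; run out of these and connect() fails.
cli = socket.create_connection(srv.getsockname())
src_port = cli.getsockname()[1]   # the ephemeral port the kernel chose
print(src_port)

cli.close()
srv.close()
```

Since the (source IP, source port, dest IP, dest port) tuple must be unique, a proxy hammering a single upstream from one IP can burn through the whole range, which is the halt described in the episode.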


Aug 25, 2024 · 20:06
io uring gets even faster


Fundamentals of Operating Systems Course https://os.husseinnasser.com Linux I/O expert and subsystem maintainer Jens Axboe has submitted all of the io_uring feature updates ahead of the imminent Linux 6.10 merge window. In this video I explore this with a focus on zero copy. 0:00 Intro 0:30 IO_uring gets faster 2:00 What is io_uring 7:00 How Normal Copying Works 12:00 How Zero Copy Works 13:50 ZeroCopy and TLS https://www.phoronix.com/news/Linux-6.10-IO_uring https://lore.kernel.org/io-uring/fef75ea0-11b4-4815-8c66-7b19555b279d@kernel.dk/?s=09

May 20, 2024 · 16:35
They made Python faster with this compiler option


Fundamentals of Operating Systems Course https://oscourse.win Looks like Fedora is compiling CPython with the -O3 flag, which does aggressive function inlining among other optimizations. This seems to improve Python benchmark performance by up to 1.16x at the cost of an extra 3MB in binary size (text segment), although it does seem to slow down some benchmarks as well, though not significantly. O1 - local register allocation, subexpression elimination O2 - function inlining of small functions only O3 - aggressive inlining, SIMD 0:00 Intro 1:00 Fedora Linux gets Fast Python 5:40 What is Compiling? 9:00 Compiling with No Optimization 12:10 Compiling with -O1 15:30 Compiling with -O2 20:00 Compiling with -O3 23:20 Showing Numbers Backend Troubleshooting Course https://performance.husseinnasser.com

May 07, 2024 · 29:04
How Apache Kafka got faster by switching ext4 to XFS
Apr 29, 2024 · 33:52
Google Patches Linux kernel with 40% TCP performance


Get my backend course https://backend.win


Google submitted a patch to Linux Kernel 6.8 to improve TCP performance by 40%. This is done by rearranging the TCP structures for better CPU cache-line locality; I explore this here. 0:00 Intro 0:30 Google improves Linux Kernel TCP by 40% 1:40 How CPU Cache Line Works 6:45 Reviewing the Google Patch https://www.phoronix.com/news/Linux-6.8-Networking https://lore.kernel.org/netdev/20231129072756.3684495-1-lixiaoyan@google.com/ Discovering Backend Bottlenecks: Unlocking Peak Performance https://performance.husseinnasser.com

Mar 05, 2024 · 14:24
Database Torn pages


0:00 Intro

2:00 File System Block vs Database Pages

4:00 Torn pages or partial page

7:40 How Oracle Solves torn pages

8:40 MySQL InnoDB Doublewrite buffer

10:45 Postgres Full page writes



Feb 29, 2024 · 15:33
Cloudflare Open sources Pingora (NGINX replacement)
Feb 28, 2024 · 31:06
The Internals of MongoDB


https://backend.win

https://databases.win


I’m a big believer that database systems share similar core fundamentals at their storage layer and understanding them allows one to compare different DBMS objectively. For example, How documents are stored in MongoDB is no different from how MySQL or PostgreSQL store rows. 

Everything goes to pages of fixed size and those pages are flushed to disk. 


Each database defines its page size differently based on its workload; for example, MongoDB's default page size is 32KB, MySQL InnoDB's is 16KB, and PostgreSQL's is 8KB.


The trick is to fetch what you need from disk efficiently, with as few I/Os as possible; the rest is API.


In this video I discuss the evolution of MongoDB's internal architecture, how documents are stored and retrieved, focusing on the index storage representation. I assume the listener is well versed in the fundamentals of database engineering such as indexes, B+Trees, data files, and WAL; you may pick up my database course to learn these skills.

Let us get started.

Feb 19, 2024 · 44:57
The Beauty of Programming Languages


In this video I explore the types of languages: compiled, garbage collected, interpreted, JIT, and more.


Feb 19, 2024 · 18:17
The Danger of Defaults - A PostgreSQL Story


I talk about default values and how PostgreSQL 14 got slower when a default parameter has changed. Mike's blog https://smalldatum.blogspot.com/2024/02/it-wasnt-performance-regression-in.html



Feb 18, 2024 · 11:35
Database Background writing


Background writing is a process that writes dirty pages in the shared buffer to disk (well, they go to the OS file cache and then get flushed to disk by the OS). I go into this process in this video.
Feb 16, 2024 · 09:09
The Cost of Memory Fragmentation


Fragmentation is a very interesting topic to me, especially when it comes to memory. While virtual memory does solve external fragmentation (you can still allocate logically contiguous memory in non-contiguous physical memory), it does introduce performance delays as we jump all over the physical memory to read what appears to us as, for example, a contiguous array in virtual memory.

You see, DDR RAM consists of banks, rows, and columns. Each row has around 1024 columns and each column has 64 bits, which makes a row around 8 KiB. The cost of accessing the RAM is the cost of “opening” a row and all its columns (around 50-100 ns); once the row is opened, all the columns are open and the 8 KiB is cached in the row buffer in the RAM. The CPU can ask for an address and transfer 64 bytes at a time (called bursts), so if the CPU (or the MMU, to be exact) asks for the next 64 bytes, it comes at no cost because the entire row is cached in the RAM. However, if the CPU sends a different address in a different row, the old row must be closed and a new row opened, taking an additional 50 ns hit. So spatial locality of access ensures efficiency, and fragmentation does hurt performance if the data you are accessing is not contiguous in physical memory (of course it doesn't matter whether it is contiguous in virtual memory). This reminds me of the old days of HDDs, when the disk needle physically traveled across the disk to read one file, which prompted the need for “defragmentation”, although RAM access (and SSD NAND for that matter) isn't as bad.

Moreover, virtual memory introduces internal fragmentation because of the use of fixed-size blocks (called pages, often 4 KiB in size), which are mapped to frames in physical memory. So if you want to allocate a 32-bit integer (4 bytes), you get 4 KiB worth of memory, leaving a whopping 4092 bytes allocated for the process but unused, which cannot be used by the OS. These little pockets of memory can add up across many processes. Another reason developers should take care when allocating memory.
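The internal fragmentation arithmetic above is easy to check with a few lines of Python (a toy calculation over an assumed 4 KiB page size): round every allocation up to whole pages and count the leftover bytes.

```python
PAGE = 4096  # common page size, 4 KiB

def page_waste(nbytes):
    # Allocations are granted in whole pages; the remainder of the
    # last page is internal fragmentation.
    pages = -(-nbytes // PAGE)       # ceiling division
    return pages * PAGE - nbytes     # allocated but unused bytes

print(page_waste(4))      # a 4-byte int on its own page wastes 4092 bytes
print(page_waste(4097))   # one byte over a page wastes almost a full page
```

In practice user-space allocators (malloc, Python's pymalloc) pack many small objects into each page precisely to avoid paying this waste per allocation.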

Jan 29, 2024 · 39:07
The Real Hidden Cost of a Request
Dec 13, 2023 · 13:09
Why create Index blocks writes


Fundamentals of Database Engineering udemy course (link redirects to udemy with coupon) https://database.husseinnasser.com Why create Index blocks writes In this video I explore how create index, why does it block writes and how create index concurrently work and allow writes. 0:00 Intro 1:28 How Create Index works 4:45 Create Index blocking Writes 5:00 Create Index Concurrently

Oct 28, 2023 · 13:02
The Problems of an HTTP/3 Backend


HTTP/3 is getting popular in the cloud scene but before you migrate to HTTP/3 consider its cost. I explore it here. 0:00 Intro HTTP/3 is getting popular 3:40 HTTP/1.1 Cost 5:18 HTTP/2 Cost 6:30 HTTP/3 Cost https://blog.apnic.net/2023/09/25/why-http-3-is-eating-the-world/

Oct 05, 2023 · 13:53
Encrypted Client Hello - The Pros & Cons



The Encrypted Client Hello or ECH is a new RFC that encrypts the TLS client hello to hide sensitive information like the SNI. In this video I go through pros and cons of this new rfc. 0:00 Intro 2:00 SNI 4:00 Client Hello 8:40 Encrypted Client Hello 11:30 Inner Client Hello Encryption 18:00 Client-Facing Outer SNI 21:20 Decrypting Inner Client Hello 23:30 Disadvantages 26:00 Censorship vs Privacy ECH https://blog.cloudflare.com/announcing-encrypted-client-hello/ https://chromestatus.com/feature/6196703843581952

Sep 29, 2023 · 33:18
The Journey of a Request to the Backend
Aug 01, 2023 · 52:59
They Enabled Postgres Partitioning and their Backend fell apart
Jun 24, 2023 · 33:38
WebTransport - A Backend Game Changer


WebTransport is a cutting-edge protocol framework designed to support multiplexed and secure transport over HTTP/2 and HTTP/3. It brings together the best of web and transport technologies, providing an all-in-one solution for real-time, bidirectional communication on the web.

Watch full episode (subscribers only) https://spotifyanchor-web.app.link/e/cTSGkq5XuAb


Jun 09, 2023 · 15:01
WebTransport - A Backend Game Changer


WebTransport is a cutting-edge protocol framework designed to support multiplexed and secure transport over HTTP/2 and HTTP/3. It brings together the best of web and transport technologies, providing an all-in-one solution for real-time, bidirectional communication on the web.
Jun 09, 2023 · 31:12
Your SSD lies but that's ok | Postgres fsync


fsync is a Linux system call that flushes all pages and metadata for a given file to the disk. It is indeed an expensive operation, but it is required for durability, especially for database systems. Regular writes that make it to the disk controller are often placed in the SSD's local cache to accumulate more writes before getting flushed to the NAND cells. However, when the disk controller receives a flush command, it is required to immediately persist all of the data to the NAND cells. Some SSDs however don't do that; they don't trust the host and no-op the fsync. In this video I explain this in detail and go through how Postgres provides so many options to fine-tune fsync. 0:00 Intro 1:00 A Write doesn’t write 2:00 File System Page Cache 6:00 Fsync 7:30 SSD Cache 9:20 SSD ignores the flush 9:30 15 Year old Firefox fsync bug 12:30 What happens if SSD loses power 15:00 What options does Postgres expose? 15:30 open_sync (O_SYNC) 16:15 open_datasync (O_DSYNC) 17:10 O_DIRECT 19:00 fsync 20:50 fdatasync 21:13 fsync = off 23:30 Don’t make your API simple 26:00 Database on metal?
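The write-vs-flush distinction from the episode, sketched in Python (a minimal example; the file name is a temp file, not anything Postgres uses): `os.write` alone lands in the OS page cache, and `os.fsync` is the explicit durability point that asks the kernel to push the file's pages and metadata down to the disk.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()

# write() returns once the bytes are in the OS page cache; a power
# loss here could still lose them.
os.write(fd, b"WAL record")

# fsync() is the durability point: it returns only after the kernel
# has issued the flush for this file's data and metadata. Whether the
# SSD controller then honors that flush is the episode's whole point.
os.fsync(fd)

os.close(fd)
```

This is why databases call fsync on their WAL at commit, and why an SSD that silently no-ops the flush undermines the durability guarantee the database thinks it has.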

May 25, 2023 · 30:04
The problem with software engineering


Ego is the main contributor to a defective software product; the ego of the engineer or the tech lead seeps into the quality of the product. Fundamentals of Backend Engineering Design patterns udemy course (link redirects to udemy with coupon) https://backend.husseinnasser.com

May 21, 2023 · 17:39
2x Faster Reads and Writes with this MongoDB feature | Clustered Collections



Fundamentals of Database Engineering udemy course (link redirects to udemy with coupon) https://database.husseinnasser.com


In version 5.3, MongoDB introduced a feature called clustered collections, which stores documents in the _id index as opposed to the hidden WiredTiger index. This eliminates an entire B+tree seek for reads using the _id index and also removes the additional write to the hidden index, speeding up both reads and writes.


However, as we know in software engineering, everything has a cost. This feature comes with a few costs that one must be aware of before using it. In this video I discuss the following:


  • How Original MongoDB Collections Work
  • How Clustered Collections Work
  • Benefits of Clustered Collections
  • Limitations of Clustered Collections

 



May 11, 2023 · 27:02
Prime Video Swaps Microservices for Monolith: 90% Cost Reduction


The Prime Video engineering team posted a blog detailing how they moved their live stream monitoring service from microservices to a monolith, reducing their cost by 90%. Let us discuss this. 0:00 Intro 2:00 Overview 10:35 Distributed System Overhead 21:30 From Microservices to Monolith 29:00 Scaling the Monolith 32:30 Takeaways https://www.primevideotech.com/video-streaming/scaling-up-the-prime-video-audio-video-monitoring-service-and-reducing-costs-by-90 Fundamentals of Backend Engineering Design patterns udemy course (link redirects to udemy with coupon) https://backend.husseinnasser.com

May 06, 2023 · 35:58
A Deep Dive in How Slow SELECT * is


Fundamentals of Database Engineering udemy course (link redirects to udemy with coupon) https://database.husseinnasser.com In a row-store database engine, rows are stored in units called pages. Each page has a fixed header and contains multiple rows, with each row having a record header followed by its respective columns. When the database fetches a page and places it in the shared buffer pool, we gain access to all rows and columns within that page. So, the question arises: if we have all the columns readily available in memory, why would SELECT * be slow and costly? Is it really as slow as people claim it to be? And if so why is it so? In this post, we will explore these questions and more. 0:00 Intro 1:49 Database Page Layout 5:00 How SELECT Works 10:49 No Index-Only Scans 18:00 Deserialization Cost 21:00 Not All Columns are Inline 28:00 Network Cost 36:00 Client Deserialization https://medium.com/@hnasr/how-slow-is-select-8d4308ca1f0c

May 02, 2023 · 39:24
AWS Serverless Lambda Supports Response Streaming



Lambda now supports response payload streaming: you can flush changes to the network socket as soon as they are available and they will be written to the client socket. I think this is a game-changing feature.



0:00 Intro

1:00 Traditional Lambda

3:00 Server Sent Events & Chunk-Encoding

5:00 What happens to clients?

6:00 Supported Regions

7:00 My thoughts


Fundamentals of Backend Engineering Design patterns udemy course (link redirects to udemy with coupon)

https://backend.husseinnasser.com


Apr 07, 2023 · 13:14
The Cloudflare mTLS vulnerability - A Deep Dive Analysis


Cloudflare released a blog detailing a vulnerability that had been in their system for nearly two years. It is related to mTLS, or mutual TLS, and specifically client certificate revocation. I explore this in detail. 0:00 Intro 3:00 The Vulnerability 7:00 What happened? 8:50 Certificate Revocation 12:30 Rejecting certain endpoints 17:00 Certificate Authentication 20:30 Certificate serial number 24:00 Session Resumption (PSK) 35:00 The bug 37:00 How they addressed the problem Fundamentals of Backend Engineering Design patterns udemy course (link redirects to udemy with coupon) https://backend.husseinnasser.com

Apr 06, 2023 · 43:13
The Virgin Media ISP outage - What happened?


BGP (Border Gateway Protocol) route withdrawals caused Virgin Media ISP customers to lose their Internet connection. I go into details in this video. 0:00 Intro 2:00 What happened? 4:11 How BGP works? 11:50 Virgin Media withdrawals 15:00 Deep dive Fundamentals of Backend Engineering Design patterns udemy course (link redirects to udemy with coupon) https://backend.husseinnasser.com

Apr 06, 2023 · 23:24
GitHub SSH key is Leaked - How bad is this?


GitHub accidentally exposed their SSH RSA private key. This is the message you will get: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. The fingerprint for the RSA key sent by the remote host is SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s. Please contact your system administrator. Add correct host key in ~/.ssh/known_hosts to get rid of this message. Host key for github.com has changed and you have requested strict checking. Host key verification failed. In this video I discuss how bad this is. 0:00 Intro 1:10 What happened? 3:00 SSH vs TLS Authentication 6:00 SSH Connect 7:45 How bad is the github leak? 15:00 What should you do? 18:50 Is ECDSA immune? https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/

Mar 30, 2023 · 21:57
Cookie Hijacking - How Linus Tech Tips got Hacked


In this short video we explain how it was possible for Linus Tech Tips to get hacked via cookie hijacking. 0:00 Intro 0:47 TLDR what happened 5:10 Cookies in Chrome 7:30 Cookies Hijacking 8:46 Session Tokens (Access/Refresh) 10:00 Remedies


Mar 29, 2023 · 13:34
All Postgres Locks Explained | A Deep Dive
Mar 19, 2023 · 49:11
Pinterest moves to HTTP/3
Mar 16, 2023 · 25:54
Why Loom Users got each others’ sessions on March 7th 2023


On March 7, 2023, Loom users started seeing each other's data as a result of cookies getting leaked from the CDN. This Loom security breach is really critical. Let us discuss. 0:00 Intro 1:00 Why Cookies 2:00 How this happens 5:50 What caused it? 7:30 How Loom solved it? 8:20 Reading the RCA 10:30 Remedies

Mar 14, 2023 · 14:58
How Discord Stores Trillions of Messages - A deep dive
Mar 11, 2023 · 01:09:20
Postgres Architecture | The Backend Engineering Show
Feb 16, 2023 · 34:04
How Alt-Svc switches HTTP/2 clients to use HTTP/3 | The Backend Engineering Show


The Alt-Svc header/frame is a capability that allows the server to advertise alternative services, available on other protocols, ports, or domains, to the connected client. It is available as the alt-svc response header and also as an HTTP/2 frame. Let us discuss this capability.

0:00 Intro

1:38 what is alt-svc?

5:30 uses of h3 in alt-svc

8:00 alt-svc header

10:00 Alt-svc header with 103 early hints

14:48 h2 altsvc frame

18:30 SVCB DNS record

21:20 Summary

Fundamentals of Backend Engineering Design patterns udemy course (link redirects to udemy with coupon)

https://backend.husseinnasser.com

Feb 13, 2023 · 23:58