Full Code of Shivam5022/Systems-and-CPP for AI

Repository: Shivam5022/Systems-and-CPP
Branch: main
Commit: f2897c4b1222
Files: 36
Total size: 124.5 KB

Directory structure:
gitextract_7xs099x_/

├── .gitignore
├── README.md
└── notes/
    ├── CRTP.md
    ├── RAII.md
    ├── allocators.md
    ├── atomic_instructions.md
    ├── buffer_overflow.md
    ├── casting.md
    ├── cheat.md
    ├── const_constexpr.md
    ├── cpp_question_bank.md
    ├── exceptions.md
    ├── find.md
    ├── function_inlining.md
    ├── git-sheet.md
    ├── http.md
    ├── lambdas.md
    ├── latency_numbers.md
    ├── linux_tcp.md
    ├── memory_reordering.md
    ├── metaprogramming.md
    ├── move_semantics.md
    ├── os_booting.md
    ├── packet_handling.md
    ├── padding_packing.md
    ├── performance.md
    ├── placement_new.md
    ├── pre-post-increment.md
    ├── program_to_process.md
    ├── rvo.md
    ├── set_pq.md
    ├── smart_pointers.md
    ├── templates.md
    ├── tmux.md
    ├── vim.md
    └── virtual_functions.md

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
.DS_Store
.vscode/

================================================
FILE: README.md
================================================
# Shivam's Knowledgebase

## CS Fundamentals

### Operating Systems

- Operating Systems: A Linux Kernel-Oriented Perspective by Prof. Smruti Sarangi:
  [Book Link](https://www.cse.iitd.ac.in/~srsarangi/osbook/index.html)
- OS lectures by Prof. Sorav Bansal: [Youtube Link](https://www.youtube.com/playlist?list=PLf3ZkSCyj1tdCS2oCYACXO6x-VKpDIMB6)

### Caches

- YouTube playlist by Prof. Harry Porter: [Link](https://www.youtube.com/playlist?list=PLbtzT1TYeoMgJ4NcWFuXpnF24fsiaOdGq)
- Cache Coherence: See relevant videos [here](https://www.youtube.com/watch?v=ISaYWm8T8n4&list=PLUl4u3cNGP62WVs95MNq3dQBqY2vGOtQ2&index=170)
- MIT Notes: [Intro](https://ocw.mit.edu/courses/6-004-computation-structures-spring-2017/pages/c14/c14s1/#17), [MESI](https://ocw.mit.edu/courses/6-004-computation-structures-spring-2017/pages/c21/c21s1/#18)

### Systems Programming

- CS 361 by Prof. Chris Kanich: [YT Link](https://www.youtube.com/playlist?list=PLhy9gU5W1fvUND_5mdpbNVHC1WCIaABbP)
- Some Latency Numbers: [Notes](notes/latency_numbers.md)
- To be added

### OS

- Atomic Instructions: [Notes](notes/atomic_instructions.md)
- Memory Reordering: [Notes](notes/memory_reordering.md)
- Padding and Packing (Aligned Memory Access): [Notes](notes/padding_packing.md)
- Buffer Overflow Attacks: [Notes](notes/buffer_overflow.md)
- OS Boot Process: [Notes](notes/os_booting.md)
- From a Program to Process: [Notes](notes/program_to_process.md)
- Stack Memory Management: [Link](https://organicprogrammer.com/2020/08/19/stack-frame/)
- To be added

### Networking

- From NIC to User Processes: [Notes](notes/packet_handling.md)
- How The Kernel Handles A TCP Connection: [Notes](notes/linux_tcp.md)

### Some Interesting YT Videos

- Revise OS Memory Management: [Link](https://www.youtube.com/watch?v=7aONIVSXiJ8&t=497s)
- Why Composition is better than Inheritance: [Link](https://www.youtube.com/watch?v=tXFqS31ZOFM&list=PLE28375D4AC946CC3&index=24)
- `mmap` in database system: [Link](https://www.youtube.com/watch?v=1BRGU_AS25c)
- How processes get more memory: [Link](https://www.youtube.com/watch?v=XV5sRaSVtXQ)
- `mmap` for File Mapping: [Link](https://www.youtube.com/watch?v=m7E9piHcfr4)
- `mmap` for IPC: [Link](https://www.youtube.com/watch?v=rPV6b8BUwxM)
- From Silicon to Applications: [Link](https://youtu.be/5f3NJnvnk7k?si=zVW5JZbXZz8X74XI)

## C++

### General Concepts

- Move Semantics: [Notes](notes/move_semantics.md)
- Casting: [Notes](notes/casting.md)
- Const, const_cast<>, constexpr, consteval: [Notes](notes/const_constexpr.md)
- C++ Coding Practices: [Link](https://micro-os-plus.github.io/develop/sutter-101/)
- Lambdas: [Notes](notes/lambdas.md)
- Smart Pointers: [Notes](notes/smart_pointers.md)
- Classes and RAII: [Notes](notes/RAII.md)
- Templates: [Notes](notes/templates.md)
- Virtual Functions and vTables: [Notes](notes/virtual_functions.md)
- HTTP using libcurl: [Notes](notes/http.md)
- More C++ Study Notes: [Link](https://encelo.github.io/notes.html)
- Allocators: [Notes](notes/allocators.md)
- Placement New: [Notes](notes/placement_new.md)
- Exceptions (throw, try/catch): [Notes](notes/exceptions.md)
- `i++` vs `++i` overloading: [Notes](notes/pre-post-increment.md)
- CRTP: [Notes](notes/CRTP.md)
- To be added

### C++ Performance

- Keep in Mind: [Notes](notes/performance.md)
- Return Value Optimisation: [Notes](notes/rvo.md)
- Template Metaprogramming: [Notes](notes/metaprogramming.md)
- Set vs PQ: [Notes](notes/set_pq.md)
- Function Inlining: [Notes](notes/function_inlining.md)

### C++ Implementation

- Unique Pointer: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/unique_pointer.hpp)
- Shared Pointer: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/shared_pointer.hpp)
- Thread Pool: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/thread_pool.hpp)
- HashMap: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/hashmap.hpp)
- LRU Cache: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/LRU_cache.hpp)
- Scope Timer: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/timer.hpp)
- Memory Pool: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/memory_pool.hpp)

## Development

### Work Environment

- Get started with Vim/Nvim (Helix actually): [Notes](notes/vim.md)
- Git command sheet: [Notes](notes/git-sheet.md)
- Tmux cheat sheet: [Notes](notes/tmux.md)

### Docker

- Basic Introduction by Learn Linux TV: [YT Link](https://www.youtube.com/playlist?list=PLT98CRl2KxKECHltRib03tG8pyKEzwf9t)

### CLI utilities

- Cheat.sh: [Notes](/notes/cheat.md)
- Find command linux: [Notes](/notes/find.md)


================================================
FILE: notes/CRTP.md
================================================
## Static Polymorphism Using CRTP

Static polymorphism is a type of polymorphism that is resolved at 
compile time. It is primarily achieved through templates and CRTP 
(Curiously Recurring Template Pattern) in C++. In contrast to 
dynamic polymorphism (which uses virtual functions and 
inheritance), static polymorphism doesn’t rely on runtime checks 
like vtables but instead utilizes compile-time mechanisms.

```cpp
#include <iostream>

// Base class template that uses CRTP
template <typename Derived>
class Shape {
public:
    // Static polymorphism: calls the derived class's draw() method, resolved at compile time
    void draw() const {
        // Calling the derived class's draw() method
        static_cast<const Derived*>(this)->draw();
    }

    // Common interface for all derived classes
    double getArea() const {
        // Calling the derived class's area() method using CRTP
        return static_cast<const Derived*>(this)->area();
    }
};

// Derived class for Circle
class Circle : public Shape<Circle> {
public:
    Circle(double radius) : radius_(radius) {}

    // Method specific to Circle
    void draw() const {
        std::cout << "Drawing Circle with radius: " << radius_ << std::endl;
    }

    // Implementation of area() for Circle
    double area() const {
        return 3.14159 * radius_ * radius_;
    }

private:
    double radius_;
};

// Derived class for Square
class Square : public Shape<Square> {
public:
    Square(double side) : side_(side) {}

    // Method specific to Square
    void draw() const {
        std::cout << "Drawing Square with side: " << side_ << std::endl;
    }

    // Implementation of area() for Square
    double area() const {
        return side_ * side_;
    }

private:
    double side_;
};

int main() {
    Circle circle(5.0);
    Square square(4.0);

    // Static polymorphism: Compile-time resolution
    circle.draw();  // Calls Circle's draw()
    square.draw();  // Calls Square's draw()

    std::cout << "Circle Area: " << circle.getArea() << std::endl;
    std::cout << "Square Area: " << square.getArea() << std::endl;

    return 0;
}
```
> The Shape class in this static polymorphism example acts as an 
interface-like structure, but at compile-time. Unlike traditional 
dynamic polymorphism (with a virtual base class), Shape is not used 
for runtime polymorphism or to manage different types via base 
class pointers. Instead, it ensures that each derived class (like 
Circle or Square) implements certain methods such as draw() and area().

> It provides a common template for other shapes (like Circle or Square) to 
follow, forcing them to implement specific methods (draw() and area()).





================================================
FILE: notes/RAII.md
================================================
## More on Classes

The compiler generates the default functions (constructor, copy constructor, copy
assignment) `only if you have not declared any variant of them yourself`.

### Disallowing Functions
`f() = delete` : It will prevent the compiler from generating it.

### Private Destructor
A private destructor means the object cannot live on the stack: when the stack
unwinds, the destructors of local objects are called, but this destructor is
private.

Such an object can only be destroyed through a member function (e.g. a factory's
destroy method) or a friend function.
*Yes, friends are worse than enemies*.

```cpp
class MyClass {
private:
    ~MyClass() {
        std::cout << "Private Destructor Called" << std::endl;
    }
};

int main() {
    MyClass obj;  // Error: Destructor is private, stack object can't be destroyed
    MyClass* ptr = new MyClass(); // Yes can be created like this
    delete ptr;   // Error: Destructor is private, can't delete heap object
}
```

**How to destroy it:**

1. Private Constructor
```cpp
class HeapOnly {
public:
    static HeapOnly* createInstance() {
        return new HeapOnly();
    }

    void destroyInstance() {
        delete this;  // Allows deletion, but only through this function
    }

private:
    HeapOnly() { std::cout << "HeapOnly Constructor" << std::endl; }
    ~HeapOnly() { std::cout << "HeapOnly Destructor" << std::endl; }
};

int main() {
    // HeapOnly obj;  // Error: Constructor is private (can't allocate on stack)
    HeapOnly* obj = HeapOnly::createInstance();
    obj->destroyInstance();  // Properly deletes the object
}
```

2. Public Constructor:

```cpp
#include <iostream>

class MyClass {
public:
    // Public constructor
    MyClass() {
        std::cout << "Constructor: MyClass object created!" << std::endl;
    }

    // Method to safely delete the object
    void destroyInstance() {
        delete this;  // Allows controlled deletion of the object
    }

private:
    // Private destructor
    ~MyClass() {
        std::cout << "Destructor: MyClass object destroyed!" << std::endl;
    }
};

int main() {
    // MyClass obj; // ERROR: destructor is private
    // Creating the object dynamically on the heap
    MyClass* obj = new MyClass();

    // Deleting the object through the controlled method
    obj->destroyInstance();

    // obj->~MyClass();    // ERROR: Destructor is private and cannot be called directly
    // delete obj;         // ERROR: Cannot delete directly, destructor is private

    return 0;
}
```


## RAII: Resource Acquisition is Initialisation

C++ program can have different type of resources:
- Allocated memory on heap
- FILE handles (fopen, fclose)
- Mutex Locks
- C++ threads

Some of these resources are `unique`, like a mutex lock, while others can be
duplicated, like heap allocations and file handles (they can be `duped`).
> The program must take some action in order to free these resources.

Do the cleanup in the destructor of the owning object. Since the destructor is
always called when the object goes out of scope, we don't need to release
resources explicitly.

```cpp
class NaiveVector {
    int* arr;
    size_t size;

    // assume the destructor releases `arr`
};

{
    NaiveVector v;
    v.push_back(1);
    {
        NaiveVector w = v;  // this would also copy the pointer int* arr
    } // here int * arr would be released since w is now out of scope

    std::cout << v[0] << '\n';  // this is invalid now. since arr is deleted

}  // double delete here. arr is already freed, we will free it again.

// The problem above: `NaiveVector w = v` copies every member variable verbatim
// (including the pointer) when we don't define a custom copy constructor.
```
#### Adding copy constructor
The destructor was responsible for freeing resources to avoid any leaks. The
copy constructor is responsible for duplicating resources to avoid double frees.

![](../assets/CC.png)

**Initialisation vs Assignment**
```cpp
// 1. This is initialisation (construction). Calls copy constructor
NaiveVector w = v; 

// 2. This is assignment to existing object w. Calls assignment operator
NaiveVector w;
w = v;
```

![](../assets/RAII.png)

In C++, the handling of `try-catch` blocks during an exception involves manipulating the **call stack**. Here’s how it works step by step:

### 1. **Normal Execution and Call Stack Behavior**
- Under normal execution, each function call pushes a new stack frame onto the call stack.
- This stack frame holds local variables, return addresses, and other function context.
- When a function completes, its stack frame is popped off, and control returns to the calling function.

### 2. **When an Exception is Thrown**
When an exception is thrown inside a `try` block:
- The program immediately **stops executing** the normal flow of code and begins **unwinding the call stack**.
- This process is known as **stack unwinding**.

### 3. **Stack Unwinding**
During stack unwinding:
- The function that threw the exception doesn’t return normally. Instead, the runtime looks for a `catch` block that can handle the exception.
- As the runtime searches for the appropriate `catch`, it starts **popping stack frames** off the call stack, effectively **exiting functions** in reverse order until a suitable handler is found.
- If any objects are going out of scope as part of this unwinding (i.e., objects with automatic storage duration in the stack frames), their destructors are called to properly clean up resources. This ensures that **RAII** (Resource Acquisition Is Initialization) is respected, and resources such as memory or file handles are properly released.

### 4. **Finding the Appropriate `catch` Block**
- The runtime checks each function in the call stack, starting with the function where the exception was thrown, to see if there is a `catch` block that matches the exception type.
- If a matching `catch` block is found, control is transferred to it, and the stack unwinding stops.
- If no matching `catch` block is found in the current function, the stack unwinding continues to the next function in the call stack.

### 5. **Uncaught Exceptions**
- If the runtime unwinds all the way through the call stack without finding a matching `catch` block, the program terminates.
- In this case, the runtime will call `std::terminate`, which by default ends the program, often producing an error message like "terminate called after throwing an instance of...".

### Example:

```cpp
#include <iostream>
#include <stdexcept>

void funcC() {
    std::cout << "In funcC\n";
    throw std::runtime_error("Exception in funcC");
}

void funcB() {
    std::cout << "In funcB\n";
    funcC();  // Call to funcC, which will throw an exception
    std::cout << "In funcB after exception\n";  // won't be printed
}

void funcA() {
    std::cout << "In funcA\n";
    try {
        funcB();  // Call to funcB, which will call funcC and eventually throw an exception
        std::cout << "In func A return\n";  // won't be printed
    } catch (const std::exception& e) {
        std::cout << "Caught exception: " << e.what() << '\n';
    }
    std::cout << "Handling Done\n";
}

int main() {
    funcA();  // Start the chain of function calls
    return 0;
}
```

### Output:
```plaintext
In funcA
In funcB
In funcC
Caught exception: Exception in funcC
Handling Done
```

### What Happens in the Call Stack:
1. **`main`** calls **`funcA`**, which adds a stack frame for `funcA` to the call stack.
2. **`funcA`** calls **`funcB`**, which adds another stack frame for `funcB` to the call stack.
3. **`funcB`** calls **`funcC`**, which adds yet another stack frame for `funcC` to the call stack.
4. **`funcC`** throws a `std::runtime_error`. The runtime starts stack unwinding.
   - The stack frame for `funcC` is popped off the stack, and the destructors of any local variables in `funcC` are called.
5. **`funcB`** doesn’t have a `catch` block, so its stack frame is also popped off the stack, and local objects (if any) are destroyed.
6. Control reaches **`funcA`**, which has a matching `catch` block for `std::exception`. The exception is caught, and stack unwinding stops.
7. The program continues execution in the `catch` block of `funcA`.


- **RAII**: Objects are properly destroyed even during stack unwinding, as destructors are automatically invoked.

- If a matching `catch` block is found, the exception is handled; otherwise, the program terminates.


### The Rule of Zero
If your class does not directly manage any resource, but merely uses library
components such as vector and string, then write NO special member functions.

Let the compiler generate all of them by default:
- Destructor
- Copy constructor
- Copy assignment operator
- Move constructor
- Move assignment operator

```cpp
#include <iostream>
#include <cstring>

class MyString {
private:
    char* data; // Dynamically allocated memory to hold a string
public:
    // 1. Default Constructor
    MyString(const char* str = "") {
        data = new char[std::strlen(str) + 1];
        std::strcpy(data, str);
        std::cout << "Constructor called\n";
    }

    // 2. Destructor
    ~MyString() {
        delete[] data;
        std::cout << "Destructor called\n";
    }

    // 3. Copy Constructor
    MyString(const MyString& other) {
        data = new char[std::strlen(other.data) + 1];
        std::strcpy(data, other.data);
        std::cout << "Copy Constructor called\n";
    }

    // 4. Copy Assignment Operator
    MyString& operator=(const MyString& other) {
        if (this == &other) return *this; // Self-assignment check

        delete[] data; // Release old memory
        data = new char[std::strlen(other.data) + 1]; // Allocate new memory
        std::strcpy(data, other.data); // Copy the data
        std::cout << "Copy Assignment Operator called\n";
        return *this;
    }

    // 5. Move Constructor
    MyString(MyString&& other) noexcept : data(other.data) {
        other.data = nullptr; // Release ownership of the moved-from object
        std::cout << "Move Constructor called\n";
    }

    // 6. Move Assignment Operator
    MyString& operator=(MyString&& other) noexcept {
        if (this == &other) return *this; // Self-assignment check

        delete[] data; // Release old memory
        data = other.data; // Steal the data pointer
        other.data = nullptr; // Release ownership of the moved-from object
        std::cout << "Move Assignment Operator called\n";
        return *this;
    }

    // Helper method to print the string
    void print() const {
        std::cout << "String: " << (data ? data : "null") << '\n';
    }
};

int main() {
    MyString s1("Hello");
    MyString s2 = s1; // Invokes Copy Constructor
    MyString s3;
    s3 = s1; // Invokes Copy Assignment Operator

    MyString s4 = std::move(s1); // Invokes Move Constructor
    MyString s5;
    s5 = std::move(s2); // Invokes Move Assignment Operator

    s4.print();
    s5.print();

    return 0;
}
```

================================================
FILE: notes/allocators.md
================================================
## An Allocator is a Handle to a Heap

CppNow Link: [Here](https://www.youtube.com/watch?v=0MdSJsCTRkY)

(Wrong) An allocator object represents a source of memory.

(Correct) An allocator represents a handle to the source of memory.

*Incomplete, haven't covered fully*

================================================
FILE: notes/atomic_instructions.md
================================================
## Atomics

To implement locks, we need hardware support in the form of atomic instructions;
this cannot be done by software alone.

### CMPXCHG (Compare-And-Exchange/Swap)
Atomically compares the value in memory to a register. If the values are equal, 
it writes a new value; otherwise, it leaves the memory unchanged and sets flags
indicating failure.

`CMPXCHG [mem], reg`

```cpp
// Pseudocode for the semantics; the hardware executes this atomically.
bool CAS(int* addr, int expected, int new_val) {
    if (*addr == expected) {
        *addr = new_val; // update the memory with new_val
        return true;     // return success
    } else {
        return false;    // return failure
    }
}
```

### XCHG (Exchange)
Atomically swaps the contents of a register and a memory location.

`XCHG reg, [mem]`

### LOCK Prefix
In x86, certain instructions can carry the LOCK prefix to make them atomic,
meaning the instruction operates atomically on the memory location.

Applies to instructions like ADD, SUB, INC, DEC, XOR, OR, AND, etc. 
These operations can then modify memory atomically.

`LOCK ADD [mem], reg`

### Simple spinlock using CAS
```cpp
typedef int spinlock_t; // 0 = unlocked, 1 = locked

void spinlock_init(spinlock_t* lock) {
    *lock = 0; // Initialize the lock as unlocked
}

void spinlock_acquire(spinlock_t* lock) {
    while (!CAS(lock, 0, 1)) {
        // Spin until we successfully acquire the lock
    }
}

void spinlock_release(spinlock_t* lock) {
    *lock = 0; // Release the lock by setting it to unlocked
}
```
```assembly
spinlock_acquire:
    mov eax, 0          ; expected value (unlocked)
acquire_retry:
    mov ebx, 1          ; new value (locked)
    lock cmpxchg [lock], ebx ; compare and swap
    jne acquire_retry   ; if lock was not acquired, retry
    ret

spinlock_release:
    mov [lock], 0       ; set lock to unlocked
    ret
```

More on different types of spinlocks can be found 
[here](https://github.com/Shivam5022/Spin-Locks-and-Contention) in my second 
assignment of COL818.

### How hardware supports atomic instructions:

<mark>The simplest CAS implementations (and the easiest mental model) will simply freeze the local cache coherence protocol state machine after the load part of the CAS brings the relevant cache line into the nearest (e.g. L1) cache in exclusive mode, and will unfreeze it after the (optional) store completes. This, by definition, makes the CAS operation as a whole atomic with relation to any other participant in the cache coherence protocol. </mark>



================================================
FILE: notes/buffer_overflow.md
================================================
## Buffer Overflow

[Video 1](https://www.youtube.com/watch?v=scaz_pofc7A&list=PLEJxKK7AcSEGPOCFtQTJhOElU44J_JAun&index=33)

[Video 2](https://www.youtube.com/watch?v=o3pcY-bRRgs&list=PLEJxKK7AcSEGPOCFtQTJhOElU44J_JAun&index=34&pp=iAQB)


![](../assets/stack.svg)

[Pdf Notes Here](../assets/buffer_overflow.pdf)


================================================
FILE: notes/casting.md
================================================

### Explicit Keyword

By default, C++ allows implicit conversions for single-argument constructors. 
This means that if you have a constructor with one parameter, the compiler 
will automatically convert objects of that parameter’s type into objects of 
your class if needed.

- The explicit keyword prevents these implicit conversions.

```cpp
#include <iostream>

class MyClass {
public:
    // Constructor with 'explicit'
    explicit MyClass(int x) {
        std::cout << "MyClass constructor called with value: " << x << std::endl;
    }
};

void func(MyClass obj) {
    std::cout << "In func()" << std::endl;
}

int main() {
    // func(5);  // Error: implicit conversion from int to MyClass is not allowed!
    func(MyClass(5));  // This works because we explicitly create a MyClass object
    return 0;
}
```

### Casting

#### Static Casting

`static_cast` is used for compile-time type conversions between compatible 
types. It performs the conversion at compile time, ensuring type safety in 
most cases, *but it does not perform runtime checks (unlike dynamic_cast).*

```cpp
#include <iostream>

int main() {
    // Basic type conversion
    float f = 9.5;
    int i = static_cast<int>(f);  // Converts float to int
    std::cout << "int i: " << i << std::endl;

    // Upcasting (Derived to Base)
    class Base {};
    class Derived : public Base {};
    Derived d;
    Base* basePtr = static_cast<Base*>(&d);  // Safe upcast (Derived* -> Base*)

    return 0;
}
```
*static_cast performs no runtime checks, so downcasting (casting from base to 
derived) is unsafe unless you’re sure of the object type.*

#### Dynamic Casting

`dynamic_cast` is used for runtime type checking and safe downcasting in 
inheritance hierarchies. It is primarily used for casting between base and 
derived class pointers or references when polymorphism is involved (i.e., 
when you have a virtual function in the base class).

```cpp
#include <iostream>

class Base {
public:
    virtual ~Base() = default;  // Must have at least one virtual function
};

class Derived : public Base {};

int main() {
    Base* basePtr = new Derived();  // Pointer to Base, but actually a Derived object

    // Safe downcast: checks at runtime if basePtr actually points to a Derived object
    Derived* derivedPtr = dynamic_cast<Derived*>(basePtr);
    if (derivedPtr) {
        std::cout << "Successfully casted to Derived" << std::endl;
    } else {
        std::cout << "Failed to cast to Derived" << std::endl;
    }

    delete basePtr;
    return 0;
}
```

*dynamic_cast only works with pointers or references to polymorphic types (classes with at least one virtual function).*

#### Re-interpret Casting

`reinterpret_cast` is the most dangerous cast, used for low-level type 
reinterpretation. It allows you to treat a block of memory as if it were a 
different type entirely. This is often used for pointer conversions or 
type-punning.

This example shows how to use reinterpret_cast to interpret a 32-bit integer 
(std::uint32_t) as an array of bytes. This kind of operation can be useful in 
networking or binary file I/O, where you need to break a larger value into 
its individual bytes (little-endian or big-endian conversion).

```cpp
#include <iostream>
#include <cstdint>  // For uint32_t

void printBytes(const std::uint8_t* byteArray, std::size_t size) {
    for (std::size_t i = 0; i < size; ++i) {
        std::cout << "Byte " << i << ": 0x" << std::hex << static_cast<int>(byteArray[i]) << std::endl;
    }
}

int main() {
    std::uint32_t value = 0x12345678;  // A 32-bit integer (hexadecimal representation)
    
    // Reinterpret the 32-bit integer as a byte array
    const std::uint8_t* byteArray = reinterpret_cast<const std::uint8_t*>(&value);
    
    // Print the individual bytes
    std::cout << "Value as bytes:" << std::endl;
    printBytes(byteArray, sizeof(value));  // Should print the 4 bytes of the integer

    return 0;
}
```
**Output in Little Endian Machine:**
```
Value as bytes:
Byte 0: 0x78
Byte 1: 0x56
Byte 2: 0x34
Byte 3: 0x12
```

#### Const Casting
Refer [these](./const_constexpr.md) notes.

================================================
FILE: notes/cheat.md
================================================
## Cheat.sh

You can find important usage info about command line tools like grep, find,
curl etc at [cheat.sh](https://cheat.sh).

Here is a very minimal script for getting documentation about tools from
command line:

```sh
# Function to query cheat.sh and display results with less
cheat() {
    if [ -z "$1" ]; then
        echo "Usage: cheat <topic>"
        echo "Example: cheat grep"
        return 1
    fi

    local topic="$1"

    # Fetch the cheat sheet from cheat.sh
    echo "https://cheat.sh/${topic}"
    curl -s "https://cheat.sh/${topic}" | less -R
}
```

Add it in `.zshrc` and use like: `cheat grep`


================================================
FILE: notes/const_constexpr.md
================================================
## Resources:
1. The below notes are from the CppCon 2021 talk by Rainer Grimm: [Link](https://www.youtube.com/watch?v=tA6LbPyYdco)


### const
- Declare a variable const: means you cannot modify it afterwards.

- const objects:

    - must be initialised.
    - cannot be modified.
    - can't be victims of data races, since they are read-only.
    - can only invoke const member functions.

- `const member functions` of a class cannot change the state of the object.
That is, they can't change the values of member variables,
although they can change objects that don't belong to the class.

- use the `mutable` keyword in case you want a member variable to be modifiable inside a `const` member function.

```cpp
int f = 100;
struct Widget {
    int a;
    mutable int c = 0;
    Widget(int init) : a(init) {}
    void test(int& p) const {
        p++; // valid since `p` doesn't belong to this class
        f++; // valid since `f` doesn't belong to this class
        c++; // valid since `c` is mutable
        const_cast<Widget*>(this)->c++; // another way to increment c, without mutable
        // a++; // not valid
        std::cout << a << '\n';
    }
};
```

- `const char* const a`: means a is a const pointer to a const char. which means the value of neither the pointer nor the pointee can be altered.


### const_cast
- used to remove `const` or `volatile` from a variable. 

- modifying the value of a `const` object by removing its constness is undefined behaviour.

```cpp
#include <iostream>


int main() {
    const int a = 10;
    int* b = const_cast<int*> (&a);
    *b = 11;
    std::cout << *b << '\n'; // prints 11
    std::cout << a << '\n'; // prints 10

}
```

<mark> Modifying an object declared as const after casting away its 
constness using const_cast results in undefined behavior if the 
object was truly defined as const in its original context. However, 
it’s safe if the object was not originally constant but passed 
around as a const reference or pointer. </mark>


```cpp
#include <iostream>
#include <vector>
#include <algorithm>
#include <cassert>

int main () {
    // here i is implicitly constant
    // hence, using const_cast is undefined
    const int i = 5;
    // int& j = i;  // error
    int& j = const_cast<int&>(i);
    j = 7;
    std::cout << i << '\n'; // prints 5
    std::cout << j << '\n'; // prints 7

    // int* ptr = &i; // error
    int* ptr = const_cast<int*> (&i);
    *ptr = 10;
    std::cout << i << '\n'; // still prints 5
    std::cout << *ptr << '\n'; // prints 10

    // although same address :)
    std::cout << ptr << ' ' << &i << '\n';


    // safe usage

    int original = 11; // not inherently constant

    auto change = [](const int& ref) {
        // ref = 15;  // error
        int& nonconst = const_cast<int&> (ref);
        nonconst = 15; // this is applicable
    };

    change(original);

    std::cout << original << '\n'; // prints 15 now :)

    // Modifying an object declared as const after casting 
    // away its const-ness using const_cast results in 
    // undefined behavior if the object was truly defined 
    // as const in its original context. 
    // However, it’s safe if the object was not originally 
    // constant but passed around as a const reference 
    // or pointer.

}
```

### constexpr
- These expressions:
    - can be evaluated at compile time (a good optimisation).
    - are thread safe.
- `const` variables are implicitly `constexpr` when initialised with a constant expression.
- `constexpr` functions have the *potential* to run at compile time (not a guarantee).

```cpp
#include <iostream>

// this gcd function can be evaluated at compile time,
// e.g. when its result initialises a constexpr variable
constexpr int gcd(int a, int b) {
    return (b == 0) ? a : gcd(b, a % b);
}

int main() {
    // all the arguments must be known at compile time
    constexpr int result = gcd(48, 18);  // Compile-time GCD calculation
    std::cout << "GCD of 48 and 18 is: " << result << std::endl;
    return 0;
}
```
- C++20 supports `constexpr` containers: `std::vector` and `std::string`.
    - The memory is allocated and released during compile-time evaluation (transient allocation); such a container cannot persist to runtime.
    -   ```cpp
        #include <iostream>
        #include <vector>
        #include <algorithm>

        constexpr int maxElement() {
            std::vector<int> a {1, 22, 333, 44, 55};
            a.push_back(412);
            std::sort(a.begin(), a.end());
            return a.back();
        }

        int main() {
            // compile time (check godbolt assembly with --std=c++20)
            constexpr int m = maxElement();
            std::cout << m << '\n';
        }
        ```
    
### consteval
- must run at compile time (*strong guarantee*)

```cpp
#include <iostream>
#include <vector>
#include <algorithm>

int runTime(int a) {
    return a + 1;
}

constexpr int runOrCompileTime(int a) {
    return a + 1;
}

consteval int compileTime(int a) {
    return a + 1;
}

int main() {
    // constexpr int ans1 = runTime(100); // ERROR
    constexpr int ans2 = runOrCompileTime(100);
    constexpr int ans3 = compileTime(100);

    int f = 100;
    int ans4 = runOrCompileTime(f); // Fine: Because it can be evaluated at runtime too!
    // int ans5 = compileTime(f); // ERROR: consteval must be evaluated at compile time, but `f' is not const
    // making `f' const solves the above problem
}
```



================================================
FILE: notes/cpp_question_bank.md
================================================
## C++ Questions for HFT SWE


================================================
FILE: notes/exceptions.md
================================================
## Exceptions in C++

- Ignoring an exception (leaving it uncaught) calls `std::terminate`, which aborts the program (typically with a core dump)
- What is exception: 
    - Something that gets thrown
    - Something that gets caught (hopefully)
- Throwing and catching exceptions is `expensive`.

- If an exception is thrown:
    - It propagates up the stack until it is caught by a matching handler
    - During this stack unwinding, all intermediate objects on the stack are destructed
    - If no matching handler is found, the function `std::terminate` is called
    - If a destructor throws during stack unwinding, the program is terminated

- Destructor throwing an exception:
    - Bad situation
    - If an exception is thrown while another exception is already being 
    handled, this causes terminate to be called, which ends the program 
    abruptly. This occurs because C++ cannot handle two simultaneous exceptions.
	- For example, if an exception is thrown and the stack unwinds to destroy 
    objects in scope, any destructor that throws an exception will clash with 
    the already active exception.

- Handle the exception there itself (for Destructors)
    ```cpp
    struct ResourceHandler {
        ~ResourceHandler() {
            try {
                // Code that may throw an exception
                cleanup();
            } catch (const std::exception& e) {
                // Handle or log the exception, but do not rethrow
                std::cerr << "Exception in destructor: " << e.what() << std::endl;
            } catch (...) {
                std::cerr << "Unknown exception in destructor" << std::endl;
            }
        }
        
        void cleanup() {
            // Code that might throw an exception
        }
    };
    ```

- Exception Hygiene
    - Throw by value [*the exception object is copied into
      implementation-managed storage, often on the heap, which is expensive*]
    - Catch by (const) reference


**Custom Exception in C++:**

```cpp
#include <iostream>
#include <exception>
#include <string>

// Step 1: Define the custom exception class
class MyCustomException : public std::exception {
private:
    std::string message;  // Custom error message

public:
    // Constructor to initialize the error message
    MyCustomException(const std::string& msg) : message(msg) {}

    // Override the what() function
    virtual const char* what() const noexcept override {
        return message.c_str();
    }
};

// Function that may throw MyCustomException
void riskyFunction(bool triggerError) {
    if (triggerError) {
        throw MyCustomException("Something went wrong in riskyFunction!");
    }
    std::cout << "Function executed successfully." << std::endl;
}

int main() {
    try {
        // Step 2: Call the function and trigger the custom exception
        riskyFunction(true);  // Pass true to trigger the exception
    }
    catch (const MyCustomException& e) {  // Step 3: Catch the custom exception
        std::cerr << "Caught custom exception: " << e.what() << std::endl;
    }
    return 0;
}
```
**Re-throwing:**

Inside a catch block, using throw; without any argument will 
rethrow the currently caught exception. This is often done to 
perform some actions (like logging) and then pass the 
exception up the stack without changing it.

```cpp
#include <iostream>
#include <exception>

void innerFunction() {
    throw std::runtime_error("Error in innerFunction");  // Throwing an exception
}

void outerFunction() {
    try {
        innerFunction();
    }
    catch (const std::exception& e) {
        std::cerr << "Caught in outerFunction: " << e.what() << std::endl;
        throw;  // Rethrow the same exception to propagate it further
    }
}

int main() {
    try {
        outerFunction();
    }
    catch (const std::exception& e) {
        std::cerr << "Caught in main: " << e.what() << std::endl;
    }
    return 0;
}
```

> Keep exceptions as rare as you can: reserve them for
serious, uncommon errors

> Resource management should always use RAII


================================================
FILE: notes/find.md
================================================
## Find command in linux

- search for files and directories (recursively) based on various criteria

- `find [path] [expression]` 
  
  - [path]:  The directory where the search will be started
  
  - [expression]: Filter criteria
  
  - when using `-name` or `-iname` as criteria, `*` is supported as a wildcard

- `find /home/user -name "example.txt"`: find a file with this name in the /home/user directory

- `find /home/user -iname "example.txt"`: case insensitive search `iname`

- `find /path -type f`: find only files not directories

- `find /path -type d`: find only directories

- `find /path -type l`: find symbolic links

- `find / -size +100M`: find files greater than 100MB

- `find / -size -5k`: find files smaller than 5KB

- `find /home -type f -perm 644`: find files with permission mode

- `find /home -type f -mtime -7`: find files modified in last 7 days

- `find /home -type f -atime -2`: find files accessed in last 2 days

- `find /path -type f -name "*.log" -exec rm {} +`
  
  - `-exec`: executes the given command on the matched files
  
  - `{}`: placeholder replaced by the matched file names
  
  - `+`: terminates the command; matched files are passed in batches (use `\;` to run the command once per file)

- `find /path -type f -empty`: find empty files (use d for directories)

- `find /home -type f -name "*.log" -exec grep -i "error" {} +`: searching inside files (find all the log files which contain "error") 

- `find /var/log -type f -name "*.log" | xargs rm`: xargs takes a list and passes each element as arguments to another command


================================================
FILE: notes/function_inlining.md
================================================
## Function Inlining

C++ FAQs: [Link](https://isocpp.org/wiki/faq/inline-functions)

Assuming that we already know about the ODR (One Definition Rule) and how
marking a function `inline` helps with it, let's discuss function inlining from
a performance POV.

### Why Inlining

When the compiler inline-expands a function call, the function’s code gets 
inserted into the caller’s code stream.

When a program makes a function call, the instruction pointer (IP) jumps to a 
different memory address, executes the instructions at that location, and then 
jumps back to the original location.

This jumping to a new address can be inefficient because the next instruction 
to be executed may not be cached in the L1-I cache.

If the function is small, it often makes more sense for it to be inlined in the 
caller’s code stream. In such cases, there is no jump to an arbitrary location, 
and the L1-I cache remains warm.

Additionally, compilers are generally better suited to apply optimizations when 
the code is inlined, compared to optimizing across multiple distinct functions.


### Why not always inline
Inlining all function calls can lead to code bloat, increasing the size of the 
executable and potentially causing cache thrashing.

Consider a scenario in the hot path: before sending an order to the exchange, we 
perform a sanity check. If there is an error, we call the function logAndDebug, 
which handles some bookkeeping internally. In the typical case (the happy path), 
the order is sent to the exchange.

```cpp
bool isError = checkOrder(order);

if (isError) {
    logAndDebug(order);
} else {
    sendOrderToExchange(order);
}

```

Here, `isError` is rarely true, and the happy path is executed most of the time.

If the function `logAndDebug` were inlined, unnecessary instructions—executed only 
in rare cases—would occupy space in the instruction cache, potentially polluting 
it. This could slow down the program instead of improving performance.

================================================
FILE: notes/git-sheet.md
================================================
## Git Command Sheet

- Complete documentation can be found here: [Pro Git Book](https://git-scm.com/book/en/v2)
- Install `lazygit` for nice TUI experience.

## Basics

1. `git clone url [new repo name]`
2. Create an alias: `git config --global alias.glog "log --graph --pretty=format:'%C(yellow)%h%C(reset) - %C(cyan)%an%C(reset) - %C(blue)%ad%C(reset) - %s' --date=short"`.

   Use `git glog` now

3. To check the remote servers (eg GitHub): `git remote -v`.

   If you clone a repository, the command automatically adds that remote repository under the name “origin”. If your current branch is set up to track a remote branch, you can use the `git pull` command to automatically fetch and then merge that remote branch into your current branch. By default, the `git clone` command automatically sets up your local master branch to track the remote master branch (or whatever the default branch is called) on the server you cloned from. Running `git pull` generally fetches data from the server you originally cloned from and automatically tries to merge it into the code you’re currently working on.

4. Pushing branch to remote: `git push <remote> <branch>`

5. Inspecting a remote in detail: `git remote show <remote>` (e.g. `git remote show origin`)

## Branches

1. Create a new branch and checkout to it: `git checkout -b <newbranchname>`

2. Merge a branch to master: `git checkout master && git merge <branch to be merged>`

3. See all branches: `git branch --all -vv`

4. Pushing to new remote branch name: you could run `git push origin serverfix:awesomebranch` to push your local serverfix branch to the awesomebranch branch on the remote project.

5. `git checkout -b <branch> <remote>/<branch>` : Create a local tracking branch for a remote branch

6. For pt 5, shortcut is `git checkout <branch>` (If the branch name you’re trying to checkout (a) doesn’t exist and (b) exactly matches a name on only one remote, Git will create a tracking branch for you)

## Stashing

1. `git stash`

2. `git stash list`

3. `git stash apply / pop`


================================================
FILE: notes/http.md
================================================
## HTTP
- Hyper Text Transfer Protocol
- Communication between web servers and clients
- HTTP Requests / Response
- It's stateless (each request is independent)

`GET`
Retrieves data from the server

`POST`
Submit data to the server

`PUT`
Update data already on the server

`DELETE`
Deletes data from the server

```
200 - OK
201 - OK created
301 - Moved to new URL
304 - Not modified (Cached version)
400 - Bad request
401 - Unauthorized
404 - Not found
500 - Internal server error
```

### HTTP in C++ using libcurl

For `json` parsing, I have used this amazing library [nlohmann](https://github.com/nlohmann/json).

```cpp
#include <iostream>
#include <curl/curl.h>
#include "json.hpp"
#include <fstream>

using json = nlohmann::json;
```
This callback is called every time libcurl receives a chunk of the response. We
pass it a pointer to a user string and append each chunk to that string.
```cpp
// Helper function to capture server responses into a string
static size_t WriteCallback(void* contents, size_t size, size_t nmemb, std::string* out) {
    size_t totalSize = size * nmemb;
    out->append((char*)contents, totalSize);
    return totalSize;
}
```
Sending a `GET` request (it carries no request body).
We store the response in a JSON object:
```cpp
json httpGet(const std::string& url) {
    CURL* curl;
    CURLcode res;
    std::string readBuffer;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if(res != CURLE_OK) {
            std::cerr << "GET request failed: " << curl_easy_strerror(res) << std::endl;
        }
    }
    // Do this only when the response type is JSON
    return json::parse(readBuffer);
}
```

Headers are appended like this (free the list with `curl_slist_free_all(headers)` once the request is done):
```cpp
struct curl_slist* headers = NULL;
headers = curl_slist_append(headers, "Content-Type: application/json");
// headers = curl_slist_append(headers, "<SOME MORE HEADER>");
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
```

Sending a `POST` request. It takes the data we want to post as
argument in JSON format.
Inside the function, we dump this JSON object into a c-style string.
```cpp
json httpPost(const std::string& url, const json& data) {
    CURL* curl;
    CURLcode res;
    std::string readBuffer;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_POST, 1L);

        // JSON data
        std::string jsonData = data.dump();
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());

        struct curl_slist* headers = NULL;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

        res = curl_easy_perform(curl);

        if(res != CURLE_OK) {
            std::cerr << "POST request failed: " << curl_easy_strerror(res) << std::endl;
        }
        long response_code;
        // query this before curl_easy_cleanup invalidates the handle
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &response_code);
        std::cout << "Got response code: " << response_code << std::endl;

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
    }

    return json::parse(readBuffer);
}
```

Function to send PUT request:
```cpp
json httpPut(const std::string& url, const json& data) {
    CURL* curl;
    CURLcode res;
    std::string readBuffer;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");

        std::string jsonData = data.dump();
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());

        struct curl_slist* headers = NULL;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if(res != CURLE_OK) {
            std::cerr << "PUT request failed: " << curl_easy_strerror(res) << std::endl;
        }
    }

    return json::parse(readBuffer);
}
```

Function to send PATCH request:
```cpp
json httpPatch(const std::string& url, const json& data) {
    CURL* curl;
    CURLcode res;
    std::string readBuffer;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PATCH");

        std::string jsonData = data.dump();
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());

        struct curl_slist* headers = NULL;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if(res != CURLE_OK) {
            std::cerr << "PATCH request failed: " << curl_easy_strerror(res) << std::endl;
        }
    }

    return json::parse(readBuffer);
}
```
Function to send DELETE request:
```cpp
void httpDelete(const std::string& url) {
    CURL* curl;
    CURLcode res;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELETE");

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if(res != CURLE_OK) {
            std::cerr << "DELETE request failed: " << curl_easy_strerror(res) << std::endl;
        }
    }
}
```
#### Putting everything together:

```cpp
#include <iostream>
#include <curl/curl.h>
#include "json.hpp"
#include <fstream>

using json = nlohmann::json;

// Helper function to capture server responses into a string
static size_t WriteCallback(void* contents, size_t size, size_t nmemb, std::string* out) {
    size_t totalSize = size * nmemb;
    out->append((char*)contents, totalSize);
    return totalSize;
}

// Function to send GET request
json httpGet(const std::string& url) {
    CURL* curl;
    CURLcode res;
    std::string readBuffer;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if(res != CURLE_OK) {
            std::cerr << "GET request failed: " << curl_easy_strerror(res) << std::endl;
        }
    }

    return json::parse(readBuffer);
}

// Function to send POST request
json httpPost(const std::string& url, const json& data) {
    CURL* curl;
    CURLcode res;
    std::string readBuffer;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_POST, 1L);

        // JSON data
        std::string jsonData = data.dump();
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());

        struct curl_slist* headers = NULL;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if(res != CURLE_OK) {
            std::cerr << "POST request failed: " << curl_easy_strerror(res) << std::endl;
        }
    }

    return json::parse(readBuffer);
}

// Function to send PUT request
json httpPut(const std::string& url, const json& data) {
    CURL* curl;
    CURLcode res;
    std::string readBuffer;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");

        std::string jsonData = data.dump();
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());

        struct curl_slist* headers = NULL;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if(res != CURLE_OK) {
            std::cerr << "PUT request failed: " << curl_easy_strerror(res) << std::endl;
        }
    }

    return json::parse(readBuffer);
}

// Function to send PATCH request
json httpPatch(const std::string& url, const json& data) {
    CURL* curl;
    CURLcode res;
    std::string readBuffer;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PATCH");

        std::string jsonData = data.dump();
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());

        struct curl_slist* headers = NULL;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if(res != CURLE_OK) {
            std::cerr << "PATCH request failed: " << curl_easy_strerror(res) << std::endl;
        }
    }

    return json::parse(readBuffer);
}

// Function to send DELETE request
void httpDelete(const std::string& url) {
    CURL* curl;
    CURLcode res;

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELETE");

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if(res != CURLE_OK) {
            std::cerr << "DELETE request failed: " << curl_easy_strerror(res) << std::endl;
        }
    }
}

int main() {
    std::string baseUrl = "https://jsonplaceholder.typicode.com/users";

    // 1. GET Request - Retrieve a list of users
    std::cout << "GET Request: Fetching users..." << std::endl;
    json users = httpGet(baseUrl);
    std::ofstream file("../key.json");
    file << users.dump(4);
    std::cout << users[0]["address"].dump(4) << std::endl;  // Pretty print the JSON

    // 2. Modify user data (update name)
    json user = users[0];
    user["name"] = "John Doe Updated";
    
    // 3. POST Request - Send the modified user back to the server
    std::cout << "\nPOST Request: Creating a new user..." << std::endl;
    json newUser = httpPost(baseUrl, user);
    std::cout << newUser.dump(4) << std::endl;
    
    // 4. PUT Request - Update the entire user
    std::string putUrl = baseUrl + "/1";  // Assuming user ID is 1
    std::cout << "\nPUT Request: Updating user 1..." << std::endl;
    json updatedUser = httpPut(putUrl, user);
    std::cout << updatedUser.dump(4) << std::endl;
    
    // 5. PATCH Request - Update a single field of the user
    json patchData;
    patchData["email"] = "updated.email@example.com";
    std::cout << "\nPATCH Request: Updating user's email..." << std::endl;
    json patchedUser = httpPatch(putUrl, patchData);
    std::cout << patchedUser.dump(4) << std::endl;

    // 6. DELETE Request - Remove the user
    std::cout << "\nDELETE Request: Deleting user 1..." << std::endl;
    httpDelete(putUrl);
    std::cout << "User 1 deleted." << std::endl;

    return 0;
}
```

================================================
FILE: notes/lambdas.md
================================================
## Lambdas in C++

```cpp
class Plus {
    int value;
public:
    Plus(int v): value(v) {}
    int operator() (int x) const {
        return x + value;
    }
};

// The above can be achieved with a lambda (lambdas reduce boilerplate):

auto plus = [value = 1] (int x) {
    return x + value;
};

// this plus is basically an object of some class whose name I can't spell.
// it has a data member `value` (captures)
``` 

A `lambda` is an object of some anonymous class, but the `this` keyword inside the
lambda won't work as usual: it doesn't point to the lambda object itself. Rather, `this`
inside a lambda refers to the `this` of the outer context (wherever the lambda is
being declared)!

![](../assets/lambda.png)
Here `kitten` makes a copy when it needs.

`cat` makes a copy when initialised.

> By default, lambdas that capture variables by value are const (meaning you 
cannot modify the captured variables inside the lambda). If you want to modify 
the captured value, you can use the mutable keyword.

Sending lambda as a parameter to the function:
![](../assets/lambda-param.png)

Alternatively we can use `std::function` in the parameter type and
pass a lambda to it.

Copying of lambdas:

![](../assets/copy.png)

================================================
FILE: notes/latency_numbers.md
================================================
## Interesting Latency Numbers

The notes are taken from this [link](https://gist.github.com/hellerbarde/2843375).

### CPU Frequency (Clock Speed):
- The CPU frequency refers to the number of cycles the CPU can execute per second. It’s typically measured in Hertz (Hz), with modern CPUs operating in the range of Gigahertz (GHz) (1 GHz = 1 billion cycles per second).
- Higher frequency generally means more cycles per second, but that doesn’t directly translate to faster execution if other bottlenecks, like memory access or instruction complexity, exist.
- Example: A CPU with a frequency of 3 GHz can execute up to 3 billion cycles per second.

`On Average, 1 Clock Cycle = 0.3 ns`

```cpp
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns             
Compress 1K bytes with Zippy ............. 3,000 ns  =   3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs
SSD random read ........................ 150,000 ns  = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns  = 250 µs
Round trip within same datacenter ...... 500,000 ns  = 0.5 ms
Read 1 MB sequentially from SSD* ..... 1,000,000 ns  =   1 ms
Disk seek ........................... 10,000,000 ns  =  10 ms
Read 1 MB sequentially from disk .... 20,000,000 ns  =  20 ms
Send packet CA->Netherlands->CA .... 150,000,000 ns  = 150 ms
```

### Let's multiply all these durations by a billion:

Magnitudes:

### Minute:
    L1 cache reference                  0.5 s         Blink of eye (0.5 s)
    Branch mispredict                   5 s           Sip of water
    L2 cache reference                  7 s           Long yawn
    Mutex lock/unlock                   25 s          Making a coffee

### Hour:
    Main memory reference               100 s         Brushing your teeth
    Compress 1K bytes with Zippy        50 min        One episode of a TV show (including ad breaks)

### Day:
    Send 2K bytes over 1 Gbps network   5.5 hr        From lunch to end of work day

### Week:
    SSD random read                     1.7 days      A normal weekend
    Read 1 MB sequentially from memory  2.9 days      A long weekend
    Round trip within same datacenter   5.8 days      A medium vacation
    Read 1 MB sequentially from SSD    11.6 days      Waiting for almost 2 weeks for a delivery

### Year:
    Disk seek                           16.5 weeks    A semester in university
    Read 1 MB sequentially from disk    7.8 months    Almost producing a new human being
    The above 2 together                1 year

### Decade:
    Send packet CA->Netherlands->CA     4.8 years     CS5 degree at IITD

================================================
FILE: notes/linux_tcp.md
================================================
# LINUX Networking

## High Level Socket API

1. [CS361 Video](https://youtu.be/XXfdzwEsxFk?si=VGb5lymk8Fkaglqk)
2. [Video on Protocol Stack](https://www.youtube.com/watch?v=3b_TAYtzuho)

`Server`: This is the component which listens for connections.

`Client`: This is the component which initiates a connection, by sending a
connect request to the server.

```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant SocketServer as Server Socket
    participant SocketClient as Client Socket

    Server->>SocketServer: socket()
    Server->>SocketServer: bind()
    Server->>SocketServer: listen()
    Server-->>Client: Waiting for connection
    
    Client->>SocketClient: socket()
    Client->>SocketServer: connect()

    Server->>SocketServer: accept()
    Server-->>Client: Connection accepted
    
    Client->>SocketServer: send("Hello, Server!")
    SocketServer->>Server: recv()
    Server-->>Client: send("Hello, Client!")
    Client->>SocketClient: recv()

    Client->>SocketClient: close()
    Server->>SocketServer: close()
```

### Client Side

`socket`: It just creates a file descriptor, but doesn't do anything more!

`connect`: Takes a file descriptor and server's address & sends a connection
request to the server.

`send & recv`: Given a connected file descriptor, submit bytes to the OS for
delivery and ask OS to deliver the bytes. (Similar to read/write.)
- These functions don't do the actual transfer of bytes; they just ask the
OS to do it.

`close`: Given a connected file descriptor, it tells the OS that this connection
can be terminated.
- Kernel continues sending the buffered bytes.
- At the end of buffered bytes, sends special 'EOF' message: which tells the receiver
that I am done sending, and will close the connection, you can close too!

### Server Side

`bind`: Given a file descriptor, tells the kernel to associate it with the 
given IP and port (Making a reservation at this address.)

`listen`: Given a file descriptor that has been bound to an IP/port, it asks
the OS to start receiving connections on it.

`accept`: Given an fd which is listening, it creates a `new fd` that can be used to
communicate with an individual client. This call blocks by default until a
client shows up.
- Note that a new fd has been created to talk to this client. The old fd is
not for sending/receiving messages from the client; it is only used for accepting
connections.


*Keeping track of the return values of send/recv is crucial. The returned
byte count may be less than what you asked for, so keep looping until you have
sent or received everything you wanted.
Remember: you are only submitting bytes to the operating system, not actually
delivering them to the peer.*


## How The Kernel Handles A TCP Connection

Packets and Syscall Analysis:  [Video](https://www.youtube.com/watch?v=ck4WvYM9V4c)

### TCP Three-Way Handshake Overview

```mermaid
sequenceDiagram
    participant cc as Client[192.168.1.105:1234]
    participant ss as Server[192.168.1.102:5000]

    cc->>ss: SYN (SEQ = X)
    ss-->>cc: SYN-ACK (SEQ = Y, ACK = X + 1)
    cc->>ss: ACK (ACK = Y + 1)
    Note over cc: 3 Way Handshaking done
    ss->>cc: Hello World 
```
### Using `ncat`

- Creating server: `ncat -l 1234`

	- `-l`: Tells Ncat to listen for incoming connections.
    - `1234`: The port where Ncat is listening for connections.
- Connect to a server: `ncat <ip> <port>`

- Create UDP server: `ncat -l -u 1234`
- Connect to UDP server: `ncat -u <ip> <port>`

### Server Side Analysis

The following events happen while running `ncat` on the server side:

1. `socket` system call: It requests an ipv4 socket to be created. 
On success, it returns a file descriptor for the newly created 
socket (let fd = 3).

2. `bind` system call: It takes a file descriptor (fd 3) (returned by 
socket), and binds it to an ip address and port. Socket itself is 
just a data structure with a buffer. We need to bind it. 

3. `listen` system call: Listens on this socket for any `TCP SYN`
request.

4. Then we make a `select` system call. This call waits on a file descriptor
(fd 3) until it is available for read or write; it is generally used for IO
multiplexing. In our case we wait on the socket's fd: we are waiting for someone
to send connection requests, since we are listening for connections. The `ncat`
process is wait-blocked at this moment and is unblocked when some desirable
event occurs.

5. When it is unblocked, it means a `SYN` request has arrived and the kernel has
completed the 3-way handshake with the client.
Thus the `accept` system call is run on fd 3. On success, accept
returns a new data socket for that connection. A `new fd (let fd 4)` is now
created to talk to this client.
 
6. `close` system call is called on fd 3 (this is specific to ncat, since it
only accepts one client connection).

7. We then make a `select` system call on fds 0 and 4. `fd 0` is stdin, in case
we want to send some data to the client, and `fd 4` is the data socket, which
will tell us when there is something to read.

Suppose we receive a packet over wifi; an interrupt will be raised and the
corresponding interrupt handler will be called.
Inside the kernel thread running the interrupt handler, we consume the packet
from the buffer and create an `skb` struct (socket kernel buffer).
After consuming it, we copy the packet into the socket buffer of `ncat`. At
this point the `select` syscall (waiting on this fd) of the user process returns.

### Handshaking from Process' POV
The protocol stack of Linux does the handshaking without involving the application.

The kernel unblocks the process waiting on `select` syscall (while listening
for new connection), only when the handshake is completed. The user process is not
involved in the handshake. 
It only sees the last `ACK` of the 3-way handshake. Once that arrives, it can
accept the new connection and start to `send/recv` data.

![](../assets/syn.png)
*Note that size of accept queue is bounded. If we don't accept the ready connections, new connections would be dropped* 

### ACKs
Even the `acks/retransmissions` are handled by the kernel's protocol stack. 
The application is not notified about this. The TCP header contains flags, which
tell whether it carries data or an ACK. Based on the flags, the protocol stack
takes appropriate decisions.

Generally the user process is only unblocked (from the `select` syscall) when
the kernel copies some useful data into the fd's read buffer, which the process
can consume.

### Some benchmarks

*take it with a pinch of salt*
 
- From receiving the interrupt for SYN to accepting the connection: 1.7 ms
(*it actually depends on the distance between client and server, as 2 more
exchanges are involved in between; this data is from localhost, I think*)

- Processing a packet interrupt: 200 microseconds


================================================
FILE: notes/memory_reordering.md
================================================
# Memory Reordering

### Some Background
Modern CPUs employ lots of techniques to counteract the latency cost of going 
to main memory.  These days CPUs can process hundreds of instructions in the 
time it takes to read or write data to the DRAM memory banks.

Memory access is generally the bottleneck.

Hardware caches are the most common tools used to hide this latency.
Unfortunately CPUs are now so fast that even these caches cannot keep up at 
times.  So to further hide this latency a number of less well known buffers 
are used. 

#### Let's understand `store buffers`

When a CPU executes a store operation it will try to write the data to the `L1 
cache` nearest to the CPU. If a cache miss occurs at this stage the CPU goes 
out to the next layer of cache. At this point on an Intel, and many other, 
CPUs a technique known as `write combining` comes into play. 

While the request for ownership of the L2 cache line is outstanding the data to 
be stored is written to one of a number of cache line sized buffers on the 
processor itself, known as `store buffers` on Intel CPUs.  These on chip 
buffers allow the CPU to continue processing instructions while the cache 
sub-system gets ready to receive and process the data.  The biggest advantage 
comes when the data is not present in any of the other cache layers.

These buffers become very interesting when subsequent writes happen to require 
the same cache line.  The subsequent writes can be combined into the buffer 
before it is committed down the cache hierarchy.

> What happens if the program wants to read some of the data that has been 
written to a buffer?  Well our hardware friends have thought of that and they 
will snoop the buffers before they read the caches.

![](../assets/store_buffer.png)

*Loads and stores to the caches and main memory are buffered and re-ordered 
using the load, store, and write-combining buffers.  These buffers are 
associative queues that allow fast lookup.  This lookup is necessary when a 
later load needs to read the value of a previous store that has not yet reached 
the cache.*

### Fencing

When a program is executed it does not matter if its instructions are 
re-ordered provided the same end result is achieved. For example, within a loop 
it does not matter when the loop counter is updated if no operation within the 
loop uses it.  
The compiler and CPU are free to re-order the instructions to best utilise the 
CPU provided it is updated by the time the next iteration is about to 
commence.  
Also over the execution of a loop this variable may be stored in a register and 
never pushed out to cache or main memory, thus it is never visible to another 
CPU.

> We use the `volatile` keyword to tell the compiler not to keep this variable
in a register: always push the changes down to memory, so that another CPU can
see the latest value.

Provided “program order” is preserved the CPU, and compiler, are free to do 
whatever they see fit to improve performance.




### Hardware Memory Ordering in x86 Processors

The term memory ordering refers to the order in which the processor issues 
reads (loads) and writes (stores) through the system bus to system memory.

For example, the Intel386 processor enforces program ordering 
(generally referred to as strong ordering), where reads and writes are issued 
on the system in the order they occur in the instruction stream.

But the hardware may reorder the instructions for some optimizations. Sometimes
reads could go ahead of buffered writes.

<mark> Reads may be reordered with older writes to different memory locations 
but not with older writes to same memory location.</mark>

That is, if we write to location 1 and then read from location 2, the read from
location 2 could become globally visible before the write to location 1.

```
let x = y = 0

processor 0                     processor 1
x = 1                           y = 1
print y                         print x

output = (0, 0) is possible
```

<mark> Stores are usually buffered before being sent to memory (L1 cache). We
prioritise loads over stores, since loads are on the critical path: instructions
are waiting for the data to be loaded before they can run.
However, if a store followed by a load targets the same memory location, then we
will definitely follow program order.</mark>

### Software Reordering
Compiler can also sometimes reorder instructions in our program for optimizations.
For example, store to 2 different memory locations can be reordered by our
compiler.

### Avoid memory reordering
In a multi-threaded environment techniques need to be employed for making 
program results visible in a timely manner.
The techniques for making memory visible from a processor core are known as 
memory barriers or fences.

Memory barriers provide two properties.  Firstly, they preserve externally 
visible program order by ensuring all instructions either side of the barrier 
appear in the correct program order if observed from another CPU and, secondly, 
they make the memory visible by ensuring the data is propagated to the cache 
sub-system.

#### Asking compiler not to reorder
`asm volatile("" : : : "memory");` is a fake instruction that asks the compiler
not to reorder any memory instruction around this barrier. It is a hint to the
compiler that the whole of memory may be touched by this instruction, so it must
not do any reordering across it.

*In this case the hardware can still reorder instructions, even though we asked
our compiler to not reorder! Hence we will have to use hardware barriers.*

```cpp
#include <emmintrin.h>
void _mm_mfence(void); // intrinsic that emits MFENCE: a full hardware barrier
```

Perform serializing operation on all `load-from-memory` and `store-to-memory` 
instructions that were issued prior to this instruction. 
Guarantees that every memory access that precedes, in program order the memory 
fence instruction is globally visible before any memory instruction which 
follows the fence in program order.

*It drains the `store buffer`, before any following `loads` can go into memory.*

### Performance Impact of Memory Barriers

Memory barriers prevent a CPU from performing a lot of techniques to hide 
memory latency therefore they have a significant performance cost which must be 
considered.  To achieve maximum performance it is best to model the problem so 
the processor can do units of work, then have all the necessary memory barriers 
occur on the boundaries of these work units.

## C++ Memory Model

- [Memory Model Article](https://dev.to/kprotty/understanding-atomics-and-memory-ordering-2mom)
- [Post on Stack Overflow](https://stackoverflow.com/questions/12346487/what-do-each-memory-order-mean)




================================================
FILE: notes/metaprogramming.md
================================================
## Template Metaprogramming

[Link to CPPCON Talk](https://youtu.be/Am2is2QCvxY?si=QrulPFBy7Dg5poQ1)

- Do work at compile time that otherwise would be done at Runtime.
- In C++, the template instantiation happens at the compile time, hence we make
use of it.
- For example, if we call `f(x)`, the compiler will manufacture (instantiate)
the function for us (assuming `f` is a template here).
- It is not free, as the heavy work needs to be done at compile time which 
leads to increased compile time.
- Can't rely on runtime primitives like virtual functions or dynamic dispatch;
keep things constant while metaprogramming.

1. **Absolute Value Metafunction**

```cpp
template<int N>
struct ABS {
    static constexpr int value = (N < 0) ? -N : N;
};
```

- A metafunction call: The arguments are passed through the template's 
arguments.
- `Call` syntax is a request for the template's static value.
- `const int ans = ABS<-142>::value;`

2. **Compile Time GCD**

Here we use compile time recursion. For base cases, we have to do pattern 
matching.

```cpp
template<int N, int M>
struct gcd {
    static constexpr int value = gcd<M, N % M>::value;
};

template<int N>
struct gcd<N, 0> {
    static_assert(N != 0);
    static constexpr int value = N;
};
```

3. **Metafunction can take a type as Parameter/Argument**

We can make a metafunction that, like `sizeof`, operates on a type:

```cpp
// primary template handles scalar (non-array) types as base case:
template<class T> 
struct rank {
    static constexpr size_t value = 0u;
};

// partial specialization recognizes any array type:
template<class U, size_t N>
struct rank<U[N]> {
    static constexpr size_t value = 1 + rank<U>::value;
};

const int N = rank<int[2][1][4]>::value; // gives 3 at compile time
```

*Here we didn't recurse on the primary template, but did on the specialisation.*

4. **Type**
```cpp
#include <iostream>
#include <type_traits>

// A simple type trait to remove constness
template <typename T>
struct RemoveConst {
    using type = T;  // Default case: T is unchanged
};

template <typename T>
struct RemoveConst<const T> {
    using type = T;  // Specialized case: remove const qualifier
};

int main() {
    // Using the RemoveConst trait
    RemoveConst<const int>::type x = 42;  // 'RemoveConst<const int>::type' is equivalent to 'int'
    std::cout << "x = " << x << std::endl;
    
    return 0;
}
```

5. **Conditional Types during compile time**
```cpp
#include <iostream>
#include <stdexcept>


template<typename T>
struct type_is {
	using type = T;
};

// primary template assumes the bool value is true:
template<bool, typename T, typename Q>
struct conditional_type : type_is<T> {};

// partial specialization recognizes a false value:
template<typename T, typename Q>
struct conditional_type<false, T, Q> : type_is<Q> {};

int main() {
    constexpr bool q = false;
    conditional_type<q, int, double>::type s;
    std::cout << sizeof(s) << '\n';
    return 0;
}
```
`std::false_type` and `std::true_type` are helper types whose static `value` members are `false` and `true` respectively.

![](../assets/meta1.png)

How to deal with parameters pack:

![](../assets/meta2.png)



================================================
FILE: notes/move_semantics.md
================================================
## Move Semantics

Move Semantics by Klaus Iglberger (CppCon 2019): [Link](https://www.youtube.com/watch?v=St0MNEU5b0o)

```cpp
std::vector<int> v1 {1, 2, 3, 4, 5};
// in stack we will just store the pointers (to the start and the end)
// the actual elements will be stored in heap
std::vector<int> v2 {};
v2 = v1;
// when we do v2 = v1, we copy the contents, that is new memory is assigned in
// heap and content of old vectors are copied

v2 = std::move(v1);
// in this case we are transferring ownership: the pointers of
// v2 will now point to the heap memory originally pointed to by v1,
// and the pointers of v1 will be set to null.
```

### L value vs R value

1. **L-Value (Left Value)**:
An **L-value** is an expression that **refers to a memory location** and 
can persist beyond a single expression. It typically appears on 
the **left-hand side** of an assignment, but it can also be used on the 
right-hand side.

- **L-values** have an identifiable location in memory, meaning you can take 
their address using the `&` operator.
- Examples of **L-values** include variables, dereferenced pointers, or array 
elements.
- **Modifiable L-values** are L-values that can be changed (i.e., non-const), 
while **non-modifiable L-values** are constants.

#### Examples of L-values:
```cpp
int x = 5;    // `x` is an L-value because it refers to a memory location
x = 10;       // `x` can appear on the left-hand side of an assignment

int* p = &x;  // You can take the address of an L-value
*p = 20;      // Dereferenced pointer `*p` is an L-value
```

In the code above:
- `x` is an L-value because it refers to a memory location that persists across 
expressions.
- `*p` is also an L-value because it refers to the value stored at the 
address in `p`.

2. **R-Value (Right Value)**:
An **R-value** is an expression that does **not have a persistent memory location**. 
It usually represents **temporary values** that only exist during the evaluation 
of an expression. R-values typically appear on the **right-hand side** of an 
assignment and are not addressable (you can't take the address of an R-value).

- R-values are usually **temporary objects**, **literals**, or **expressions** 
like `2 + 3`.
- You can't assign to an R-value because they do not refer to a memory location 
that can be modified.

#### Examples of R-values:
```cpp
int x = 5 + 10;  // `5 + 10` is an R-value (a temporary value)
int y = 42;      // `42` is an R-value (literal)

x = y + 1;       // `y + 1` is an R-value (result of the expression)
```

In the code above:
- `5 + 10` is an R-value because it's a temporary result and cannot be assigned to.
- `42` is a literal R-value.


#### L-value References:
- Traditional references (as introduced in earlier versions of C++) are **L-value references**.
- They can only bind to L-values.

Example:
```cpp
int x = 10;
int& ref = x;  // L-value reference to `x`
ref = 20;      // Modifies `x`
```

#### R-value References:
- **R-value references** (introduced in C++11) are used to bind to R-values, 
allowing you to modify them.
- They are denoted by `&&`.
- Commonly used in **move semantics** to avoid unnecessary copies.

Example:
```cpp
int&& rref = 5;  // R-value reference to the temporary value `5`
rref = 10;       // Modifies the R-value
```

Here, `rref` is an R-value reference that allows us to bind to a temporary 
object (`5`) and even modify it.

- **Move semantics** make use of R-value references to "move" resources from 
one object to another, avoiding expensive deep copies. This is especially useful
when dealing with temporary objects (R-values).
  
Example:
```cpp
std::string s1 = "Hello";
std::string s2 = std::move(s1);  // Moves the contents of `s1` to `s2`
// after the ownership transfer, s1 is left in a valid but unspecified state!
```

In the code above:
- `std::move(s1)` is an R-value reference that allows the move constructor of 
`std::string` to transfer ownership of the data from `s1` to `s2`.
- <mark>`std::move` unconditionally casts its input into an rvalue reference.
It doesn't move anything!</mark>
- <mark>`std::move(s)` when `s` is const leads to a copy, not a move!</mark>

```cpp
const std::string s1 = "Shivam Verma";
std::string s2 = std::move(s1);  // this is COPY not MOVE
std::cout << s1 << ' ' << s2 << '\n';
```


### Operators
1. Copy Assignment Operator

    `vector& operator=(const vector& rhs);` It takes an lvalue

2. Move Assignment Operator

    `vector& operator=(vector&& rhs);` It takes an rvalue

```cpp
class Widget {
    private:
        int i {0};
        std::string s{};
        int* pi {nullptr};
    
    public:
        // Move constructor: Goal is to transfer content of w into this
        // Leave w in a valid but unspecified state
        Widget (Widget&& w) : i (w.i), 
                              s (std::move(w.s)), 
                              pi (w.pi) {
            w.pi = nullptr;

            // we could also do: i (std::move(w.i)), 
            //                   s (std::move(w.s)), 
            //                   pi (std::move(w.pi))

        }

        // Move assignment operator
        Widget& operator=(Widget&& w) {
            i = std::move(w.i);
            // s = w.s // don't do this, it copies not move
            s = std::move(w.s);
            delete pi; // need to clear existing resources first!
            pi = std::move(w.pi);

            w.pi = nullptr; // reset content of w

            return *this;
        }
};
```


### Small String Optimization (SSO)

```mermaid
graph TD;
    A[Small String] -->|Stored in| B[Stack Buffer];
    C[Larger String] -->|Stored in| D[Heap Allocation];
    B --> E{SSO};
    D --> F{Heap Allocator};
```

- For small strings (e.g., "short str"), the data is stored directly in the 
object on the stack.
- For larger strings, the data is dynamically allocated on the heap, and the 
object holds a pointer to that data.


## Universal References (Forwarding Reference)

```cpp
template<typename T>
void f(T&& x); // Forwarding Reference

auto&& var2 = var1; // Forwarding Reference
```
They represent: 
- an `lvalue` reference if they are initialised by an lvalue.
- an `rvalue` reference if they are initialised by an rvalue.

```cpp
template<typename T>
void foo(T&& ) {
    print("foo(T&&)");
}

int main () {
    Widget w{};
    foo(w);        // prints "foo(T&&)"
    foo(Widget{}); // also prints "foo(T&&)"

    // w was lvalue, Widget{} was rvalue: T&& binded to both
}
```
- <mark>`std::forward` conditionally casts its input into an rvalue reference.
It doesn't forward anything!</mark>

    - If given value is lvalue, cast to an lvalue reference.
    - If given value is rvalue, only then cast to an rvalue reference.

`rvalues` can bind to an lvalue reference to const, but not to a non-const lvalue reference.
```cpp
void f(Widget& );           // 1
void f(const Widget& );     // 2
template<typename T>        // 3
void f(T&& );

int main() {
    f(Widget{});                // this rvalue can bind to 3 and 2, but not 1
}
```  

![](../assets/binding.png)

================================================
FILE: notes/os_booting.md
================================================
# Linux Boot Process

Booting is the process of loading an OS from disk and starting it.

## The OS Boot Process

1. **Hit the power button**

-   Triggers a `power good` signal.
    -   Electric pulse sent to reset pin of the CPU (Power On Reset).
    -   CPU is in `Reset` mode, i.e., it is not executing any instructions.
-   All devices get power and initialize themselves.
-   Every register is set to zero, except `Code Segment (CS)` and 
`Instruction Pointer (IP)`, which are set to `0xf000` and `0xfff0` respectively.
    -   Thus, the `physical address = (CS << 4) + IP = (0xf000 << 4) + 0xfff0 = 0xf0000 + 0xfff0 = 0xffff0` (We are operating in 16-bit mode right now).
        -   This physical address is the place where the CPU starts executing instructions.
-   The CPU is activated in Real Mode and it starts executing from `0xffff0` (or `ffff0h`), which is a memory location in the `BIOS chip` and not in the RAM.
    - The BIOS chip (Basic Input/Output System) is a small, `non-volatile` memory chip located on the motherboard of a computer.
    -   Real Mode
        -   Only 1 MB of RAM addressability in the range `0x0` to `0x100000`.
            -   This is because there are 20 physical address bus lines available. (2^20 = 1048576 = 1 MB)
        -   `16 bit addressing:` Available registers (Eg: `AX`) are of size 16 bits, so two registers are combined to give the physical address.
            -   Logical address (LBA) = segment:offset
            -   `Physical address = (segment << 4) + offset`
                -   The segments are segments/parts of the addressable 1 MB of RAM and the offsets are offsets into that segment.
                -   Eg: If the segment is `0xf000` and the offset is `0xfff0`, then the `physical address = (CS << 4) + IP = (0xf000 << 4) + 0xfff0 = 0xf0000 + 0xfff0 = 0xffff0`
2.  **Basic Input/Output System (BIOS) takes over.**
-   Placed in Flash/EPROM Non-Volatile Memory. Its job is to load the bootloader.
-   In a multi-processor environment, one processor is a `Boot Processor (BSP)` which executes all instructions and the others are `Application Processors (APs)`.
-   Conducts a Power-On Self-Test (`POST`).
    -   Performs system inventory.
    -   Checks and registers all devices connected.
-   Finds `Master Boot Record (MBR)` in the first sector of a device (the hard disk, SSD, USB, etc.) that is usually 512 bytes in size, loads it into RAM at position `0x7c00`, jumps to that location and starts executing.
    -   MBR is a 512 byte sector that's logically split into three sections.
        -   The first 446 bytes is reserved for a program, which is usually a Bootloader. (Eg: [GRUB](https://www.gnu.org/software/grub))
        -   The next 64 bytes (16x4 bytes) are for a partition table with four partitions in it.
        -   The last two bytes are for the Boot Signature bytes `0x55` (or `55h`) and `0xAA`(or `AAh`) in order, that identify that a particular sector is the MBR.
            -   If this signature is not found in the first sector, then the next device is searched.
-   The BIOS loads the Bootloader into memory.
    -   This might be the first stage of the Bootloader, which loads the second stage of the Bootloader into memory, as 446 bytes are not sufficient to store all the complex logic required to load an OS.
    -   Bootloader might give an option to load a particular Operating System.
    -   Sets up the `GDT/IVT` for the Operating System.
    -   Switches from `Real Mode` to `Protected Mode`.
        -   Memory addressability goes from 1 MB to the entire range of available RAM.
3.   **The Bootloader starts executing and checking the partition table for an active/bootable partition table.**

-   On finding the bootable partition, the Bootloader loads the first sector of that partition (called the Boot Record) from the hard disk to the RAM.
4.   **The Boot Record loads the operating system into memory.**
5.   **Timers, devices, hard disks, etc. are initialized by the Operating System in the Kernel Space.**
6.   **In Linux, the `init` process is the first process in User Space that initializes the OS processes, daemons and displays login prompt.**

```mermaid
graph TD
    A[Power Button Pressed] --> B[Power Good Signal]
    B --> C[CPU Starts in Reset Mode]
    C --> D[BIOS Loads from 0xFFFF0]
    D --> E[Power-On Self-Test POST]
    E --> F[BIOS Searches for MBR]
    F --> G[MBR Found on Bootable Device]
    G --> H[BIOS Loads Bootloader at 0x7C00]
    H --> I[Bootloader Starts Executing]
    I --> J[Bootloader Checks Partition Table]
    J --> K[Bootloader Finds Active Partition]
    K --> L[Bootloader Loads OS Kernel]
    L --> M[Switch to Protected Mode]
    M --> N[Kernel Starts Initializing Hardware]
    N --> O[Kernel Mounts Root Filesystem]
    O --> P[Kernel Starts init/systemd Process]
    P --> Q[init/systemd Initializes System Services]
    Q --> R[User Login Prompt Displayed]
```
## Resources

-   [PC Booting: How PC Boots](https://www.youtube.com/watch?v=ZplB2v2eMas)
-   [Booting an Operating System](https://www.youtube.com/watch?v=7D4qiFIosWk)



================================================
FILE: notes/packet_handling.md
================================================
## The Network Packet's Diary

PDF [Notes](../assets/From%20NIC%20to%20Application.pdf)

A packet consists of:

```
| Ethernet Header | IP Header | TCP Header | Data |
```
Network Interface Controller (NIC):
- Receives the packet
- Compares the MAC destination address (the address to be compared against is
programmed by the OS)
- Verifies the Ethernet (Frame) Checksum FCS
- Stores the packet to buffer programmed by the driver using DMA
- Triggers an interrupt

Interrupt:
- Top half processing:
    - Acknowledge the interrupt
    - Schedule the 'Bottom Half Processing'

- Bottom half processing:
    - It identifies the memory where the packet is stored.
    - Allocates `sk_buf` (it is a struct which contains various pointers, like 
    pointer to different headers, pointer to data and other metadata).

    ```mermaid
    graph TD;
    A[sk_buff] --> B[Memory Buffer];
    B --> C[Head Pointer];
    B --> D[Data Pointer];
    B --> E[Tail Pointer];
    B --> F[End Pointer];
    
    A --> G[Packet Headers];
    G --> H[Ethernet Header]; 
    G --> I[IP Header];
    G --> J[TCP/UDP Header];

    A --> K[Metadata];
    K --> L[Reference Counters];
    K --> M[Checksum Info];
    K --> N[Flags];
    
    B --> O[Packet Data];
    ```
    - Passes the `sk_buf` to the protocol stack.

The `sk_buf` traverses various levels, where some checksums are verified, headers
are removed and other metadata processing is done.

Eventually, it reaches TCP stack:
- TCP checksum verified
- Handles the TCP state machine
- Enqueues the data to the socket's receive queue
- Signals the fd that the data is available (for processes sleeping on `select`)

On Socket read (by user process):
- Dequeue data from socket's receive queue
- Copy to user buffer
- Release the `sk_buf`


================================================
FILE: notes/padding_packing.md
================================================
## The Lost Art of Structure Packing & Unaligned Memory Accesses

[Padding and Packing](http://www.catb.org/esr/structure-packing/)

[Memory Alignment](https://docs.kernel.org/core-api/unaligned-memory-access.html)

Unaligned memory accesses occur when you try to read `N` bytes of data starting 
from an address that is not evenly divisible by `N` (i.e. `addr % N != 0`). 

For example, reading 4 bytes of data from address `0x10004` is fine, but reading 
4 bytes of data from address `0x10005` would be an unaligned memory access.

`Natural Alignment`: When accessing `N bytes` of memory, the base memory address 
must be evenly divisible by `N`, i.e. `addr % N == 0`.

*When writing code, assume the target architecture has natural alignment requirements.*

### Why unaligned access is bad

The effects of performing an unaligned memory access vary from architecture 
to architecture. A summary of the common scenarios is presented below:

- Some architectures are able to perform unaligned memory accesses transparently, 
but there is usually a significant performance cost.

- Some architectures raise processor exceptions when unaligned accesses happen. 
The exception handler is able to correct the unaligned access, at significant 
cost to performance.

- Some architectures raise processor exceptions when unaligned accesses happen, 
but the exceptions do not contain enough information for the unaligned access to 
be corrected.

- Some architectures are not capable of unaligned memory access, but will 
silently perform a different memory access to the one that was requested, 
resulting in a subtle code bug that is hard to detect!

If our code causes unaligned memory accesses, it will not work correctly on
certain platforms and will cause performance problems on others.

### How Compiler helps

The way our compiler lays out basic datatypes in memory is constrained in order 
to make memory accesses faster.

*Each type except `char` has an alignment requirement: `chars` can start on any 
byte address, but `2-byte shorts` must start on an even address, `4-byte ints` 
or `floats` must start on an address divisible by 4, and `8-byte longs` or 
`doubles` must start on an address divisible by 8. 
Signed or unsigned makes no difference.*

Self-alignment makes access faster because it facilitates generating 
single-instruction fetches and puts of the typed data. Without alignment 
constraints, on the other hand, the code might end up having to do two or more 
accesses spanning machine-word boundaries. 

Characters are a special case: they’re equally expensive from anywhere they 
live inside a single machine word. That’s why they don’t have a preferred alignment.

```cpp
// consider these variables declaration
char *p;
char c;
int x;

// actual layout in memory
char *p;      /* 4 or 8 bytes */
char c;       /* 1 byte */
char pad[3];  /* 3 bytes */
int x;        /* 4 bytes */
```

```cpp
// consider these variables declaration
char *p;
char c;
long x;

// actual layout in memory
char *p;     /* 8 bytes */
char c;      /* 1 byte */
char pad[7]; /* 7 bytes */
long x;      /* 8 bytes */
```

![](../assets/pad.png)

#### Structure alignment and padding

```cpp
struct foo1 {
    char *p;
    char c;
    long x;
};

// Assuming a 64-bit machine, any instance of struct foo1 will have 8-byte alignment.

struct foo1 {
    char *p;     /* 8 bytes */
    char c;      /* 1 byte */
    char pad[7]; /* 7 bytes */
    long x;      /* 8 bytes */
};

```

```cpp
struct foo5 {
    char c;
    struct foo5_inner {
        char *p;
        short x;
    } inner;
};

// The char *p member in the inner struct forces the outer struct to be pointer-aligned as well as the inner. 

struct foo5 {
    char c;           /* 1 byte*/
    char pad1[7];     /* 7 bytes */
    struct foo5_inner {
        char *p;      /* 8 bytes */
        short x;      /* 2 bytes */
        char pad2[6]; /* 6 bytes */
    } inner;
};
```

In the below example, we can observe that padding is even added at the end, for complete alignment (in case we 
have array of structs). Even if we don't have an array, we will have this padding:
```cpp
struct mystruct_A {
    char a;
    char pad1[3]; /* inserted by compiler: for alignment of b */
    int b;
    char c;
    char pad2[3]; /* -"-: for alignment of the whole struct in an array */
} x;
```

Now that we know how and why compilers insert padding in and after our structures 
we’ll examine what we can do to squeeze out the slop. 
This is the art of structure packing.

The first thing to notice is that slop only happens in two places:
- One is where storage bound to a larger data type (with stricter alignment 
requirements) follows storage bound to a smaller one. 
- The other is where a struct naturally ends before its stride address, requiring 
padding so the next one will be properly aligned.

The simplest way to eliminate slop is to reorder the structure members by 
decreasing alignment. 

That is: make all the pointer-aligned subfields come first, because on a 64-bit 
machine they will be 8 bytes. Then the 4-byte ints; then the 2-byte shorts; 
then the character fields.

#### Overriding Alignment Rules

We can ask our compiler to not use the processor’s normal alignment rules by 
using a pragma, usually `#pragma pack`.

*Do not do this casually, as it forces the generation of more expensive and slower code.*

```cpp
#pragma pack(1)  // Force 1-byte alignment
struct PackedExample {
    char a;  // 1 byte
    int b;   // 4 bytes
};

// Here, b is no longer aligned on a 4-byte boundary. 
// It forces the CPU to perform unaligned memory accesses.

#pragma pack()  // Reset to default alignment
```

### Endianness: Big Endian and Little Endian
It specifies the order in which bytes of a word are stored in memory.

*`x86(32/64 bit)` is `little-endian`.*

*While `ARM` defaults to `little-endian`, it can be configured to operate in 
big-endian mode as well.*

(Big-Endian is actually more intuitive at first sight!)

![](../assets/endian_1.png)
![](../assets/endian_2.png)


================================================
FILE: notes/performance.md
================================================
## Follow these guidelines

1. No unnecessary work
    - No extra copying
    - No extra allocations

2. Use all your computing power
    - Use all the cores available
    - SIMD

3. Avoid waits and stalls
    - Lockless Data Structures
    - Async APIs
    - Job Systems

4. Use Hardware efficiently
    - Cache friendly
    - Well predictable code

5. OS Level Efficiency

## Efficient C++

### Use constexpr wherever possible
The computation is already done at compile time, saving time at runtime.

### Make Variables `const`
Knowing a variable is const allows the compiler to perform various 
optimisations. 

For example:
```cpp
std::vector<float> normalize(std::vector<float> data) {
    const float sum = std::accumulate(data.begin(), data.end(), 0.0f);
    for (auto& num : data) {
        num -= sum / data.size();
    }
    return data;
}

// can be optimised by the compiler to the code below, since sum is const

std::vector<float> normalize(std::vector<float> data) {
    const float sum = std::accumulate(data.begin(), data.end(), 0.0f);
    const float __mean = sum / data.size(); // this expression is loop invariant
    for (auto& num : data) {
        num -= __mean; // no expensive division inside the loop now
    }
    return data;
}
```

Also it can help compiler to do this optimization:

```cpp
bool condition = getBool();
for (int i {}; i < n; i++) {
    if (condition) {
        A(i);
    } else {
        B(i);
    }
}

// if variable was declared const:

const bool condition = getBool();
if (condition) {
    for (int i = 0; i < n; i++) {
        A(i);
    }
} else {
    for (int i = 0; i < n; i++) {
        B(i);
    }
}
```
Here we have reduced branching: the check happens once instead of on every iteration.
In the earlier case, when the bool was not const, the compiler must assume that A or B
might change its value, which prevents this optimisation.

### Noexcept all the things
`void f();` could throw an exception.

`void f() noexcept;` will NEVER throw an exception.

Along the call stack of a `noexcept` function, the compiler no longer has to
generate exception-handling code, which removes some overhead.

### Use static for internal linkage
Functions used only within one source file should be marked `static`. This gives
them internal linkage, and is another hint (besides the `inline` keyword) that
the compiler may inline them.

### Use `[[likely]]` and `[[unlikely]]` in conditionals
Better branch prediction if the conditionals are marked `[[likely]]` or
`[[unlikely]]` (C++20).

### Avoid Copying in Structured Bindings

`auto [first_person, age] = *map.begin();` is bad (copies the pair)

`const auto& [first_person, age] = *map.begin();` use this instead

### Cache Friendly

```plaintext
CONTIGUOUS          SCATTERED

std::array          std::list
std::vector         std::set
std::deque          std::unordered_set
std::flat_map       std::map
std::flat_set       std::unordered_map
```

Also, while designing classes, some member variables are used very rarely, for
example debugging info.

Instead of storing a `debugInfo` object directly inside our class, we can keep a
`unique_ptr` to it. That way, whenever an object of our class is fetched into
cache, the large rarely-used data is not pulled in along with it; only the small
pointer is.

### False Sharing
Data on same cache line, being accessed by different threads.

Use `alignas` to prevent this.

### Avoid Indirect Calls
Virtual function calls are indirect calls, as they require a `vtable` lookup.


================================================
FILE: notes/placement_new.md
================================================
## Placement New in C++

- Allocation and construction are different.
- A memory allocator is simply supposed to return uninitialized bits of memory.
- It is not supposed to produce “objects”.
- Constructing the object is the role of the constructor, which runs after the
memory allocator.

```cpp
// assume we have an allocator object `pool`
Pool pool;
void* raw = pool.alloc(sizeof(Foo));
Foo* p = new (raw) Foo();   // construct Foo in the pool's memory

// conceptually like `Foo* p = new Foo();`,
// except the memory comes from the pool, not the default heap allocator
```
- It is used to place an object at a particular location in memory. This is 
done by supplying the place as a pointer parameter to the new part of a new 
expression:

```cpp
#include <new>        // Must #include this to use "placement new"
int main ()
{
  char memory[sizeof(Fred)];     // creating memory big enough to hold fred
  void* place = memory;          // unnecessary step
  Fred* f = new(place) Fred();   // Constructing the object (call Fred::Fred())
  // The pointers f and place will be equal
  
  // Remark: 
  // We are taking sole responsibility that the pointer we pass to the 
  // “placement new” operator points to a region of memory that is big enough 
  // and is properly aligned for the object type that you’re creating.

  // Neither the compiler nor the run-time system will make any attempt to check 
  // whether we did this right.

  f->~Fred();
  // We need to explicitly call the destructor
}
```

- We may want to do this for optimization when we need to construct multiple 
instances of an object, and it is faster not to re-allocate memory each time 
we need a new instance. Instead, it might be more efficient to perform a 
single allocation for a chunk of memory that can hold multiple objects, 
even though we don't want to use all of it at once.

### How std::vector uses Placement New

- Take containers like unordered_map, vector, or deque. These all allocate 
more memory than is minimally required for the elements you've inserted so 
far to avoid requiring a heap allocation for every single insertion.

```cpp
vector<Foo> vec;

// Allocate memory for a thousand Foos:
vec.reserve(1000);
```
... that doesn't actually construct a thousand Foos. It simply allocates/reserves
memory for them. If vector did not use placement new here, it would be 
default-constructing Foos all over the place as well as having to invoke their 
destructors even for elements you never even inserted in the first place.

Vector Example: [Link](https://medium.com/@dgodfrey206/c-placement-new-1298ccbb076e)


================================================
FILE: notes/pre-post-increment.md
================================================
## `i++` vs `++i`

First, let's see how we can overload the prefix (`++i`) and postfix (`i++`)
operators for our own class.

```cpp
class Number {
public:
  Number& operator++ ();    // ++prefix
  Number  operator++ (int); // postfix++
};
```

*Note the different return types: the prefix version returns by reference, the
postfix version by value.*

```cpp
Number& Number::operator++ ()
{
  // do some logic here to increment
  return *this;
}

Number Number::operator++ (int)
{
  Number ans = *this;
  ++(*this);  // or just call operator++()
  return ans;
}
```

### Which is more efficient: `i++` or `++i`?

- `++i` is sometimes faster than, and is never slower than, `i++`.

- For intrinsic types like `int`, it doesn't matter: `++i` and `i++` are the
same speed. For the `Number` class above, `++i` may well be faster than `i++`,
since the latter makes a copy of the object.


================================================
FILE: notes/program_to_process.md
================================================
## Story of the Program to a Process

`Preprocessing` is the first pass of any C++ compilation. It processes
include-files, conditional compilation instructions and macros.

`Compilation` is the second pass. It takes the output of the 
preprocessor, and the source code, and generates assembly source code.

`Assembler` is the third stage of compilation. It takes the assembly
source code and produces machine code with offsets. The
assembler output is stored in an object file.

`Linking` is the final stage of compilation. It takes one or more
object files or libraries as input and combines them to produce a
single (usually executable) file. In doing so, it resolves references
to external symbols, assigns final addresses to procedures/functions 
and variables, and revises code and data to reflect new addresses (a 
process called relocation).

![](/assets/compilation_steps.png)

### Object Files

After the source code has been assembled, it produces an object file
`(e.g. .o, .obj)`, which is then linked to produce an executable file.

An object and executable come in several formats such as `ELF`
(Executable and Linking Format) and `COFF` (Common Object-File Format).  
For example, ELF is used on Linux systems, while COFF is used on 
Windows systems.

The object file contains various areas called sections. These sections can
hold executable code, data, dynamic linking information, debugging data, 
symbol tables, relocation information, comments, string tables, and notes.

Some sections are loaded into the process image and some provide 
information needed in the building of a process image while still others 
are used only in linking object files.

![](../assets/sections.png)

`readelf` and `objdump` can be used to read the headers and contents of an object
file.

### Relocation Records:
Because the various object files include references to each other's code
and/or data, these references must be resolved at link time.

For example, the object file that has main() includes calls to 
function printf().

After linking all of the object files together, the linker uses the 
relocation records to find all of the addresses that need to be filled in.

Each object file has a symbol table that contains a list of names and their
corresponding offsets in the text and data segments.

![](../assets/linking.png)

### Shared Libraries

- In a typical system, a number of programs will be running. Each program 
relies on a number of functions, some of which will be standard C library 
functions, like `printf()`, `malloc()`, `strcpy()`, etc. and some are 
non-standard or user defined functions.

- If every program uses the standard C library, each program would normally
carry its own copy of that library. Unfortunately, this results in wasted
resources and degrades efficiency and performance.

- Since the C library is common, it is better to have each program reference
one common, shared instance of that library, instead of having each program
contain its own copy.

- This is implemented during the linking process: some objects are linked at
link time, whereas others are linked at run time (Dynamic Linking).

#### Static Linking
- The term `statically linked` means that the program and the particular 
library that it’s linked against are combined together by the linker at link 
time.

- Programs that are linked statically are linked against archives of objects 
(libraries) that typically have the extension of `.a`.  An example of such a 
collection of objects is the standard C library, `libc.a`.

- You might consider linking a program statically for example, in cases 
where you weren't sure whether the correct version of a library will be 
available at runtime, or if you were testing a new version of a library that 
you don't yet want to install as shared.

- The drawback of this technique is that the executable is quite big in 
size, as all the needed information need to be brought together.

#### Dynamic Linking
- The term `dynamically linked` means that the program and the particular library it references are not combined together by the linker at link time.

- Instead, the linker places information into the ELF that tells the 
loader which `shared object module` the code is in and which
`runtime linker` should be used to find and bind the references.

- This type of program is called a partially bound executable, because it 
isn't fully resolved. The linker, at link time, didn't cause all the 
referenced symbols in the program to be associated with specific code from 
the  library. Instead, the linker simply said something like: “This 
program calls some functions within a particular shared object, so I'll just 
make a note of which shared object these functions are in, and continue on”.

- The binding between the program and the shared object is done at runtime 
that is <mark> after the program starts </mark>, the appropriate shared 
objects are found and bound.

- Programs that are linked dynamically are linked against shared objects 
that have the extension `.so`. An example of such an object is the shared 
object version of the standard C library, `libc.so`.

- Some advantages:
    - Program files (on disk) become much smaller because they need not hold
     all necessary text and data segments information.
    
    - Dynamic linking permits two or more processes to share read-only 
    executable modules such as standard C libraries.  Using this technique, 
    only one copy of a module needs be resident in memory at any time, and 
    multiple processes, each can executes this shared code (read only).  
    This results in a considerable memory saving.

### Process Loading

1. In Linux processes loaded from a file system (using either the 
`execve()` or `spawn()` system calls) are in `ELF` format.

2. Before we can run an executable, firstly we have to load it into memory.

3. This is done by the loader, which is generally part of the operating system.

4. Memory and access validation: Firstly, the OS system kernel reads in the 
program file’s header information and does the validation for type, access 
permissions, memory requirement and its ability to run its instructions.  It 
confirms that file is an executable image and calculates memory requirements.

    - Allocates primary memory for the program's execution.
    - Copies address space from secondary to primary memory.
    - Copies the `.text` and `.data` sections from the executable into primary 
    memory.
    - Copies program arguments (e.g., command line arguments) onto the stack.
    - Initializes registers: sets the esp (stack pointer) to point to top of
    stack, clears the rest.
    - Jumps to `__start` routine, which: copies main()'s arguments off of the 
    stack, and jumps to `main()`.

5. The memory layout consists of three segments (text, data, and stack).
The dynamic data segment is also referred to as the heap: the place dynamically 
allocated memory (such as from `malloc()` and `new`) comes from. Dynamically 
allocated memory is memory allocated at run time instead of compile/link time.


### Runtime Linking

1. The runtime linker is invoked when a program that was linked against a 
shared object requests that a shared object be dynamically loaded.

2. Run-time dynamic linking: The application program is read from disk (ELF 
file) into memory and unresolved references are left as invalid (typically 
zero).  The first access of an invalid, unresolved, reference results in a 
software trap.  The run-time dynamic linker determines why this trap occurred 
and seeks the necessary external symbol.  Only this symbol is loaded into 
memory and linked into the calling program.

3. The runtime linker is contained within the C runtime library. The runtime 
linker performs several tasks when loading a shared library (`.so` file).

4. For resolving a symbol at runtime the runtime linker will search through the 
list of libraries for this symbol.  In ELF files, hash tables are used for the 
symbol lookup, so they're very fast.










================================================
FILE: notes/rvo.md
================================================
## Return Value Optimisation

The object to be returned is constructed directly in the return value slot,
which avoids expensive copying. Otherwise the object would first be created on
the local stack and then copied into the return slot.

Good Video by Arch Coffee: https://www.youtube.com/watch?v=Qp_XA8G5H3M

![](../assets/rvo.png)

*Here we can see that the un-named object S{} is directly created in the return
value slot. 
We have avoided unnecessary copying.*

![](../assets/no-rvo.png)

*In this case the compiler doesn't know which among `s1`, `s2` needs to be returned.
Hence it can't directly constuct the object in the return slot. It first creates
both the objects in the stack and then based on the conditional test, copies one
of them into the return value slot.*

### Understanding stack segment while function call

This is how the stack looks. First of all we have the return slot,
then the arguments to the function, and finally the return address.
The stack pointer points below this at the start of the first instruction.

```cpp
Fruit apples_to_apples (int i, Fruit x, int j) {
    return x;
}
```
<mark>The return slot is allocated by the `caller` itself. And its address is passed to
the `callee` via the `rdi` register (this is a hidden parameter in this case).</mark>

<mark>In case of RVO, the returned object will be constructed in this slot; otherwise it would
be created locally on the stack and then copied here.</mark>


![](../assets/stack.png)

Here we can't elide copy, since the stack address of Fruit x and the return
slot are different. We must get data out of `x` and put it into the return slot.

### Slicing from derived to base

```cpp
struct Cat : Animal {
    int rats_eaten;
};

Animal chopped() {
    Cat x = ...;
    return x;
}
```

In this case we do control the physical location of x (where we can construct it),
but x is of the wrong type for constructing into the return slot.
(*The return slot of an `Animal` is smaller than a `Cat` object.*)

In these cases, where the return type is Base and the object returned is Derived,
the extra properties of the derived object are sliced away. Only the Animal part
is returned.
(*To avoid this, we use pointers: run-time polymorphism.*)


================================================
FILE: notes/set_pq.md
================================================
## Set vs Priority Queue

### Priority Queue
- Only gives access to one element in sorted order: the highest priority 
element. When we remove it, we get the next highest priority element.
- It is backed by a heap (implemented by a vector)
- In a min-heap `P < L` and `P < R` (Parent, Left, Right). (`std::priority_queue`
is a max-heap by default, so there `P > L` and `P > R`.)
- The highest priority element is at top of tree (or front of vector)
- O(1) access to top element
- Deletion is `O(logn)` (We replace the top of tree with the extreme element
and then perform swapping to maintain heap property)
- Insertion is `O(logn)` (We put new element at the new extreme and then perform
swapping to maintain heap property)
- One point to note, is that operations in PQ involve a lot of swapping of 
elements.


### Set
- A set allows you full access in sorted order.
- We can do: find two elements somewhere in the middle of the set, then 
traverse in order from one to the other.
- Insertion of any element is `O(log n)`, with a larger constant factor than a PQ.
- Many more operations (lower_bound, upper_bound, element lookup, iteration, etc.).
- Backed by self balancing BSTs.
- In a binary tree `L < P < R`.
- Insert and erase operations slightly slower than PQ because `std::set` makes 
many memory allocations. Every element of `std::set` is stored at its own 
allocation.
- Good thing is it only involves pointer swapping. 

================================================
FILE: notes/smart_pointers.md
================================================
## Smart Pointers

- Unique Pointers
- Shared Pointers
- Weak Pointers

Usage syntax is the same as for raw pointers: thanks to operator overloading,
usage remains unchanged.
```cpp
std::shared_ptr<string> p = std::make_shared<string>("Hello");
auto q = p;
p = nullptr;
if (q != nullptr) {
    std::cout << q->length() << *q << '\n';
}
```

```cpp
{
    T* ptr = new T;
    // ...
    delete ptr;
    // It is the programmer's responsibility to delete this pointer after
    // usage, otherwise there will be a memory leak.
}

// minimal sketch of unique_ptr:
template<class T>
class unique_ptr {
    T* p_ = nullptr;
public:
    ~unique_ptr() {
        delete p_;    // deletion done automatically upon destruction
    }
};
```

- A raw pointer `T*` is copyable. If I copy it, which of us now has ownership?
Who holds the responsibility for cleaning up? Both can't.

- A unique pointer is not copyable, only movable. When moved from A to B,
the move constructor nulls out the source pointer (maintaining unique ownership).

```cpp
// unique pointer is always a template of two parameters.
// second parameter is defaulted to std::default_delete<T>

template<class T, class Deleter = std::default_delete<T>>
class unique_ptr {
    T* p_ = nullptr;
    Deleter d_;

    ~unique_ptr() {
        if (p_) d_(p_); // call the deleter on this pointer
    }
};

template<class T>
struct default_delete {
    void operator()(T* p) const {
        delete p;
    }
};

// now we can use this to do some nice things

struct FileCloser {
    void operator()(FILE* fp) const {
        assert(fp != nullptr);
        fclose(fp);   // instead of delete we call fclose
    }
};

FILE *fp = fopen("input.txt", "r");
std::unique_ptr<FILE, FileCloser> uptr(fp);
```

#### Rule of thumb for smart pointers
- Treat smart pointers just like raw pointer types
    - Pass by value!
    - Return by value (of course)!
    - Pass by reference only if the function may modify the pointer itself
- A function taking a `unique_ptr` by value signals transfer of ownership

```cpp
#include <iostream>
#include <memory>

class MyClass {
public:
    MyClass() { std::cout << "MyClass constructor\n"; }
    ~MyClass() { std::cout << "MyClass destructor\n"; }
    void show() { std::cout << "Hello from MyClass\n"; }
};

// Function that takes unique_ptr by value (transfers ownership)
void takeOwnership(std::unique_ptr<MyClass> ptr) {
    std::cout << "Taking ownership\n";
    ptr->show();
}

int main() {
    std::unique_ptr<MyClass> myPtr = std::make_unique<MyClass>();

    // Pass unique_ptr by value to the function, transferring ownership
    takeOwnership(std::move(myPtr)); // myPtr is moved

    // At this point, myPtr is no longer valid
    if (!myPtr) {
        std::cout << "myPtr is now null\n";
    }

    return 0;
}
```
#### Shared Pointer
- syntax similar to the unique_ptr
- It expresses shared ownership, via reference counting.

![](../assets/sharedPtr.png)
![](../assets/class.png)

`F`, `V` are base classes. `T` is a derived class. Pointers of base class pointing
to object of derived class. Both will be pointing to a different offset in the
heap allocated object.

![](../assets/sharedPtr2.png)








================================================
FILE: notes/templates.md
================================================
## C++ Template

A template is not a thing: it's a recipe for making things.

### Function Templates
These are the recipes for making functions.
```cpp
template<class T>
T const& min(T const& a, T const& b) {
    return (a < b) ? a : b;
}

template<class RandomIt, class Compare>
void sort(RandomIt first, RandomIt last, Compare comp);
```
### Class Templates
These are the recipes for making classes
```cpp
template<class T, size_t N>
struct array {
    ...
};
```
### Alias Templates (C++11)
These are recipes for making type aliases.
```cpp
template<class Key, class Val>
using my_map = map<Key, Val, greater<Key>>;

my_map<std::string, int> msi;
```

*If some `if/else` decision can be taken at compile time, then there
is no branching at run time, which is pretty efficient. 
Hence try to take more and more decisions at compile time.*

We put the `recipe` in the header files and include these header files
in all those source files where we need to instantiate it.
The compiler needs to see the recipe for making the actual things.

These template definitions (recipe) are treated as `inline`.

`inline`: inline function and variables can have multiple definitions 
across different translation units (All definitions of the inline function must be identical across all translation units).

In simple words, it means that we can place declaration and definition
inside the header file and then include this header file in different
source files. This way we can have multiple definitions but each
definition is identical.

#### Concepts
We can use them to put constraints on generic code.
Because templates are just recipes, they are only instantiated when someone calls
them, so the compiler will only generate the actual thing upon use.

This is useful, for example, when a templated function calls `push_back`:
a concept can reject containers like `set` that don't provide it.
```cpp
template<typename Coll>
concept HasPushBack = requires (Coll c, typename Coll::value_type v) {
    c.push_back(v);
};
// this just checks that would calling push_back be valid or not?
```
![](../assets/concepts.png)


#### Variadic Templates

```cpp
#include <iostream>

// Base case: This function is called when there's only one argument left
template <typename T>
T sum(T value) {
    return value;
}

// Variadic template: This function is called with two or more arguments
template <typename T, typename... Args>
T sum(T first, Args... args) {
    return first + sum(args...);  // Recursively call sum with the remaining arguments
}

int main() {
    std::cout << "Sum of 1, 2, 3, 4, 5: " << sum(1, 2, 3, 4, 5) << std::endl;
    std::cout << "Sum of 1.5, 2.5, 3.5: " << sum(1.5, 2.5, 3.5) << std::endl;
    std::cout << "Sum of 10: " << sum(10) << std::endl;  // Single argument case
    return 0;
}
```

### Compile time prime checker

Note that compilers impose a limit on the maximum depth of recursive template instantiation.


```cpp
#include <iostream>

// Base case: Primary template to check divisibility
template<int num, int divisor>
struct IsPrimeHelper {
    static constexpr bool value = (num % divisor != 0) && IsPrimeHelper<num, divisor - 1>::value;
};

// Specialization for the base case when divisor reaches 1
template<int num>
struct IsPrimeHelper<num, 1> {
    static constexpr bool value = true;
};

// Specialization to handle numbers less than 2 (not prime)
template<>
struct IsPrimeHelper<1, 1> {
    static constexpr bool value = false;
};

// Main template to check if a number is prime
template<int num>
struct IsPrime {
    // The helper is instantiated with num and num / 2 as the starting divisor
    static constexpr bool value = (num > 1) && IsPrimeHelper<num, num / 2>::value;
};

int main() {
    // Check if 29 is prime
    constexpr int number1 = 29;
    constexpr bool isNumber1Prime = IsPrime<number1>::value;
    std::cout << number1 << (isNumber1Prime ? " is prime." : " is not prime.") << std::endl;

    // Check if 10 is prime
    constexpr int number2 = 10;
    constexpr bool isNumber2Prime = IsPrime<number2>::value;
    std::cout << number2 << (isNumber2Prime ? " is prime." : " is not prime.") << std::endl;

    return 0;
}
```

```cpp
#include <iostream>

// Helper function to check for divisibility at compile time
constexpr bool isDivisible(int num, int divisor) {
    return (num % divisor == 0);
}

// Recursive constexpr function to check if a number is prime
constexpr bool isPrimeHelper(int num, int divisor) {
    // Base case: If divisor is 1, it's prime
    if (divisor == 1) return true;
    // If the number is divisible by the current divisor, it's not prime
    if (isDivisible(num, divisor)) return false;
    // Recursively check for next divisor
    return isPrimeHelper(num, divisor - 1);
}

// Main constexpr function to check primality
constexpr bool isPrime(int num) {
    // Handle special cases for numbers less than 2
    return (num > 1) && isPrimeHelper(num, num / 2);
}

int main() {
    // Test the compile-time prime checker
    constexpr int number1 = 29;
    constexpr int number2 = 10;
    
    // These conditions are evaluated at compile time
    if constexpr (isPrime(number1)) {
        std::cout << number1 << " is prime." << std::endl;
    } else {
        std::cout << number1 << " is not prime." << std::endl;
    }

    if constexpr (isPrime(number2)) {
        std::cout << number2 << " is prime." << std::endl;
    } else {
        std::cout << number2 << " is not prime." << std::endl;
    }

    return 0;
}
```

================================================
FILE: notes/tmux.md
================================================
## Tmux cheat sheet

By default the prefix key is `Ctrl + B`; I have changed it to `Ctrl + Space`.

Access my tmux config at [link](https://github.com/Shivam5022/.dotfiles/blob/main/tmux.conf)

Some of the commands below are as per my configuration, which is different from the default ones.

### Sessions

1.  Start a new session

            tmux new -s <session-name>

2.  Attach to a session (from outside tmux)

            tmux attach -t <session-name>
            tmux a

3.  Detach from a session

            Press <PREFIX>, then d

4.  List sessions

            tmux ls

### Windows

We can have windows inside a session

1.  Create a new window

            Press <PREFIX>, then c

2.  Rename the current window:

            Press <PREFIX>, then ,

3.  Kill a window / session

            Press Ctrl+d

4.  Switch between sessions and windows

            Press <PREFIX>, then w
            Press <PREFIX>, then <window number>
            Press <PREFIX>, then p (previous window)
            Press <PREFIX>, then n (next window)

### Panes

In a window we can have multiple panes:

1.  Split horizontal

            Press <PREFIX>, then |

2.  Split vertically

            Press <PREFIX>, then _

3.  Switch between panes

            Press <PREFIX>, then <arrow keys>

### More

1.  Rename session

            tmux rename-session -t <old-name> <new-name>

2.  Kill all sessions

            tmux kill-server


================================================
FILE: notes/vim.md
================================================
## NeoVim

**Update:** I have moved to [Helix](https://helix-editor.com). The config is available in the same repo as below. 
(Personally I found it better than nvim, because it doesn't require much configuration. It simply works.)

My neovim configuration is available at [this link](https://github.com/Shivam5022/.dotfiles)

Some resources to get started with Modal Editing:

- The Primagen Youtube Playlist [Link](https://youtube.com/playlist?list=PLm323Lc7iSW_wuxqmKx_xxNtJC_hJbQ7R&si=cu_8_omQjZSTbiL7)
- Vim motions tutorial [Link](https://youtu.be/IiwGbcd8S7I?si=xO5xlPMpo-Vn5hrA)
- NeoVim configuration setup [Link](https://youtu.be/6pAG3BHurdM?si=2lm02xVFhozGPGFF)
- Kickstart.nvim [Link](https://youtu.be/m8C0Cq9Uv9o?si=Bz17f3KxKFoxVaQc)


================================================
FILE: notes/virtual_functions.md
================================================
## Virtual Functions and VTables

Must read C++ FAQ Article: [Link](https://isocpp.org/wiki/faq/virtual-functions)

- Used when we want to call a member function of a derived class through a
pointer of the base class type.

```cpp
#include <iostream>

class Base {
    public:
        Base() {
            std::cout << "Base Constructed" << std::endl;
        }
        ~Base() {
            std::cout << "Base Destructed" << std::endl;
        }
        void func() {
            std::cout << "Base member function" << std::endl;
        }
};

class Derived : public Base {
    public:
        Derived() {
            std::cout << "Derived Constructed" << std::endl;
        }
        ~Derived() {
            std::cout << "Derived Destructed" << std::endl;
        }
        void func() {
            std::cout << "Derived member function" << std::endl;
        }
};

int main () {
    Derived instance;
    instance.func();

    // Base Constructed
    // Derived Constructed
    // Derived member function     ---> derived will be printed
    // Derived Destructed
    // Base Destructed


    Base* ptr = new Derived;
    ptr->func();
    delete ptr;

    // Base Constructed
    // Derived Constructed
    // Base member function     ---> base will be printed
    // Base Destructed

    // 2 things to note: the Base member function was called, and Derived was
    // not destructed. (Deleting through a Base* without a virtual destructor
    // is undefined behaviour; in practice only ~Base runs here.)

}
```

In the above example, if we want the derived `func` to be called through the base
pointer, we must mark the base function as `virtual` and override it in the
derived class.

```cpp
virtual void func() {};     // in base class

void func() override {};    // in derived class

ptr->Base::func();          // to explicitly call base member function even after 
                            // declaring it virtual.
``` 

#### Both the destructors should be called!

To ensure this, mark the destructor of the base class as `virtual`. This
guarantees that the destructor of the derived class also runs when the object
is destroyed through a base-class pointer.

`virtual ~Base() {}`

### How can C++ achieve dynamic binding yet also static typing?

When you have a pointer to an object, the object may actually be of 
a class that is derived from the class of the pointer (e.g., a 
Vehicle* that is actually pointing to a Car object; this is called 
“polymorphism”). Thus there are two types: the (static) type of the 
pointer (Vehicle, in this case), and the (dynamic) type of the 
pointed-to object (Car, in this case).

Static typing means that the legality of a member function 
invocation is checked at the earliest possible moment: by the 
compiler at compile time. The compiler uses the static type of the 
pointer to determine whether the member function invocation is 
legal. If the type of the pointer can handle the member function, 
certainly the pointed-to object can handle it as well. E.g., if 
Vehicle has a certain member function, certainly Car also has that 
member function since Car is a kind-of Vehicle.

Dynamic binding means that the address of the code in a member 
function invocation is determined at the last possible moment: 
based on the dynamic type of the object at run time. It is called 
“dynamic binding” because the binding to the code that actually 
gets called is accomplished dynamically (at run time). Dynamic 
binding is a result of virtual functions.

### Virtual Table (vTable) - Supports Dynamic Dispatch

To infer which function will be called, when we have some scenario like:

```cpp
base* ptr = &derived;
ptr->function();
```

- If `function()` is not virtual:
    - we will have `early binding` done during compile time only. The function 
    corresponding to the pointer type (here `base`) will be called.
- If `function()` is virtual:
    - we will have `late binding` done during run time. We *use the `vtable`
    of the `derived class`* to fetch the appropriate function. If this function
    is overridden in the derived class, the vtable points to the overridden
    function. Otherwise, the vtable points to the base-class function (since
    all base-class functions are inherited by the derived class).
    - Hence:
        - If overridden: call the overridden function of derived class.
        - Else call the function of base class.

[PDF version](../assets/Vtables.pdf)

![](../assets/vtables.png)

## Virtual Functions in Constructor and Destructor

![](../assets/constructor.png)

When the constructor of the base class is being invoked, the derived part has
not yet been constructed, so the base version of the function is called.

The same holds for the destructor: the derived part has already been destroyed,
so again the base version of the function is called.

Therefore, avoid doing fancy things inside constructors/destructors that
involve invoking virtual functions.
