[
  {
    "path": ".gitignore",
    "content": ".DS_Store\n.vscode/"
  },
  {
    "path": "README.md",
    "content": "# Shivam's Knowledgebase\n\n## CS Fundamentals\n\n### Operating Systems\n\n- Operating Systems: A Linux Kernel-Oriented Perspective by Prof. Smruti Sarangi:\n  [Book Link](https://www.cse.iitd.ac.in/~srsarangi/osbook/index.html)\n- OS lectures by Prof. Sorav Bansal: [Youtube Link](https://www.youtube.com/playlist?list=PLf3ZkSCyj1tdCS2oCYACXO6x-VKpDIMB6)\n\n### Caches\n\n- YouTube playlist by Prof. Harry Porter: [Link](https://www.youtube.com/playlist?list=PLbtzT1TYeoMgJ4NcWFuXpnF24fsiaOdGq)\n- Cache Coherence: See relevant videos [here](https://www.youtube.com/watch?v=ISaYWm8T8n4&list=PLUl4u3cNGP62WVs95MNq3dQBqY2vGOtQ2&index=170)\n- MIT Notes: [Intro](https://ocw.mit.edu/courses/6-004-computation-structures-spring-2017/pages/c14/c14s1/#17), [MESI](https://ocw.mit.edu/courses/6-004-computation-structures-spring-2017/pages/c21/c21s1/#18)\n\n### Systems Programming\n\n- CS 361 by Prof. Chris Kanich: [YT Link](https://www.youtube.com/playlist?list=PLhy9gU5W1fvUND_5mdpbNVHC1WCIaABbP)\n- Some Latency Number: [Notes](notes/latency_numbers.md)\n- To be added\n\n### OS\n\n- Atomic Instructions: [Notes](notes/atomic_instructions.md)\n- Memory Reordering: [Notes](notes/memory_reordering.md)\n- Padding and Packing (Aligned Memory Access): [Notes](notes/padding_packing.md)\n- Buffer Overflow Attacks: [Notes](notes/buffer_overflow.md)\n- OS Boot Process: [Notes](notes/os_booting.md)\n- From a Program to Process: [Notes](notes/program_to_process.md)\n- Stack Memory Management: [Link](https://organicprogrammer.com/2020/08/19/stack-frame/)\n- To be added\n\n### Networking\n\n- From NIC to User Processes: [Notes](notes/packet_handling.md)\n- How The Kernel Handles A TCP Connection: [Notes](notes/linux_tcp.md)\n\n### Some Interesting YT Videos\n\n- Revise OS Memory Management: [Link](https://www.youtube.com/watch?v=7aONIVSXiJ8&t=497s)\n- Why Composition better than Inheritance: [Link](https://www.youtube.com/watch?v=tXFqS31ZOFM&list=PLE28375D4AC946CC3&index=24)\n- 
`mmap` in database system: [Link](https://www.youtube.com/watch?v=1BRGU_AS25c)\n- How processes get more memory: [Link](https://www.youtube.com/watch?v=XV5sRaSVtXQ)\n- `mmap` for File Mapping: [Link](https://www.youtube.com/watch?v=m7E9piHcfr4)\n- `mmap` for IPC: [Link](https://www.youtube.com/watch?v=rPV6b8BUwxM)\n- From Silicon to Applications: [Link](https://youtu.be/5f3NJnvnk7k?si=zVW5JZbXZz8X74XI)\n\n## C++\n\n### General Concepts\n\n- Move Semantics: [Notes](notes/move_semantics.md)\n- Casting: [Notes](notes/casting.md)\n- Const, const_cast<>, constexpr, consteval: [Notes](notes/const_constexpr.md)\n- C++ Coding Practices: [Link](https://micro-os-plus.github.io/develop/sutter-101/)\n- Lambdas: [Notes](notes/lambdas.md)\n- Smart Pointers: [Notes](notes/smart_pointers.md)\n- Classes and RAII: [Notes](notes/RAII.md)\n- Templates: [Notes](notes/templates.md)\n- Virtual Functions and vTables: [Notes](notes/virtual_functions.md)\n- HTTP using libcurl: [Notes](notes/http.md)\n- More C++ Study Notes: [Link](https://encelo.github.io/notes.html)\n- Allocators: [Notes](notes/allocators.md)\n- Placement New: [Notes](notes/placement_new.md)\n- Exceptions (throw, try/catch): [Notes](notes/exceptions.md)\n- `i++` vs `++i` overloading: [Notes](notes/pre-post-increment.md)\n- CRTP: [Notes](notes/CRTP.md)\n- To be added\n\n### C++ Performance\n\n- Keep in Mind: [Notes](notes/performance.md)\n- Return Value Optimisation: [Notes](notes/rvo.md)\n- Template Metaprogramming: [Notes](notes/metaprogramming.md)\n- Set vs PQ: [Notes](notes/set_pq.md)\n- Function Inlining: [Notes](notes/function_inlining.md)\n\n### C++ Implementation\n\n- Unique Pointer: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/unique_pointer.hpp)\n- Shared Pointer: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/shared_pointer.hpp)\n- Thread Pool: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/thread_pool.hpp)\n- HashMap: 
[Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/hashmap.hpp)\n- LRU Cache: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/LRU_cache.hpp)\n- Scope Timer: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/timer.hpp)\n- Memory Pool: [Code](https://github.com/Shivam5022/CPP-Internals/blob/main/includes/memory_pool.hpp)\n\n## Development\n\n### Work Environment\n\n- Get started with Vim/Nvim (Helix actually): [Notes](notes/vim.md)\n- Git command sheet: [Notes](notes/git-sheet.md)\n- Tmux cheat sheet: [Notes](notes/tmux.md)\n\n### Docker\n\n- Basic Introduction by Learn Linux TV: [YT Link](https://www.youtube.com/playlist?list=PLT98CRl2KxKECHltRib03tG8pyKEzwf9t)\n\n### CLI utilities\n\n- Cheat.sh: [Notes](/notes/cheat.md)\n- Find command linux: [Notes](/notes/find.md)\n"
  },
  {
    "path": "notes/CRTP.md",
    "content": "## Static Polymorphism Using CRTP\n\nStatic polymorphism is a type of polymorphism that is resolved at \ncompile time. It is primarily achieved through templates and CRTP \n(Curiously Recurring Template Pattern) in C++. In contrast to \ndynamic polymorphism (which uses virtual functions and \ninheritance), static polymorphism doesn’t rely on runtime checks \nlike vtables but instead utilizes compile-time mechanisms.\n\n```cpp\n#include <iostream>\n\n// Base class template that uses CRTP\ntemplate <typename Derived>\nclass Shape {\npublic:\n    // Static polymorphism: Calls the derived class's area() method at compile-time\n    void draw() const {\n        // Calling the derived class's area() method\n        static_cast<const Derived*>(this)->draw();\n    }\n\n    // Common interface for all derived classes\n    double getArea() const {\n        // Calling the derived class's area() method using CRTP\n        return static_cast<const Derived*>(this)->area();\n    }\n};\n\n// Derived class for Circle\nclass Circle : public Shape<Circle> {\npublic:\n    Circle(double radius) : radius_(radius) {}\n\n    // Method specific to Circle\n    void draw() const {\n        std::cout << \"Drawing Circle with radius: \" << radius_ << std::endl;\n    }\n\n    // Implementation of area() for Circle\n    double area() const {\n        return 3.14159 * radius_ * radius_;\n    }\n\nprivate:\n    double radius_;\n};\n\n// Derived class for Square\nclass Square : public Shape<Square> {\npublic:\n    Square(double side) : side_(side) {}\n\n    // Method specific to Square\n    void draw() const {\n        std::cout << \"Drawing Square with side: \" << side_ << std::endl;\n    }\n\n    // Implementation of area() for Square\n    double area() const {\n        return side_ * side_;\n    }\n\nprivate:\n    double side_;\n};\n\nint main() {\n    Circle circle(5.0);\n    Square square(4.0);\n\n    // Static polymorphism: Compile-time resolution\n    circle.draw();  // Calls 
Circle's draw()\n    square.draw();  // Calls Square's draw()\n\n    std::cout << \"Circle Area: \" << circle.getArea() << std::endl;\n    std::cout << \"Square Area: \" << square.getArea() << std::endl;\n\n    return 0;\n}\n```\n> The Shape class in this static polymorphism example acts as an \ninterface-like structure, but at compile-time. Unlike traditional \ndynamic polymorphism (with a virtual base class), Shape is not used \nfor runtime polymorphism or to manage different types via base \nclass pointers. Instead, it ensures that each derived class (like \nCircle or Square) implements certain methods such as draw() and area().\n\n> It provides a common template for other shapes (like Circle or Square) to \nfollow, forcing them to implement specific methods (draw() and area()).\n\n\n\n"
  },
  {
    "path": "notes/RAII.md",
    "content": "## More on Classes\n\nCompiler generates default functions: Constructor, Copy Constructor, Copy \nAssignment `only if any variant of them are not present`.\n\n### Disallowing Functions\n`f() = delete` : It will prevent the compiler from generating it.\n\n### Private Destructor\nIt means the object cannot be stored in stack. Because when the stack unwinds,\nthe destructor of the objects are called but this destuctor is private.\n\nAlso it can only be destroyed by a factory member function or a friend function.\n*Yes, friends are worse than enemies*.\n\n```cpp\nclass MyClass {\nprivate:\n    ~MyClass() {\n        std::cout << \"Private Destructor Called\" << std::endl;\n    }\n};\n\nint main() {\n    MyClass obj;  // Error: Destructor is private, stack object can't be destroyed\n    MyClass* ptr = new MyClass(); // Yes can be created like this\n    delete ptr;   // Error: Destructor is private, can't delete heap object\n}\n```\n\n**How to destroy it:**\n\n1. Private Constuctor\n```cpp\nclass HeapOnly {\npublic:\n    static HeapOnly* createInstance() {\n        return new HeapOnly();\n    }\n\n    void destroyInstance() {\n        delete this;  // Allows deletion, but only through this function\n    }\n\nprivate:\n    HeapOnly() { std::cout << \"HeapOnly Constructor\" << std::endl; }\n    ~HeapOnly() { std::cout << \"HeapOnly Destructor\" << std::endl; }\n};\n\nint main() {\n    // HeapOnly obj;  // Error: Constructor is private (can't allocate on stack)\n    HeapOnly* obj = HeapOnly::createInstance();\n    obj->destroyInstance();  // Properly deletes the object\n}\n```\n\n2. 
Public Constructor:\n\n```cpp\n#include <iostream>\n\nclass MyClass {\npublic:\n    // Public constructor\n    MyClass() {\n        std::cout << \"Constructor: MyClass object created!\" << std::endl;\n    }\n\n    // Method to safely delete the object\n    void destroyInstance() {\n        delete this;  // Allows controlled deletion of the object\n    }\n\nprivate:\n    // Private destructor\n    ~MyClass() {\n        std::cout << \"Destructor: MyClass object destroyed!\" << std::endl;\n    }\n};\n\nint main() {\n    // MyClass obj; // ERROR: destructor is private\n    // Creating the object dynamically on the heap\n    MyClass* obj = new MyClass();\n\n    // Deleting the object through the controlled method\n    obj->destroyInstance();\n\n    // obj->~MyClass();    // ERROR: Destructor is private and cannot be called directly\n    // delete obj;         // ERROR: Cannot delete directly, destructor is private\n\n    return 0;\n}\n```\n\n\n## RAII: Resource Acquisition is Initialisation\n\nC++ program can have different type of resources:\n- Allocated memory on heap\n- FILE handles (fopen, fclose)\n- Mutex Locks\n- C++ threads\n\nSome of these resources are `unique` like mutex lock and some can be duplicated\nlike heap allocations and file handlers (they can be `duped`).\n> Some actions needs to be taken by the program in order to free these resources.\n\nTry to do cleanups in the destructor of the object. Since destructor is always\ncalled whenever the object goes out of scope: we don't need to release resources\nexplicitly.\n\n```cpp\nclass NaiveVector {\n    int* arr;\n    size_t size;\n\n    // assume we have released resource in destructor\n}\n\n{\n    NaiveVector v;\n    v.push_back(1);\n    {\n        NaiveVector w = v;  // this would also copy the pointer int* arr\n    } // here int * arr would be released since w is now out of scope\n\n    std::cout << v[0] << '\\n';  // this is invalid now. since arr is deleted\n\n}  // double delete here. 
arr is already freed, we will free it again.\n\n// the problem above was, NaiveVector w = v, will copy all the member variables\n// as it is, if we don't define our custom copy constructor.\n```\n#### Adding copy constructor\nThe destructor was responsible for freeing resources to avoid any leaks. The\ncopy constructor is responsible for duplicating resources to avoid double frees.\n\n![](../assets/CC.png)\n\n**Initialisation vs Assignment**\n```cpp\n// 1. This is initialisation (construction). Calls copy constructor\nNaiveVector w = v; \n\n// 2. This is assignment to existing object w. Calls assignment operator\nNaiveVector w;\nw = v;\n```\n\n![](../assets/RAII.png)\n\nIn C++, the handling of `try-catch` blocks during an exception involves manipulating the **call stack**. Here’s how it works step by step:\n\n### 1. **Normal Execution and Call Stack Behavior**\n- Under normal execution, each function call pushes a new stack frame onto the call stack.\n- This stack frame holds local variables, return addresses, and other function context.\n- When a function completes, its stack frame is popped off, and control returns to the calling function.\n\n### 2. **When an Exception is Thrown**\nWhen an exception is thrown inside a `try` block:\n- The program immediately **stops executing** the normal flow of code and begins **unwinding the call stack**.\n- This process is known as **stack unwinding**.\n\n### 3. **Stack Unwinding**\nDuring stack unwinding:\n- The function that threw the exception doesn’t return normally. 
Instead, the runtime looks for a `catch` block that can handle the exception.\n- As the runtime searches for the appropriate `catch`, it starts **popping stack frames** off the call stack, effectively **exiting functions** in reverse order until a suitable handler is found.\n- If any objects are going out of scope as part of this unwinding (i.e., objects with automatic storage duration in the stack frames), their destructors are called to properly clean up resources. This ensures that **RAII** (Resource Acquisition Is Initialization) is respected, and resources such as memory or file handles are properly released.\n\n### 4. **Finding the Appropriate `catch` Block**\n- The runtime checks each function in the call stack, starting with the function where the exception was thrown, to see if there is a `catch` block that matches the exception type.\n- If a matching `catch` block is found, control is transferred to it, and the stack unwinding stops.\n- If no matching `catch` block is found in the current function, the stack unwinding continues to the next function in the call stack.\n\n### 5. 
**Uncaught Exceptions**\n- If the runtime unwinds all the way through the call stack without finding a matching `catch` block, the program terminates.\n- In this case, the runtime will call `std::terminate`, which by default ends the program, often producing an error message like \"terminate called after throwing an instance of...\".\n\n### Example:\n\n```cpp\n#include <iostream>\n#include <stdexcept>\n\nvoid funcC() {\n    std::cout << \"In funcC\\n\";\n    throw std::runtime_error(\"Exception in funcC\");\n}\n\nvoid funcB() {\n    std::cout << \"In funcB\\n\";\n    funcC();  // Call to funcC, which will throw an exception\n    std::cout << \"In funcB after exception\\n\";  // won't be printed\n}\n\nvoid funcA() {\n    std::cout << \"In funcA\\n\";\n    try {\n        funcB();  // Call to funcB, which will call funcC and eventually throw an exception\n        std::cout << \"In func A return\\n\";  // won't be printed\n    } catch (const std::exception& e) {\n        std::cout << \"Caught exception: \" << e.what() << '\\n';\n    }\n    std::cout << \"Handling Done\\n\";\n}\n\nint main() {\n    funcA();  // Start the chain of function calls\n    return 0;\n}\n```\n\n### Output:\n```plaintext\nIn funcA\nIn funcB\nIn funcC\nCaught exception: Exception in funcC\nHandling Done\n```\n\n### What Happens in the Call Stack:\n1. **`main`** calls **`funcA`**, which adds a stack frame for `funcA` to the call stack.\n2. **`funcA`** calls **`funcB`**, which adds another stack frame for `funcB` to the call stack.\n3. **`funcB`** calls **`funcC`**, which adds yet another stack frame for `funcC` to the call stack.\n4. **`funcC`** throws a `std::runtime_error`. The runtime starts stack unwinding.\n   - The stack frame for `funcC` is popped off the stack, and the destructor of any local variables in `funcC` (if any) are called.\n5. **`funcB`** doesn’t have a `catch` block, so its stack frame is also popped off the stack, and local objects (if any) are destroyed.\n6. 
Control reaches **`funcA`**, which has a matching `catch` block for `std::exception`. The exception is caught, and stack unwinding stops.\n7. The program continues execution in the `catch` block of `funcA`.\n\n\n- **RAII**: Objects are properly destroyed even during stack unwinding, as destructors are automatically invoked.\n\n- If a matching `catch` block is found, the exception is handled; otherwise, the program terminates.\n\n\n### The Rule of Zero\nIf your class does not directly manage any resource, but merely use library \ncomponents such as vector and string, then write NO special member function.\n\nLet the compiler generate all of them default:\n- Default destructor\n- Default copy constructor\n- Default copy assignment operator \n\n```cpp\n#include <iostream>\n#include <cstring>\n\nclass MyString {\nprivate:\n    char* data; // Dynamically allocated memory to hold a string\npublic:\n    // 1. Default Constructor\n    MyString(const char* str = \"\") {\n        data = new char[std::strlen(str) + 1];\n        std::strcpy(data, str);\n        std::cout << \"Constructor called\\n\";\n    }\n\n    // 2. Destructor\n    ~MyString() {\n        delete[] data;\n        std::cout << \"Destructor called\\n\";\n    }\n\n    // 3. Copy Constructor\n    MyString(const MyString& other) {\n        data = new char[std::strlen(other.data) + 1];\n        std::strcpy(data, other.data);\n        std::cout << \"Copy Constructor called\\n\";\n    }\n\n    // 4. Copy Assignment Operator\n    MyString& operator=(const MyString& other) {\n        if (this == &other) return *this; // Self-assignment check\n\n        delete[] data; // Release old memory\n        data = new char[std::strlen(other.data) + 1]; // Allocate new memory\n        std::strcpy(data, other.data); // Copy the data\n        std::cout << \"Copy Assignment Operator called\\n\";\n        return *this;\n    }\n\n    // 5. 
Move Constructor\n    MyString(MyString&& other) noexcept : data(other.data) {\n        other.data = nullptr; // Release ownership of the moved-from object\n        std::cout << \"Move Constructor called\\n\";\n    }\n\n    // 6. Move Assignment Operator\n    MyString& operator=(MyString&& other) noexcept {\n        if (this == &other) return *this; // Self-assignment check\n\n        delete[] data; // Release old memory\n        data = other.data; // Steal the data pointer\n        other.data = nullptr; // Release ownership of the moved-from object\n        std::cout << \"Move Assignment Operator called\\n\";\n        return *this;\n    }\n\n    // Helper method to print the string\n    void print() const {\n        std::cout << \"String: \" << (data ? data : \"null\") << '\\n';\n    }\n};\n\nint main() {\n    MyString s1(\"Hello\");\n    MyString s2 = s1; // Invokes Copy Constructor\n    MyString s3;\n    s3 = s1; // Invokes Copy Assignment Operator\n\n    MyString s4 = std::move(s1); // Invokes Move Constructor\n    MyString s5;\n    s5 = std::move(s2); // Invokes Move Assignment Operator\n\n    s4.print();\n    s5.print();\n\n    return 0;\n}\n```"
  },
  {
    "path": "notes/allocators.md",
    "content": "## An Allocator is a Handle to a Heap\n\nCppNow Link: [Here](https://www.youtube.com/watch?v=0MdSJsCTRkY)\n\n(Wrong) An allocator object represents a source of memory.\n\n(Correct) An allocator represents a handle to the source of memory.\n\n*Incomplete, haven't covered fully*"
  },
  {
    "path": "notes/atomic_instructions.md",
    "content": "## Atomics\n\nTo implement locks, we need hardware support for atomic instructions. This can't\nalone be done by software.\n\n### CMPXCHG (Compare-And-Exchange/Swap)\nAtomically compares the value in memory to a register. If the values are equal, \nit writes a new value; otherwise, it leaves the memory unchanged and sets flags\nindicating failure.\n\n`CMPXCHG [mem], reg`\n\n```cpp\nbool CAS(int* addr, int expected, int new_val) {\n    if (*addr == expected) {\n        *addr = new_val; // update the memory with new_val\n        return true;     // return success\n    } else {\n        return false;    // return failure\n    }\n}\n```\n\n### XCHG (Exchange)\nAtomically swaps the contents of a register and a memory location.\n\n`XCHG reg, [mem]`\n\n### LOCK Prefix\nIn x86, certain instructions can be prefixed with the LOCK instruction to make \nthem atomic, meaning the instruction will operate atomically on the memory location.\n\nApplies to instructions like ADD, SUB, INC, DEC, XOR, OR, AND, etc. 
\nThese operations can then modify memory atomically.\n\n`LOCK ADD [mem], reg`\n\n### Simple spinlock using CAS\n```cpp\ntypedef int spinlock_t; // 0 = unlocked, 1 = locked\n\nvoid spinlock_init(spinlock_t* lock) {\n    *lock = 0; // Initialize the lock as unlocked\n}\n\nvoid spinlock_acquire(spinlock_t* lock) {\n    while (!CAS(lock, 0, 1)) {\n        // Spin until we successfully acquire the lock\n    }\n}\n\nvoid spinlock_release(spinlock_t* lock) {\n    *lock = 0; // Release the lock by setting it to unlocked\n}\n```\n```assembly\nspinlock_acquire:\n    mov eax, 0          ; expected value (unlocked)\nacquire_retry:\n    mov ebx, 1          ; new value (locked)\n    lock cmpxchg [lock], ebx ; compare and swap\n    jne acquire_retry   ; if lock was not acquired, retry\n    ret\n\nspinlock_release:\n    mov [lock], 0       ; set lock to unlocked\n    ret\n```\n\nMore on different types of spinlocks can be found \n[here](https://github.com/Shivam5022/Spin-Locks-and-Contention) in my second \nassignment of COL818.\n\n### How hardware supports atomic instructions:\n\n<mark>The simplest CAS implementations (and the easiest mental model) will simply freeze the local cache coherence protocol state machine after the load part of the CAS brings the relevant cache line into the nearest (e.g. L1) cache in exclusive mode, and will unfreeze it after the (optional) store completes. This, by definition, makes the CAS operation as a whole atomic with relation to any other participant in the cache coherence protocol. </mark>\n\n"
  },
  {
    "path": "notes/buffer_overflow.md",
    "content": "## Buffer Overflow\n\n[Video 1](https://www.youtube.com/watch?v=scaz_pofc7A&list=PLEJxKK7AcSEGPOCFtQTJhOElU44J_JAun&index=33)\n\n[Video 2](https://www.youtube.com/watch?v=o3pcY-bRRgs&list=PLEJxKK7AcSEGPOCFtQTJhOElU44J_JAun&index=34&pp=iAQB)\n\n\n![](../assets/stack.svg)\n\n[Pdf Notes Here](../assets/buffer_overflow.pdf)\n"
  },
  {
    "path": "notes/casting.md",
    "content": "\n### Explicit Keyword\n\nBy default, C++ allows implicit conversions for single-argument constructors. \nThis means that if you have a constructor with one parameter, the compiler \nwill automatically convert objects of that parameter’s type into objects of \nyour class if needed.\n\n- The explicit keyword prevents these implicit conversions.\n\n```cpp\n#include <iostream>\n\nclass MyClass {\npublic:\n    // Constructor with 'explicit'\n    explicit MyClass(int x) {\n        std::cout << \"MyClass constructor called with value: \" << x << std::endl;\n    }\n};\n\nvoid func(MyClass obj) {\n    std::cout << \"In func()\" << std::endl;\n}\n\nint main() {\n    // func(5);  // Error: implicit conversion from int to MyClass is not allowed!\n    func(MyClass(5));  // This works because we explicitly create a MyClass object\n    return 0;\n}\n```\n\n### Casting\n\n#### Static Casting\n\n`static_cast` is used for compile-time type conversions between compatible \ntypes. It performs the conversion at compile time, ensuring type safety in \nmost cases, *but it does not perform runtime checks (unlike dynamic_cast).*\n\n```cpp\n#include <iostream>\n\nint main() {\n    // Basic type conversion\n    float f = 9.5;\n    int i = static_cast<int>(f);  // Converts float to int\n    std::cout << \"int i: \" << i << std::endl;\n\n    // Upcasting (Derived to Base)\n    class Base {};\n    class Derived : public Base {};\n    Derived d;\n    Base* basePtr = static_cast<Base*>(&d);  // Safe upcast (Derived* -> Base*)\n\n    return 0;\n}\n```\n*static_cast performs no runtime checks, so downcasting (casting from base to \nderived) is unsafe unless you’re sure of the object type.*\n\n#### Dynamic Casting\n\n`dynamic_cast` is used for runtime type checking and safe downcasting in \ninheritance hierarchies. 
It is primarily used for casting between base and \nderived class pointers or references when polymorphism is involved (i.e., \nwhen you have a virtual function in the base class).\n\n```cpp\n#include <iostream>\n\nclass Base {\npublic:\n    virtual ~Base() = default;  // Must have at least one virtual function\n};\n\nclass Derived : public Base {};\n\nint main() {\n    Base* basePtr = new Derived();  // Pointer to Base, but actually a Derived object\n\n    // Safe downcast: checks at runtime if basePtr actually points to a Derived object\n    Derived* derivedPtr = dynamic_cast<Derived*>(basePtr);\n    if (derivedPtr) {\n        std::cout << \"Successfully casted to Derived\" << std::endl;\n    } else {\n        std::cout << \"Failed to cast to Derived\" << std::endl;\n    }\n\n    delete basePtr;\n    return 0;\n}\n```\n\n*dynamic_cast only works with pointers or references to polymorphic types (classes with at least one virtual function).*\n\n#### Re-interpret Casting\n\n`reinterpret_cast` is the most dangerous cast, used for low-level type \nreinterpretation. It allows you to treat a block of memory as if it were a \ndifferent type entirely. This is often used for pointer conversions or \ntype-punning.\n\nThis example shows how to use reinterpret_cast to interpret a 32-bit integer \n(std::uint32_t) as an array of bytes. 
This kind of operation can be useful in \nnetworking or binary file I/O, where you need to break a larger value into \nits individual bytes (little-endian or big-endian conversion).\n\n```cpp\n#include <iostream>\n#include <cstdint>  // For uint32_t\n\nvoid printBytes(const std::uint8_t* byteArray, std::size_t size) {\n    for (std::size_t i = 0; i < size; ++i) {\n        std::cout << \"Byte \" << i << \": 0x\" << std::hex << static_cast<int>(byteArray[i]) << std::endl;\n    }\n}\n\nint main() {\n    std::uint32_t value = 0x12345678;  // A 32-bit integer (hexadecimal representation)\n    \n    // Reinterpret the 32-bit integer as a byte array\n    const std::uint8_t* byteArray = reinterpret_cast<const std::uint8_t*>(&value);\n    \n    // Print the individual bytes\n    std::cout << \"Value as bytes:\" << std::endl;\n    printBytes(byteArray, sizeof(value));  // Should print the 4 bytes of the integer\n\n    return 0;\n}\n```\n**Output on a Little-Endian Machine:**\n```\nValue as bytes:\nByte 0: 0x78\nByte 1: 0x56\nByte 2: 0x34\nByte 3: 0x12\n```\n\n#### Const Casting\nRefer to [these notes](./const_constexpr.md)."
  },
  {
    "path": "notes/cheat.md",
    "content": "## Cheat.sh\n\nYou can find important usage info about command line tools like grep, find,\ncurl etc at [cheat.sh](https://cheat.sh).\n\nHere is a very minimal script for getting documentation about tools from\ncommand line:\n\n```sh\n# Function to query cheat.sh and display results with less\ncheat() {\n    if [ -z \"$1\" ]; then\n        echo \"Usage: cheat <topic>\"\n        echo \"Example: cheat grep\"\n        return 1\n    fi\n\n    local topic=\"$1\"\n\n    # Fetch the cheat sheet from cheat.sh\n    echo \"https://cheat.sh/${topic}\"\n    curl -s \"https://cheat.sh/${topic}\" | less -R\n}\n```\n\nAdd it in `.zshrc` and use like: `cheat grep`\n"
  },
  {
    "path": "notes/const_constexpr.md",
    "content": "## Resources:\n1. The below notes are from the CppCon 2021 talk by Rainer Grimm: [Link](https://www.youtube.com/watch?v=tA6LbPyYdco)\n\n\n### const\n- Declare a variable const: means you cannot modify it afterwards.\n\n- const objects:\n\n    - must be initialised.\n    - cannot be modified.\n    - they cant be victim of data races. since they are read only.\n    - can only invoke const member functions.\n\n- `const member functions` of a class cannot change the state of the object.\nThat is, they can't change the value of member variables. \nAlthough they can change the value of objects which dont belong to this class.\n\n- use `mutable` keyword, in case you want a member variable to get modified inside a `const` member function.\n\n```cpp\nint f = 100;\nstruct Widget {\n    int a;\n    mutable int c = 0;\n    Widget(int init) : a(init) {}\n    void test(int& p) const {\n        p++; // valid since `p` doesnt belong to this class\n        f++; // valid since `f` doesnt belong to this class\n        c++; // valid since `c` is mutable\n        const_cast<Widget*>(this)->c++; // another way to increment c, without mutable.\n        // a++; // not valid\n        std::cout << a << '\\n';\n    }\n};\n```\n\n- `const char* const a`: means a is a const pointer to a const char. which means the value of neither the pointer nor the pointee can be altered.\n\n\n### const_cast\n- used to remove `const` or `volatile` from a variable. \n\n- modifying the value of  a `const` object by removing its constness is undefined behaviour.\n\n```cpp\n#include <iostream>\n\n\nint main() {\n    const int a = 10;\n    int* b = const_cast<int*> (&a);\n    *b = 11;\n    std::cout << *b << '\\n'; // prints 11\n    std::cout << a << '\\n'; // prints 10\n\n}\n```\n\n<mark> Modifying an object declared as const after casting away its \nconstness using const_cast results in undefined behavior if the \nobject was truly defined as const in its original context. 
However, \nit’s safe if the object was not originally constant but passed \naround as a const reference or pointer. </mark>\n\n\n```cpp\n#include <iostream>\n#include <vector>\n#include <algorithm>\n#include <cassert>\n\nint main () {\n    // here i is implicitly constant\n    // hence, using const_cast is undefined\n    const int i = 5;\n    // int& j = i;  // error\n    int& j = const_cast<int&>(i);\n    j = 7;\n    std::cout << i << '\\n'; // prints 5\n    std::cout << j << '\\n'; // prints 7\n\n    // int* ptr = &i; // error\n    int* ptr = const_cast<int*> (&i);\n    *ptr = 10;\n    std::cout << i << '\\n'; // still prints 5\n    std::cout << *ptr << '\\n'; // prints 10\n\n    // although same address :)\n    std::cout << ptr << ' ' << &i << '\\n';\n\n\n    // safe usage\n\n    int original = 11; // not inherently constant\n\n    auto change = [](const int& ref) {\n        // ref = 15;  // error\n        int& nonconst = const_cast<int&> (ref);\n        nonconst = 15; // this is applicable\n    };\n\n    change(original);\n\n    std::cout << original << '\\n'; // prints 15 now :)\n\n    // Modifying an object declared as const after casting \n    // away its const-ness using const_cast results in \n    // undefined behavior if the object was truly defined \n    // as const in its original context. \n    // However, it’s safe if the object was not originally \n    // constant but passed around as a const reference \n    // or pointer.\n\n}\n```\n\n### constexpr\n- These expressions can be:\n    - evaluated at the compile time (good optimisation).\n    - they are thread safe.\n- `const` variables are implicitly `constexpr` when initialised with some constant expression.\n- they have potential to run at compile time (*not the guarantee*).\n\n```cpp\n#include <iostream>\n\n// here this gcd function is evaluated at compile time only\nconstexpr int gcd(int a, int b) {\n    return (b == 0) ? 
a : gcd(b, a % b);\n}\n\nint main() {\n    // all the arguments must be known at compile time\n    constexpr int result = gcd(48, 18);  // Compile-time GCD calculation\n    std::cout << \"GCD of 48 and 18 is: \" << result << std::endl;\n    return 0;\n}\n```\n- C++20 supports the `constexpr` containers: std::vectors and std::string. \n    - Meaning, the memory is allocated and released at compile time (Transient Allocation).\n    -   ```cpp\n        #include <iostream>\n        #include <vector>\n        #include <algorithm>\n\n        constexpr int maxElement() {\n            std::vector<int> a {1, 22, 333, 44, 55};\n            a.push_back(412);\n            std::sort(a.begin(), a.end());\n            return a.back();\n        }\n\n        int main() {\n            // compile time (check godbolt assembly with --std=c++20)\n            constexpr int m = maxElement();\n            std::cout << m << '\\n';\n        }\n        ```\n    \n### consteval\n- must run at compile time (*strong guarantee*)\n\n```cpp\n#include <iostream>\n#include <vector>\n#include <algorithm>\n\nint runTime(int a) {\n    return a + 1;\n}\n\nconstexpr int runOrCompileTime(int a) {\n    return a + 1;\n}\n\nconsteval int compileTime(int a) {\n    return a + 1;\n}\n\nint main() {\n    // constexpr int ans1 = runTime(100); // ERROR\n    constexpr int ans2 = runOrCompileTime(100);\n    constexpr int ans3 = compileTime(100);\n\n    int f = 100;\n    int ans4 = runOrCompileTime(f); // Fine: Because it can be evaluated at runtime too!\n    // int ans5 = compileTime(f); // ERROR: consteval must be evaluated at compile time, but `f' is not const\n    // making `f' const solves the above problem\n}\n```\n\n"
  },
  {
    "path": "notes/cpp_question_bank.md",
    "content": "## C++ Questions for HFT SWE\n"
  },
  {
    "path": "notes/exceptions.md",
    "content": "## Exceptions in C++\n\n- Ignoring exception: Leads to core dump\n- What is exception: \n    - Something that gets thrown\n    - Something that gets caught (hopefully)\n- Throwing and catching exceptions is `expensive`.\n\n- If an exception is thrown:\n    - During stack unwinding the program is terminated\n    - Caught by a matching handler\n    - All the intermediate objects in stack are destructed\n    - If no matching handler is found, the functions `std::terminate` is called\n\n- Desctructor throwing exception:\n    - Bad situation\n    - If an exception is thrown while another exception is already being \n    handled, this causes terminate to be called, which ends the program \n    abruptly. This occurs because C++ cannot handle two simultaneous exceptions.\n\t- For example, if an exception is thrown and the stack unwinds to destroy \n    objects in scope, any destructor that throws an exception will clash with \n    the already active exception.\n\n- Handle the exception there itself (for Destructors)\n    ```cpp\n    struct ResourceHandler {\n        ~ResourceHandler() {\n            try {\n                // Code that may throw an exception\n                cleanup();\n            } catch (const std::exception& e) {\n                // Handle or log the exception, but do not rethrow\n                std::cerr << \"Exception in destructor: \" << e.what() << std::endl;\n            } catch (...) 
{\n                std::cerr << \"Unknown exception in destructor\" << std::endl;\n            }\n        }\n        \n        void cleanup() {\n            // Code that might throw an exception\n        }\n    };\n    ```\n\n- Exception Hygiene\n    - Throw by value [*the exception object is allocated in special \n     storage (often heap-backed), which is part of why throwing is expensive*]\n    - Catch by (const) reference\n\n\n**Custom Exception in C++:**\n\n```cpp\n#include <iostream>\n#include <exception>\n#include <string>\n\n// Step 1: Define the custom exception class\nclass MyCustomException : public std::exception {\nprivate:\n    std::string message;  // Custom error message\n\npublic:\n    // Constructor to initialize the error message\n    explicit MyCustomException(const std::string& msg) : message(msg) {}\n\n    // Override the what() function\n    const char* what() const noexcept override {\n        return message.c_str();\n    }\n};\n\n// Function that may throw MyCustomException\nvoid riskyFunction(bool triggerError) {\n    if (triggerError) {\n        throw MyCustomException(\"Something went wrong in riskyFunction!\");\n    }\n    std::cout << \"Function executed successfully.\" << std::endl;\n}\n\nint main() {\n    try {\n        // Step 2: Call the function and trigger the custom exception\n        riskyFunction(true);  // Pass true to trigger the exception\n    }\n    catch (const MyCustomException& e) {  // Step 3: Catch the custom exception\n        std::cerr << \"Caught custom exception: \" << e.what() << std::endl;\n    }\n    return 0;\n}\n```\n\n**Re-throwing:**\n\nInside a catch block, using `throw;` without any argument will \nrethrow the currently caught exception. 
This is often done to \nperform some actions (like logging) and then pass the \nexception up the stack without changing it.\n\n```cpp\n#include <iostream>\n#include <exception>\n\nvoid innerFunction() {\n    throw std::runtime_error(\"Error in innerFunction\");  // Throwing an exception\n}\n\nvoid outerFunction() {\n    try {\n        innerFunction();\n    }\n    catch (const std::exception& e) {\n        std::cerr << \"Caught in outerFunction: \" << e.what() << std::endl;\n        throw;  // Rethrow the same exception to propagate it further\n    }\n}\n\nint main() {\n    try {\n        outerFunction();\n    }\n    catch (const std::exception& e) {\n        std::cerr << \"Caught in main: \" << e.what() << std::endl;\n    }\n    return 0;\n}\n```\n\n> Prefer to keep exceptions as \"Rare\" as you can, meaning for \nserious, uncommon errors\n\n> Resource management should always use RAII\n"
  },
  {
    "path": "notes/find.md",
    "content": "## Find command in linux\n\n- search for files and directories (recursively) based on various criterias\n\n- `find [path] [expression]` \n  \n  - [path]:  The directory where the search will be started\n  \n  - [expression]: Filter criteria\n  \n  - when using `-name -iname` as criteria * is supported for wildcard\n\n- `find /home/user -name \"example.txt\"` find a file with following name in /home/user directory\n\n- `find /home/user -iname \"example.txt\"`: case insensitive search `iname`\n\n- `find /path -type f`: find only files not directories\n\n- `find /path -type d`: find only directories\n\n- `find /path -type l`: find symbolic links\n\n- `find / -size +100M`: find files greater than 100MB\n\n- `find / -size -5k`: find files smaller than 5KB\n\n- `find /home -type f -perm 644`: find files with permission mode\n\n- `find /home -type f -mtime -7`: find files modified in last 7 days\n\n- `find /home -type f -atime -2`: find files accessed in last 2 days\n\n- `find /path -type f -name \"*.log\" -exec rm {} +`\n  \n  - `-exec`: executes the given command\n  \n  - `{}`: placeholder for the attributes\n  \n  - `+`: end of the command\n\n- `find /path -type f -empty`: find empty files (use d for directories)\n\n- `find /home -type f -name \"*.log\" -exec grep -i \"error\" {} +`: searching inside files (find all the log files which contain \"error\") \n\n- `find /var/log -type f -name \"*.log\" | xargs rm`: xargs takes a list and passes each element as arguments to another command\n"
  },
  {
    "path": "notes/function_inlining.md",
    "content": "## Function Inlining\n\nC++ FAQs: [Link](https://isocpp.org/wiki/faq/inline-functions)\n\nAssuming that we already know about the ODR (One Definition Rule) and how \nmarking a function inline helps that, lets discuss function inling from \nperformance POV.\n\n### Why Inlining\n\nWhen the compiler inline-expands a function call, the function’s code gets \ninserted into the caller’s code stream.\n\nWhen a program makes a function call, the instruction pointer (IP) jumps to a \ndifferent memory address, executes the instructions at that location, and then \njumps back to the original location.\n\nThis jumping to a new address can be inefficient because the next instruction \nto be executed may not be cached in the L1-I cache.\n\nIf the function is small, it often makes more sense for it to be inlined in the \ncaller’s code stream. In such cases, there is no jump to an arbitrary location, \nand the L1-I cache remains warm.\n\nAdditionally, compilers are generally better suited to apply optimizations when \nthe code is inlined, compared to optimizing across multiple distinct functions.\n\n\n### Why not always inline\nInlining all function calls can lead to code bloat, increasing the size of the \nexecutable and potentially causing cache thrashing.\n\nConsider a scenario in the hot path: before sending an order to the exchange, we \nperform a sanity check. If there is an error, we call the function logAndDebug, \nwhich handles some bookkeeping internally. In the typical case (the happy path), \nthe order is sent to the exchange.\n\n```cpp\nbool isError = checkOrder(order);\n\nif (isError) {\n    logAndDebug(order);\n} else {\n    sendOrderToExchange(order);\n}\n\n```\n\nHere, `isError` is rarely true, and the happy path is executed most of the time.\n\nIf the function `logAndDebug` were inlined, unnecessary instructions—executed only \nin rare cases—would occupy space in the instruction cache, potentially polluting \nit. 
This could slow down the program instead of improving performance."
  },
  {
    "path": "notes/git-sheet.md",
    "content": "## Git Command Sheet\n\n- Complete documentation can be found here: [Pro Git Book](https://git-scm.com/book/en/v2)\n- Install `lazygit` for nice TUI experience.\n\n## Basics\n\n1. `git clone url [new repo name]`\n2. Created a alias `git config --global alias.glog \"log --graph --pretty=format:'%C(yellow)%h%C(reset) - %C(cyan)%an%C(reset) - %C(blue)%ad%C(reset) - %s' --date=short\"`.\n\n   Use `git glog` now\n\n3. To check the remote servers (eg GitHub): `git remote -v`.\n\n   If you clone a repository, the command automatically adds that remote repository under the name “origin”. If your current branch is set up to track a remote branch, you can use the `git pull` command to automatically fetch and then merge that remote branch into your current branch. By default, the `git clone` command automatically sets up your local master branch to track the remote master branch (or whatever the default branch is called) on the server you cloned from. Running `git pull` generally fetches data from the server you originally cloned from and automatically tries to merge it into the code you’re currently working on.\n\n4. Pushing branch to remote: `git push <remote> <branch>`\n\n5. Inspecting a remote in detail: `git remote show <origin>`\n\n## Branches\n\n1. Create a new branch and checkout to it: `git checkout -b <newbranchname>`\n\n2. Merge a branch to master: `git checkout master && git merge <branch to be merged>`\n\n3. See all branches: `git branch --all -vv`\n\n4. Pushing to new remote branch name: you could run `git push origin serverfix:awesomebranch` to push your local serverfix branch to the awesomebranch branch on the remote project.\n\n5. `git checkout -b <branch> <remote>/<branch>` : Create a local tracking branch for a remote branch\n\n6. 
For pt 5, shortcut is `git checkout <branch>` (If the branch name you’re trying to checkout (a) doesn’t exist and (b) exactly matches a name on only one remote, Git will create a tracking branch for you)\n\n## Stashing\n\n1. `git stash`\n\n2. `git stash list`\n\n3. `git stash apply / pop`\n"
  },
  {
    "path": "notes/http.md",
    "content": "## HTTP\n- Hyper Text Transfer Protocol\n- Communication between web servers and clients\n- HTTP Requests / Response\n- Its stateless (each request is independent)\n\n`GET`\nRetrieves data from the server\n\n`POST`\nSubmit data to the server\n\n`PUT`\nUpdate data already on the server\n\n`DELETE`\nDeletes data from the server\n\n```\n200 - OK\n201 - OK created\n301 - Moved to new URL\n304 - Not modified (Cached version)\n400 - Bad request\n401 - Unauthorized\n404 - Not found\n500 - Internal server error\n```\n\n### HTTP in C++ using libcurl\n\nFor `json` parsing, I have used this amazing library [nlohmann](https://github.com/nlohmann/json).\n\n```cpp\n#include <iostream>\n#include <curl/curl.h>\n#include \"json.hpp\"\n#include <fstream>\n\nusing json = nlohmann::json;\n```\nThis callback is called everytime we receive a response from the server. We pass\ninto it, a pointer to user string and append the response in this string.\n```cpp\n// Helper function to capture server responses into a string\nstatic size_t WriteCallback(void* contents, size_t size, size_t nmemb, std::string* out) {\n    size_t totalSize = size * nmemb;\n    out->append((char*)contents, totalSize);\n    return totalSize;\n}\n```\nSending a `GET` request (it takes no argument).\nWe have stored the response in a JSON object:\n```cpp\njson httpGet(const std::string& url) {\n    CURL* curl;\n    CURLcode res;\n    std::string readBuffer;\n\n    curl = curl_easy_init();\n    if(curl) {\n        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);\n        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);\n\n        res = curl_easy_perform(curl);\n        curl_easy_cleanup(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"GET request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n    }\n    // Do this only when the response type is JSON\n    return 
json::parse(readBuffer);\n}\n```\n\nHeaders are appended like this:\n```cpp\nstruct curl_slist* headers = NULL;\nheaders = curl_slist_append(headers, \"Content-Type: application/json\");\n// headers = curl_slist_append(headers, \"<SOME MORE HEADER>\");\ncurl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);\n```\n\nSending a `POST` request. It takes the data we want to post as a\nJSON argument.\nInside the function, we dump this JSON object into a C-style string.\n```cpp\njson httpPost(const std::string& url, const json& data) {\n    CURL* curl;\n    CURLcode res;\n    std::string readBuffer;\n\n    curl = curl_easy_init();\n    if(curl) {\n        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, CURLOPT_POST, 1L);\n\n        // JSON data\n        std::string jsonData = data.dump();\n        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());\n\n        struct curl_slist* headers = NULL;\n        headers = curl_slist_append(headers, \"Content-Type: application/json\");\n        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);\n\n        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);\n        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);\n\n        res = curl_easy_perform(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"POST request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n\n        long response_code = 0;\n        // query the response code before cleaning up the handle\n        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &response_code);\n        std::cout << \"Got response code: \" << response_code << std::endl;\n\n        curl_slist_free_all(headers);  // free the header list\n        curl_easy_cleanup(curl);\n    }\n\n    return json::parse(readBuffer);\n}\n```\n\nFunction to send PUT request:\n```cpp\njson httpPut(const std::string& url, const json& data) {\n    CURL* curl;\n    CURLcode res;\n    std::string readBuffer;\n\n    curl = curl_easy_init();\n    if(curl) {\n        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, 
CURLOPT_CUSTOMREQUEST, \"PUT\");\n\n        std::string jsonData = data.dump();\n        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());\n\n        struct curl_slist* headers = NULL;\n        headers = curl_slist_append(headers, \"Content-Type: application/json\");\n        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);\n\n        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);\n        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);\n\n        res = curl_easy_perform(curl);\n        curl_easy_cleanup(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"PUT request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n    }\n\n    return json::parse(readBuffer);\n}\n```\n\nFunction to send PATCH request:\n```cpp\njson httpPatch(const std::string& url, const json& data) {\n    CURL* curl;\n    CURLcode res;\n    std::string readBuffer;\n\n    curl = curl_easy_init();\n    if(curl) {\n        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, \"PATCH\");\n\n        std::string jsonData = data.dump();\n        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());\n\n        struct curl_slist* headers = NULL;\n        headers = curl_slist_append(headers, \"Content-Type: application/json\");\n        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);\n\n        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);\n        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);\n\n        res = curl_easy_perform(curl);\n        curl_easy_cleanup(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"PATCH request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n    }\n\n    return json::parse(readBuffer);\n}\n```\nFunction to send DELETE request:\n```cpp\nvoid httpDelete(const std::string& url) {\n    CURL* curl;\n    CURLcode res;\n\n    curl = curl_easy_init();\n    if(curl) {\n        
curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, \"DELETE\");\n\n        res = curl_easy_perform(curl);\n        curl_easy_cleanup(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"DELETE request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n    }\n}\n```\n#### Putting everything together:\n\n```cpp\n#include <iostream>\n#include <curl/curl.h>\n#include \"json.hpp\"\n#include <fstream>\n\nusing json = nlohmann::json;\n\n// Helper function to capture server responses into a string\nstatic size_t WriteCallback(void* contents, size_t size, size_t nmemb, std::string* out) {\n    size_t totalSize = size * nmemb;\n    out->append((char*)contents, totalSize);\n    return totalSize;\n}\n\n// Function to send GET request\njson httpGet(const std::string& url) {\n    CURL* curl;\n    CURLcode res;\n    std::string readBuffer;\n\n    curl = curl_easy_init();\n    if(curl) {\n        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);\n        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);\n\n        res = curl_easy_perform(curl);\n        curl_easy_cleanup(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"GET request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n    }\n\n    return json::parse(readBuffer);\n}\n\n// Function to send POST request\njson httpPost(const std::string& url, const json& data) {\n    CURL* curl;\n    CURLcode res;\n    std::string readBuffer;\n\n    curl = curl_easy_init();\n    if(curl) {\n        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, CURLOPT_POST, 1L);\n\n        // JSON data\n        std::string jsonData = data.dump();\n        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());\n\n        struct curl_slist* headers = NULL;\n        headers = curl_slist_append(headers, \"Content-Type: 
application/json\");\n        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);\n\n        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);\n        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);\n\n        res = curl_easy_perform(curl);\n        curl_easy_cleanup(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"POST request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n    }\n\n    return json::parse(readBuffer);\n}\n\n// Function to send PUT request\njson httpPut(const std::string& url, const json& data) {\n    CURL* curl;\n    CURLcode res;\n    std::string readBuffer;\n\n    curl = curl_easy_init();\n    if(curl) {\n        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, \"PUT\");\n\n        std::string jsonData = data.dump();\n        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());\n\n        struct curl_slist* headers = NULL;\n        headers = curl_slist_append(headers, \"Content-Type: application/json\");\n        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);\n\n        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);\n        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);\n\n        res = curl_easy_perform(curl);\n        curl_easy_cleanup(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"PUT request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n    }\n\n    return json::parse(readBuffer);\n}\n\n// Function to send PATCH request\njson httpPatch(const std::string& url, const json& data) {\n    CURL* curl;\n    CURLcode res;\n    std::string readBuffer;\n\n    curl = curl_easy_init();\n    if(curl) {\n        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, \"PATCH\");\n\n        std::string jsonData = data.dump();\n        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonData.c_str());\n\n        struct 
curl_slist* headers = NULL;\n        headers = curl_slist_append(headers, \"Content-Type: application/json\");\n        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);\n\n        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);\n        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);\n\n        res = curl_easy_perform(curl);\n        curl_easy_cleanup(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"PATCH request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n    }\n\n    return json::parse(readBuffer);\n}\n\n// Function to send DELETE request\nvoid httpDelete(const std::string& url) {\n    CURL* curl;\n    CURLcode res;\n\n    curl = curl_easy_init();\n    if(curl) {\n        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());\n        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, \"DELETE\");\n\n        res = curl_easy_perform(curl);\n        curl_easy_cleanup(curl);\n\n        if(res != CURLE_OK) {\n            std::cerr << \"DELETE request failed: \" << curl_easy_strerror(res) << std::endl;\n        }\n    }\n}\n\nint main() {\n    std::string baseUrl = \"https://jsonplaceholder.typicode.com/users\";\n\n    // 1. GET Request - Retrieve a list of users\n    std::cout << \"GET Request: Fetching users...\" << std::endl;\n    json users = httpGet(baseUrl);\n    std::ofstream file(\"../key.json\");\n    file << users.dump(4);\n    std::cout << users[0][\"address\"].dump(4) << std::endl;  // Pretty print the JSON\n\n    // 2. Modify user data (update name)\n    json user = users[0];\n    user[\"name\"] = \"John Doe Updated\";\n    \n    // 3. POST Request - Send the modified user back to the server\n    std::cout << \"\\nPOST Request: Creating a new user...\" << std::endl;\n    json newUser = httpPost(baseUrl, user);\n    std::cout << newUser.dump(4) << std::endl;\n    \n    // // 4. 
PUT Request - Update the entire user\n    std::string putUrl = baseUrl + \"/1\";  // Assuming user ID is 1\n    std::cout << \"\\nPUT Request: Updating user 1...\" << std::endl;\n    json updatedUser = httpPut(putUrl, user);\n    std::cout << updatedUser.dump(4) << std::endl;\n    \n    // 5. PATCH Request - Update a single field of the user\n    json patchData;\n    patchData[\"email\"] = \"updated.email@example.com\";\n    std::cout << \"\\nPATCH Request: Updating user's email...\" << std::endl;\n    json patchedUser = httpPatch(putUrl, patchData);\n    std::cout << patchedUser.dump(4) << std::endl;\n\n    // 6. DELETE Request - Remove the user\n    std::cout << \"\\nDELETE Request: Deleting user 1...\" << std::endl;\n    httpDelete(putUrl);\n    std::cout << \"User 1 deleted.\" << std::endl;\n\n    return 0;\n}\n```"
  },
  {
    "path": "notes/lambdas.md",
    "content": "## Lambdas in C++\n\n```cpp\nclass Plus {\n    int value;\npublic:\n    Plus(int v): value(v) {}\n    int operator() (int x) const {\n        return x + value;\n    }\n};\n\n// The above thing can be achived as (lambdas reduce boilercode):\n\nauto plus = [value = 1] (int x) {\n    return x + value;\n};\n\n// this plus is basically an object of some class whose name I can't spell.\n// it has a data member `value` (captures)\n``` \n\n`lambda` is object of some anonymous class, but the `this` keyword inside this\nlambda won't work as usual. It wouldn't point to this object. Rather `this`\ninside a lambda refers to the `this` of outer context (wherever this lambda is\nbeing declared)!\n\n![](../assets/lambda.png)\nHere `kitten` makes a copy when it needs.\n\n`cat` makes a copy when initialised.\n\n> By default, lambdas that capture variables by value are const (meaning you \ncannot modify the captured variables inside the lambda). If you want to modify \nthe captured value, you can use the mutable keyword.\n\nSending lambda as a parameter to the function:\n![](../assets/lambda-param.png)\n\nAlternatively we can use `std::function` in the parameter type and\npass a lambda to it.\n\nCopying of lambdas:\n\n![](../assets/copy.png)"
  },
  {
    "path": "notes/latency_numbers.md",
    "content": "## Interesting Latency Numbers\n\nThe notes are taken from this [link](https://gist.github.com/hellerbarde/2843375).\n\n### CPU Frequency (Clock Speed):\n- The CPU frequency refers to the number of cycles the CPU can execute per second. It’s typically measured in Hertz (Hz), with modern CPUs operating in the range of Gigahertz (GHz) (1 GHz = 1 billion cycles per second).\n- Higher frequency generally means more cycles per second, but that doesn’t directly translate to faster execution if other bottlenecks, like memory access or instruction complexity, exist.\n- Example: A CPU with a frequency of 3 GHz can execute up to 3 billion cycles per second.\n\n`On Average, 1 Clock Cycle = 0.3 ns`\n\n```cpp\nL1 cache reference ......................... 0.5 ns\nBranch mispredict ............................ 5 ns\nL2 cache reference ........................... 7 ns\nMutex lock/unlock ........................... 25 ns\nMain memory reference ...................... 100 ns             \nCompress 1K bytes with Zippy ............. 3,000 ns  =   3 µs\nSend 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs\nSSD random read ........................ 150,000 ns  = 150 µs\nRead 1 MB sequentially from memory ..... 250,000 ns  = 250 µs\nRound trip within same datacenter ...... 500,000 ns  = 0.5 ms\nRead 1 MB sequentially from SSD* ..... 1,000,000 ns  =   1 ms\nDisk seek ........................... 10,000,000 ns  =  10 ms\nRead 1 MB sequentially from disk .... 20,000,000 ns  =  20 ms\nSend packet CA->Netherlands->CA .... 
150,000,000 ns  = 150 ms\n```\n\n### Let's multiply all these durations by a billion:\n\nMagnitudes:\n\n#### Minute:\n    L1 cache reference                  0.5 s         Blink of an eye (0.5 s)\n    Branch mispredict                   5 s           Sip of water\n    L2 cache reference                  7 s           Long yawn\n    Mutex lock/unlock                   25 s          Making a coffee\n\n#### Hour:\n    Main memory reference               100 s         Brushing your teeth\n    Compress 1K bytes with Zippy        50 min        One episode of a TV show (including ad breaks)\n\n#### Day:\n    Send 2K bytes over 1 Gbps network   5.5 hr        From lunch to end of work day\n\n#### Week:\n    SSD random read                     1.7 days      A normal weekend\n    Read 1 MB sequentially from memory  2.9 days      A long weekend\n    Round trip within same datacenter   5.8 days      A medium vacation\n    Read 1 MB sequentially from SSD    11.6 days      Waiting for almost 2 weeks for a delivery\n\n#### Year:\n    Disk seek                           16.5 weeks    A semester in university\n    Read 1 MB sequentially from disk    7.8 months    Almost producing a new human being\n    The above 2 together                1 year\n\n#### Decade:\n    Send packet CA->Netherlands->CA     4.8 years     CS5 degree at IITD"
  },
  {
    "path": "notes/linux_tcp.md",
    "content": "# LINUX Networking\n\n## High Level Socket API\n\n1. [CS361 Video](https://youtu.be/XXfdzwEsxFk?si=VGb5lymk8Fkaglqk)\n2. [Video on Protocol Stack](https://www.youtube.com/watch?v=3b_TAYtzuho)\n\n`Server`: This is the component which listens for connections.\n\n`Client`: It sends a connect request to the server. This is the component, which\ninitiates a connection.\n\n```mermaid\nsequenceDiagram\n    participant Client\n    participant Server\n    participant SocketServer as Server Socket\n    participant SocketClient as Client Socket\n\n    Server->>SocketServer: socket()\n    Server->>SocketServer: bind()\n    Server->>SocketServer: listen()\n    Server-->>Client: Waiting for connection\n    \n    Client->>SocketClient: socket()\n    Client->>SocketServer: connect()\n\n    Server->>SocketServer: accept()\n    Server-->>Client: Connection accepted\n    \n    Client->>SocketServer: send(\"Hello, Server!\")\n    SocketServer->>Server: recv()\n    Server-->>Client: send(\"Hello, Client!\")\n    Client->>SocketClient: recv()\n\n    Client->>SocketClient: close()\n    Server->>SocketServer: close()\n```\n\n### Client Side\n\n`socket`: It just creates a file descriptor, but doesn't do anything more!\n\n`connect`: Takes a file descriptor and server's address & sends a connection\nrequest to the server.\n\n`send & recv`: Given a connected file descriptor, submit bytes to the OS for\ndelivery and ask OS to deliver the bytes. 
(Similar to read/write.)\n- These functions don't do the actual transfer of bytes; they just ask the\nOS to do it.\n\n`close`: Given a connected file descriptor, it tells the OS that this connection\ncan be terminated.\n- Kernel continues sending the buffered bytes.\n- At the end of the buffered bytes, it sends a special 'EOF' message (a TCP FIN), which tells the receiver:\nI am done sending and will close the connection, you can close too!\n\n### Server Side\n\n`bind`: Given a file descriptor, tells the kernel to associate it with the \ngiven IP and port (making a reservation at this address.)\n\n`listen`: Given a file descriptor that has been bound to an IP/port, it tells\nthe OS that it wishes to start accepting connections.\n\n`accept`: Given a fd which is listening, it creates a `new fd` that can be used to\ncommunicate with an individual client. This call is blocking by default, until a \nclient shows up.\n- Note that here a new fd has been created to talk to this client. The old fd is\nnot for sending/receiving messages from the client. It is just used for accepting\nnew connections.\n\n\n*Keeping track of the return values of send/recv is crucial. The returned\nbytes may be less than what you asked for. 
So keep trying until you have received\nwhat you wanted.\nYou are only submitting the bytes to the operating system, not actually sending\nthem to the server.*\n\n\n## How The Kernel Handles A TCP Connection\n\nPackets and Syscall Analysis:  [Video](https://www.youtube.com/watch?v=ck4WvYM9V4c)\n\n### TCP Three-Way Handshake Overview\n\n```mermaid\nsequenceDiagram\n    participant cc as Client[192.168.1.105:1234]\n    participant ss as Server[192.168.1.102:5000]\n\n    cc->>ss: SYN (SEQ = X)\n    ss-->>cc: SYN-ACK (SEQ = Y, ACK = X + 1)\n    cc->>ss: ACK (ACK = Y + 1)\n    Note over cc: 3 Way Handshaking done\n    ss->>cc: Hello World \n```\n### Using `ncat`\n\n- Creating a server: `ncat -l 1234`\n\n    - `-l`: Tells Ncat to listen for incoming connections.\n    - `1234`: The port where Ncat is listening for connections.\n- Connect to a server: `ncat <ip> <port>`\n\n- Create a UDP server: `ncat -l -u 1234`\n- Connect to a UDP server: `ncat -u <ip> <port>`\n\n### Server Side Analysis\n\nThe following events happen while running `ncat` on the server side:\n\n1. `socket` system call: It requests an IPv4 socket to be created. \nOn success, it returns a file descriptor for the newly created \nsocket (let fd = 3).\n\n2. `bind` system call: It takes a file descriptor (fd 3) (returned by \nsocket), and binds it to an IP address and port. A socket itself is \njust a data structure with a buffer. We need to bind it. \n\n3. `listen` system call: Listens on this socket for any `TCP SYN`\nrequest.\n\n4. Then we make a `select` system call. This call waits on a file descriptor \n(fd 3)\nuntil it is available for read or write. This is generally used in IO multiplexing.\nIn our case, we wait on the socket's fd. Here we are waiting for someone to send \nconnection requests, since we are listening for connections. The `ncat` process\nis wait-blocked at this moment. It is unblocked when some desirable event occurs.\n\n5. 
When it is unblocked, it means a `syn` request has arrived, and the kernel has\ncompleted the 3-way handshake with the client. \nThus the `accept` system call is run on fd 3. The accept system call on success\nreturns a new data socket for that connection. A `new fd (let fd 4)` is now\ncreated to talk to this client.\n \n6. The `close` system call is called on fd 3 (this is specific to ncat, since it only\naccepts one client connection).   \n\n7. We then make a `select` system call on fd = 0, 4. `fd 0` is for stdin, in case we \nwant to send some data to the client, and `fd 4` is the data socket, which will tell\nus if we have something to read.\n\nSuppose we receive a packet over wifi; an interrupt will be raised and the \ncorresponding interrupt handler will be called. \nInside the kernel thread running the interrupt handler, we consume the packet\nfrom the buffer and create an `sk_buff` struct (socket buffer).\nAfter consuming, we copy the packet into the socket buffer of `ncat`. At \nthis point the `select` syscall (waiting on this fd) of the user process returns. \n\n### Handshaking from the Process' POV\nThe protocol stack of Linux does the handshaking without involving the application.\n\nThe kernel unblocks the process waiting on the `select` syscall (while listening\nfor new connections) only when the handshake is completed. The user process is not\ninvolved in the handshake. \nIt is only notified after the last `ack` of the 3-way handshake arrives. It can then accept\nthe new connection and start to `send/recv` data.\n\n![](../assets/syn.png)\n*Note that the size of the accept queue is bounded. If we don't accept the ready connections, new connections would be dropped* \n\n### ACKs\nEven the `acks/retransmissions` are handled by the kernel's protocol stack. \nThe application is not notified about this. The TCP header contains flags, which\ntell whether it carries data or an ack. Based on the flags, the protocol stack makes the\nappropriate decisions. 
\n\nGenerally the user process is only unblocked (from the `select` syscall) when the\nkernel copies some useful data into the fd's read buffer, which the process can\nconsume. \n\n### Some benchmarks\n\n*take these with a pinch of salt*\n \n- From receiving the interrupt for SYN to accepting the connection: 1.7 ms \n(*It actually depends on the distance between client and server, as 2 more\nexchanges are involved in between. This data is from localhost I think*) \n\n- Processing a packet interrupt: 200 microseconds\n"
  },
  {
    "path": "notes/memory_reordering.md",
    "content": "# Memory Reordering\n\n### Some Background\nModern CPUs employ lots of techniques to counteract the latency cost of going \nto main memory.  These days CPUs can process hundreds of instructions in the \ntime it takes to read or write data to the DRAM memory banks.\n\nMemory access is generally the bottlenack.\n\nHardware caches are the most common tools used to hide this latency.\nUnfortunately CPUs are now so fast that even these caches cannot keep up at \ntimes.  So to further hide this latency a number of less well known buffers \nare used. \n\n#### Lets understand about `store buffers` \n\nWhen a CPU executes a store operation it will try to write the data to the `L1 \ncache` nearest to the CPU. If a cache miss occurs at this stage the CPU goes \nout to the next layer of cache. At this point on an Intel, and many other, \nCPUs a technique known as `write combining` comes into play. \n\nWhile the request for ownership of the L2 cache line is outstanding the data to \nbe stored is written to one of a number of cache line sized buffers on the \nprocessor itself, known as `store buffers` on Intel CPUs.  These on chip \nbuffers allow the CPU to continue processing instructions while the cache \nsub-system gets ready to receive and process the data.  The biggest advantage \ncomes when the data is not present in any of the other cache layers.\n\nThese buffers become very interesting when subsequent writes happen to require \nthe same cache line.  The subsequent writes can be combined into the buffer \nbefore it is committed down the cache hierarchy.\n\n> What happens if the program wants to read some of the data that has been \nwritten to a buffer?  Well our hardware friends have thought of that and they \nwill snoop the buffers before they read the caches.\n\n![](../assets/store_buffer.png)\n\n*Loads and stores to the caches and main memory are buffered and re-ordered \nusing the load, store, and write-combining buffers.  
These buffers are \nassociative queues that allow fast lookup.  This lookup is necessary when a \nlater load needs to read the value of a previous store that has not yet reached \nthe cache.*\n\n### Fencing\n\nWhen a program is executed it does not matter if its instructions are \nre-ordered provided the same end result is achieved. For example, within a loop \nit does not matter when the loop counter is updated if no operation within the \nloop uses it.  \nThe compiler and CPU are free to re-order the instructions to best utilise the \nCPU provided it is updated by the time the next iteration is about to \ncommence.  \nAlso over the execution of a loop this variable may be stored in a register and \nnever pushed out to cache or main memory, thus it is never visible to another \nCPU.\n\n> We use the `volatile` keyword to tell the compiler not to keep this variable\nin a register, but to push every change down to memory, so that other CPUs can \nalways see the latest value.\n\nProvided “program order” is preserved, the CPU and compiler are free to do \nwhatever they see fit to improve performance.\n\n### Hardware Memory Ordering in x86 Processors\n\nThe term memory ordering refers to the order in which the processor issues \nreads (loads) and writes (stores) through the system bus to system memory.\n\nFor example, the Intel386 processor enforces program ordering \n(generally referred to as strong ordering), where reads and writes are issued \non the system bus in the order they occur in the instruction stream.\n\nBut the hardware may reorder the instructions for some optimizations. 
Sometimes\nreads could go ahead of buffered writes.\n\n<mark> Reads may be reordered with older writes to different memory locations, \nbut not with older writes to the same memory location.</mark>\n\nThat is, if we write to location 1 and read from location 2, then the read from\nlocation 2 could become globally visible before the write to location 1.\n\n```\nlet x = y = 0\n\nprocessor 0                     processor 1\nx = 1                           y = 1\nprint y                         print x\n\noutput = (0, 0) is possible\n```\n\n<mark> Stores are usually buffered before being sent to memory (L1 cache). Loads \nare prioritised over stores, since they are on the critical path: instructions\nwait for their data to be loaded before they can run. \nHowever, if a store followed by a load target the same memory location, then\nprogram order is definitely followed.</mark>\n\n### Software Reordering\nThe compiler can also sometimes reorder instructions in our program for optimizations.\nFor example, stores to two different memory locations can be reordered by the\ncompiler.\n\n### Avoiding memory reordering\nIn a multi-threaded environment, techniques need to be employed for making \nprogram results visible in a timely manner.\nThe techniques for making memory visible from a processor core are known as \nmemory barriers or fences.\n\nMemory barriers provide two properties.  Firstly, they preserve externally \nvisible program order by ensuring all instructions on either side of the barrier \nappear in the correct program order if observed from another CPU and, secondly, \nthey make the memory visible by ensuring the data is propagated to the cache \nsub-system.\n\n#### Asking the compiler not to reorder\n`asm volatile(\"\" : : : \"memory\");` is a fake instruction that asks the compiler not to\nreorder any memory instruction around this barrier. It is a hint to the compiler that \nthe whole of memory may be touched by this instruction, hence it must not do any\nreordering across it. 
\n\n*In this case the hardware can still reorder instructions, even though we asked\nour compiler not to reorder! Hence we have to use hardware barriers.*\n\n```cpp\n#include <emmintrin.h>\nvoid _mm_mfence (void) // Use this intrinsic as a barrier to prevent re-ordering in the hardware!\n```\n\nIt performs a serializing operation on all `load-from-memory` and `store-to-memory` \ninstructions that were issued prior to this instruction. \nIt guarantees that every memory access that precedes the fence in program order \nis globally visible before any memory instruction that \nfollows the fence in program order.\n\n*It drains the `store buffer` before any following `loads` can go to memory.*\n\n### Performance Impact of Memory Barriers\n\nMemory barriers prevent a CPU from using many of the techniques it employs to hide \nmemory latency, therefore they have a significant performance cost which must be \nconsidered.  To achieve maximum performance it is best to model the problem so \nthe processor can do units of work, then have all the necessary memory barriers \noccur on the boundaries of these work units.\n\n## C++ Memory Model\n\n- [Memory Model Article](https://dev.to/kprotty/understanding-atomics-and-memory-ordering-2mom)\n- [Post on Stack Overflow](https://stackoverflow.com/questions/12346487/what-do-each-memory-order-mean)\n\n\n"
  },
  {
    "path": "notes/metaprogramming.md",
    "content": "## Template Metaprogramming\n\n[Link to CPPCON Talk](https://youtu.be/Am2is2QCvxY?si=QrulPFBy7Dg5poQ1)\n\n- Do work at compile time that otherwise would be done at Runtime.\n- In C++, the template instantiation happens at the compile time, hence we make\nuse of it.\n- For example if we call `f(x)` the compiler will manufacture(instantiate) the\nfunction for us (assume f is a template here).\n- It is not free, as the heavy work needs to be done at compile time which \nleads to increased compile time.\n- Can't rely on runtime primitives like virtual functions, dynamic dispatch.\nKeep things constant while metaprogramming.\n\n1. **Absolute Value Metafunction**\n\n```cpp\ntemplate<int N>\nstruct ABS {\n    static constexpr int value = (N < 0) ? -N : N;\n};\n```\n\n- A metafunction call: The arguments are passed through the template's \narguments.\n- `Call` syntax is a request for the template's static value.\n- `const int ans = ABS<-142>::value;`\n\n2. **Compile Time GCD**\n\nHere we use compile time recursion. For base cases, we have to do pattern \nmatching.\n\n```cpp\ntemplate<int N, int M>\nstruct gcd {\n    static constexpr int value = gcd<M, N % M>::value;\n};\n\ntemplate<int N>\nstruct gcd<N, 0> {\n    static_assert(N != 0);\n    static constexpr int value = N;\n};\n```\n\n3. **Metafunction can take a type as Parameter/Argument**\nWe can make a metafunction similar to `sizeof`\n\n```cpp\n// primary template handles scalar (non-array) types as base case:\ntemplate<class T> \nstruct rank {\n    static constexpr size_t value = 0u;\n};\n\n// partial specialization recognizes any array type:\ntemplate<class U, size_t N>\nstruct rank<U[N]> {\n    static constexpr size_t value = 1 + rank<U>::value;\n};\n\nconst int N = rank<int[2][1][4]>::value; // gives 3 at compile time\n```\n\n*Here we didn't recurse on the primary template, but did on the specialisation.*\n\n4. 
**Type**\n```cpp\n#include <iostream>\n#include <type_traits>\n\n// A simple type trait to remove constness\ntemplate <typename T>\nstruct RemoveConst {\n    using type = T;  // Default case: T is unchanged\n};\n\ntemplate <typename T>\nstruct RemoveConst<const T> {\n    using type = T;  // Specialized case: remove const qualifier\n};\n\nint main() {\n    // Using the RemoveConst trait\n    RemoveConst<const int>::type x = 42;  // 'RemoveConst<const int>::type' is equivalent to 'int'\n    std::cout << \"x = \" << x << std::endl;\n    \n    return 0;\n}\n```\n\n5. **Conditional Types during compile time**\n```cpp\n#include <iostream>\n\ntemplate<typename T>\nstruct type_is {\n\tusing type = T;\n};\n\n// primary template assumes the bool value is true:\ntemplate<bool, typename T, typename Q>\nstruct conditional_type : type_is<T> {};\n\n// partial specialization recognizes a false value:\ntemplate<typename T, typename Q>\nstruct conditional_type<false, T, Q> : type_is<Q> {};\n\nint main() {\n    constexpr bool q = false;\n    conditional_type<q, int, double>::type s;\n    std::cout << sizeof(s) << '\\n';\n    return 0;\n}\n```\n`std::false_type` and `std::true_type` are similar helpers: each carries a static `value` member equal to `false` or `true`.\n\n![](../assets/meta1.png)\n\nHow to deal with parameter packs:\n\n![](../assets/meta2.png)\n\n"
  },
  {
    "path": "notes/move_semantics.md",
    "content": "## Move Semantics\n\nMove Semantics by Klaus Iglberger (CppCon 2019): [Link](https://www.youtube.com/watch?v=St0MNEU5b0o)\n\n```cpp\nstd::vector<int> v1 {1, 2, 3, 4, 5};\n// in stack we will just store the pointers (to the start and the end)\n// the actual elements will be stored in heap\nstd::vector<int> v2 {};\nv2 = v1;\n// when we do v2 = v1, we copy the contents, that is new memory is assigned in\n// heap and content of old vectors are copied\n\nv2 = std::move(v1);\n// in this case we are transferring the ownership. that is, now the pointer of\n// v2 will point to the heap memory orignally pointed by v1.\n// and pointers of v1 will be set to 0.\n```\n\n### L value vs R value\n\n1. **L-Value (Left Value)**:\nAn **L-value** is an expression that **refers to a memory location** and \ncan persist beyond a single expression. It typically appears on \nthe **left-hand side** of an assignment, but it can also be used on the \nright-hand side.\n\n- **L-values** have an identifiable location in memory, meaning you can take \ntheir address using the `&` operator.\n- Examples of **L-values** include variables, dereferenced pointers, or array \nelements.\n- **Modifiable L-values** are L-values that can be changed (i.e., non-const), \nwhile **non-modifiable L-values** are constants.\n\n#### Examples of L-values:\n```cpp\nint x = 5;    // `x` is an L-value because it refers to a memory location\nx = 10;       // `x` can appear on the left-hand side of an assignment\n\nint* p = &x;  // You can take the address of an L-value\n*p = 20;      // Dereferenced pointer `*p` is an L-value\n```\n\nIn the code above:\n- `x` is an L-value because it refers to a memory location that persists across \nexpressions.\n- `*p` is also an L-value because it refers to the value stored at the \naddress in `p`.\n\n2. **R-Value (Right Value)**:\nAn **R-value** is an expression that does **not have a persistent memory location**. 
\nIt usually represents **temporary values** that only exist during the evaluation \nof an expression. R-values typically appear on the **right-hand side** of an \nassignment and are not addressable (you can't take the address of an R-value).\n\n- R-values are usually **temporary objects**, **literals**, or **expressions** \nlike `2 + 3`.\n- You can't assign to an R-value because they do not refer to a memory location \nthat can be modified.\n\n#### Examples of R-values:\n```cpp\nint x = 5 + 10;  // `5 + 10` is an R-value (a temporary value)\nint y = 42;      // `42` is an R-value (literal)\n\nx = y + 1;       // `y + 1` is an R-value (result of the expression)\n```\n\nIn the code above:\n- `5 + 10` is an R-value because it's a temporary result and cannot be assigned to.\n- `42` is a literal R-value.\n\n\n#### L-value References:\n- Traditional references (as introduced in earlier versions of C++) are **L-value references**.\n- They can only bind to L-values.\n\nExample:\n```cpp\nint x = 10;\nint& ref = x;  // L-value reference to `x`\nref = 20;      // Modifies `x`\n```\n\n#### R-value References:\n- **R-value references** (introduced in C++11) are used to bind to R-values, \nallowing you to modify them.\n- They are denoted by `&&`.\n- Commonly used in **move semantics** to avoid unnecessary copies.\n\nExample:\n```cpp\nint&& rref = 5;  // R-value reference to the temporary value `5`\nrref = 10;       // Modifies the R-value\n```\n\nHere, `rref` is an R-value reference that allows us to bind to a temporary \nobject (`5`) and even modify it.\n\n- **Move semantics** make use of R-value references to \"move\" resources from \none object to another, avoiding expensive deep copies. 
This is especially useful\nwhen dealing with temporary objects (R-values).\n  \nExample:\n```cpp\nstd::string s1 = \"Hello\";\nstd::string s2 = std::move(s1);  // Moves the contents of `s1` to `s2`\n// after ownership transfer s1 will now be a valid but undefined state!\n```\n\nIn the code above:\n- `std::move(s1)` is an R-value reference that allows the move constructor of \n`std::string` to transfer ownership of the data from `s1` to `s2`.\n- <mark>`std::move` unconditionally casts its input into an rvalue reference. \nIt doesnt move anything!\n- <mark>`std::move(s)` when s is const, leads to copy not move!\n\n```cpp\nconst std::string s1 = \"Shivam Verma\";\nstd::string s2 = std::move(s1);  // this is COPY not MOVE\nstd::cout << s1 << ' ' << s2 << '\\n';\n```\n\n\n### Operators\n1. Copy Assignment Operator\n\n    `vector& operator=(const vector& rhs);` It takes an lvalue\n\n2. Move Assignment Operator\n\n    `vector& operator=(vector&& rhs);` It takes an rvalue\n\n```cpp\nclass Widget {\n    private:\n        int i {0};\n        std::string s{};\n        int* pi {nullptr};\n    \n    public:\n        // Move constructor: Goal is to transfer content of w into this\n        // Leave w in a valid but undefined state\n        Widget (Widget&& w) : i (w.i), \n                              s (std::move(w.s)), \n                              pi (w.pi) {\n            w.pi = nullptr;\n\n            // we could also do: i (std::move(w.i)), \n            //                   s (std::move(w.s)), \n            //                   pi (std::move(w.pi))\n\n        }\n\n        // Move assignment operator\n        Widget& operator=(Widget&& w) {\n            i = std::move(w.i);\n            // s = w.s // don't do this, it copies not move\n            s = std::move(w.s);\n            delete pi; // need to clear exisiting resources first!\n            pi = std::move(w.pi);\n\n            w.pi = nullptr; // reset content of w\n\n            return *this;\n        
}\n}\n```\n\n\n## Small String Optimization (SSO)\n\n```mermaid\ngraph TD;\n    A[Small String] -->|Stored in| B[Stack Buffer];\n    C[Larger String] -->|Stored in| D[Heap Allocation];\n    B --> E{SSO};\n    D --> F{Heap Allocator};\n```\n\n- For small strings (e.g., \"short str\"), the data is stored directly in the \nobject on the stack.\n- For larger strings, the data is dynamically allocated on the heap, and the \nobject holds a pointer to that data.\n\n\n## Universal References (Forwarding Reference)\n\n```cpp\ntemplate<typename T>\nvoid f(T&& x); // Forwarding Reference\n\nauto&& var2 = var1; // Forwarding Reference\n```\nThey represent: \n- an `lvalue` reference if they are initialised by an lvalue.\n- an `rvalue` reference if they are initialised by an rvalue.\n\n```cpp\ntemplate<typename T>\nvoid foo(T&& ) {\n    print(\"foo(T&&)\");\n}\n\nint main () {\n    Widget w{};\n    foo(w);        // prints \"foo(T&&)\" \n    foo(Widget{}); // also prints \"foo(T&&)\"\n\n    // w was an lvalue, Widget{} was an rvalue: T&& bound to both\n}\n```\n- <mark>`std::forward` conditionally casts its input into an rvalue reference. \nIt doesn't forward anything!</mark>\n\n    - If the given value is an lvalue, it casts to an lvalue reference.\n    - If the given value is an rvalue, only then does it cast to an rvalue reference.\n\n`rvalues` can bind to an lvalue reference to const, but not to a non-const lvalue reference.\n```cpp\nvoid f(Widget& );           // 1\nvoid f(const Widget& );     // 2\ntemplate<typename T>        // 3\nvoid f(T&& );\n\nint main() {\n    f(Widget{});            // an rvalue: can bind to 3 and 2, but not 1\n}\n```  \n\n![](../assets/binding.png)"
  },
  {
    "path": "notes/os_booting.md",
    "content": "# Linux Boot Process\n\nBooting is the process of loading an OS from disk and starting it.\n\n## The OS Boot Process\n\n1. **Hit the power button**\n\n-   Triggers a `power good` signal.\n    -   Electric pulse sent to reset pin of the CPU (Power On Reset).\n    -   CPU is in `Reset` mode, i.e., it is not executing any instructions.\n-   All devices get power and initialize themselves.\n-   Every register is set to zero, except `Code Segment (CS)` and \n`Instruction Pointer (IP)`, which are set to `0xf000` and `0xfff0` respectively.\n    -   Thus, the `physical address = (CS << 4) + IP = (0xf000 << 4) + 0xfff0 = 0xf0000 + 0xfff0 = 0xffff0` (We are operating in 16-bit mode right now).\n        -   This physical address is the place where the CPU starts executing instructions.\n-   The CPU is activated in Real Mode and it starts executing from `0xffff0` (or `ffff0h`), which is a memory location in the `BIOS chip` and not in the RAM.\n    - The BIOS chip (Basic Input/Output System) is a small, `non-volatile` memory chip located on the motherboard of a computer.\n    -   Real Mode\n        -   Only 1 MB of RAM addressability in the range `0x0` to `0x100000`.\n            -   This is because there are 20 physical address bus lines available. (2^20 = 1048576 = 1 MB)\n        -   `16 bit addressing:` Available registers (Eg: `AX`) are of size 16 bits, so two registers are combined to give the physical address.\n            -   Logical address (LBA) = segment:offset\n            -   `Physical address = (segment << 4) + offset`\n                -   The segments are segments/parts of the addressable 1 MB of RAM and the offsets are offsets into that segment.\n                -   Eg: If the segment is `0xf000` and the offset is `0xfff0`, then the `physical address = (CS << 4) + IP = (0xf000 << 4) + 0xfff0 = 0xf0000 + 0xfff0 = 0xffff0`\n2.  **Basic Input/Output System (BIOS) takes over.**\n-   Placed in Flash/EPROM Non-Volatile Memory. 
Its job is to load the bootloader.\n-   In a multi-processor environment, one processor is the `Boot Processor (BSP)` which executes all instructions and the others are `Application Processors (APs)`.\n-   Conducts a Power-On Self-Test (`POST`).\n    -   Performs system inventory.\n    -   Checks and registers all devices connected.\n-   Finds the `Master Boot Record (MBR)` in the first sector of a device (the hard disk, SSD, USB, etc.) that is usually 512 bytes in size, loads it into RAM at position `0x7c00`, jumps to that location and starts executing.\n    -   MBR is a 512 byte sector that's logically split into three sections.\n        -   The first 446 bytes are reserved for a program, which is usually a Bootloader. (Eg: [GRUB](https://www.gnu.org/software/grub))\n        -   The next 64 bytes (16x4 bytes) are for a partition table with four partitions in it.\n        -   The last two bytes are for the Boot Signature bytes `0x55` (or `55h`) and `0xAA` (or `AAh`) in order, which identify that a particular sector is the MBR.\n            -   If this signature is not found in the first sector, then the next device is searched.\n-   The BIOS loads the Bootloader into memory.\n    -   This might be the first stage of the Bootloader, which loads the second stage of the Bootloader into memory, as 446 bytes are not sufficient to store all the complex logic required to load an OS.\n    -   The Bootloader might give an option to load a particular Operating System.\n    -   Sets up the `GDT/IVT` for the Operating System.\n    -   Switches from `Real Mode` to `Protected Mode`.\n        -   Memory addressability goes from 1 MB to the entire range of available RAM.\n3.   **The Bootloader starts executing and checks the partition table for an active/bootable partition.**\n\n-   On finding the bootable partition, the Bootloader loads the first sector of that partition (called the Boot Record) from the hard disk into RAM.\n4.   
**The Boot Record loads the operating system into memory.**\n5.   **Timers, devices, hard disks, etc. are initialized by the Operating System in the Kernel Space.**\n6.   **In Linux, the `init` process is the first process in User Space that initializes the OS processes, daemons and displays login prompt.**\n\n```mermaid\ngraph TD\n    A[Power Button Pressed] --> B[Power Good Signal]\n    B --> C[CPU Starts in Reset Mode]\n    C --> D[BIOS Loads from 0xFFFF0]\n    D --> E[Power-On Self-Test POST]\n    E --> F[BIOS Searches for MBR]\n    F --> G[MBR Found on Bootable Device]\n    G --> H[BIOS Loads Bootloader at 0x7C00]\n    H --> I[Bootloader Starts Executing]\n    I --> J[Bootloader Checks Partition Table]\n    J --> K[Bootloader Finds Active Partition]\n    K --> L[Bootloader Loads OS Kernel]\n    L --> M[Switch to Protected Mode]\n    M --> N[Kernel Starts Initializing Hardware]\n    N --> O[Kernel Mounts Root Filesystem]\n    O --> P[Kernel Starts init/systemd Process]\n    P --> Q[init/systemd Initializes System Services]\n    Q --> R[User Login Prompt Displayed]\n```\n## Resources\n\n-   [PC Booting: How PC Boots](https://www.youtube.com/watch?v=ZplB2v2eMas)\n-   [Booting an Operating System](https://www.youtube.com/watch?v=7D4qiFIosWk)\n\n"
  },
  {
    "path": "notes/packet_handling.md",
    "content": "## The Network Packet's Diary\n\nPDF [Notes](../assets/From%20NIC%20to%20Application.pdf)\n\nA packet consists of:\n\n```\n| Ethernet Header | IP Header | TCP Header | Data |\n```\nNetwork Interface Controller (NIC):\n- Receives the packet\n- Compares the MAC destination address (the address to be compared against is\nprogrammed by the OS)\n- Verifies the Ethernet (Frame) Checksum FCS\n- Stores the packet to buffer programmed by the driver using DMA\n- Triggers an interrupt\n\nInterrupt:\n- Top half processing:\n    - Acknowledge the interrupt\n    - Schedule the 'Bottom Half Processing'\n\n- Bottom half processing:\n    - It identifies the memory where the packet is stored.\n    - Allocates `sk_buf` (it is a struct which contains various pointers, like \n    pointer to different headers, pointer to data and other metadata).\n\n    ```mermaid\n    graph TD;\n    A[sk_buff] --> B[Memory Buffer];\n    B --> C[Head Pointer];\n    B --> D[Data Pointer];\n    B --> E[Tail Pointer];\n    B --> F[End Pointer];\n    \n    A --> G[Packet Headers];\n    G --> H[Ethernet Header]; \n    G --> I[IP Header];\n    G --> J[TCP/UDP Header];\n\n    A --> K[Metadata];\n    K --> L[Reference Counters];\n    K --> M[Checksum Info];\n    K --> N[Flags];\n    \n    B --> O[Packet Data];\n    ```\n    - Passes the `sk_buf` to the protocol stack.\n\nThe `sk_buf` traverses various levels, where some checksums are verified, headers\nare removed and other metadata processing is done.\n\nEventually, it reaches TCP stack:\n- TCP checksum verified\n- Handles the TCP state machine\n- Enqueues the data to socket's recevive queue\n- Signals the fd that the data is available (for processes sleeping on `select`)\n\nOn Socket read (by user process):\n- Dequeue data from socket's receive queue\n- Copy to user buffer\n- Release the `sk_buf`\n"
  },
  {
    "path": "notes/padding_packing.md",
    "content": "## The Lost Art of Structure Packing & Unaligned Memory Accesses\n\n[Padding and Packing](http://www.catb.org/esr/structure-packing/)\n\n[Memory Alignment](https://docs.kernel.org/core-api/unaligned-memory-access.html)\n\nUnaligned memory accesses occur when you try to read `N` bytes of data starting \nfrom an address that is not evenly divisible by `N` (i.e. `addr % N != 0`). \n\nFor example, reading 4 bytes of data from address `0x10004` is fine, but reading \n4 bytes of data from address `0x10005` would be an unaligned memory access.\n\n`Natural Alignment`: When accessing `N bytes` of memory, the base memory address \nmust be evenly divisible by `N`, i.e. `addr % N == 0`.\n\n*When writing code, assume the target architecture has natural alignment requirements.*\n\n### Why unaligned access is bad\n\nThe effects of performing an unaligned memory access vary from architecture \nto architecture. A summary of the common scenarios is presented below:\n\n- Some architectures are able to perform unaligned memory accesses transparently, \nbut there is usually a significant performance cost.\n\n- Some architectures raise processor exceptions when unaligned accesses happen. 
\nThe exception handler is able to correct the unaligned access, at significant \ncost to performance.\n\n- Some architectures raise processor exceptions when unaligned accesses happen, \nbut the exceptions do not contain enough information for the unaligned access to \nbe corrected.\n\n- Some architectures are not capable of unaligned memory access, but will \nsilently perform a different memory access to the one that was requested, \nresulting in a subtle code bug that is hard to detect!\n\nIf our code causes unaligned memory accesses to happen, our code will not work \ncorrectly on certain platforms and will cause performance problems on others.\n\n### How the Compiler helps\n\nThe way our compiler lays out basic datatypes in memory is constrained in order \nto make memory accesses faster.\n\n*Each type except `char` has an alignment requirement: `chars` can start on any \nbyte address, but `2-byte shorts` must start on an even address, `4-byte ints` \nor `floats` must start on an address divisible by 4, and `8-byte longs` or \n`doubles` must start on an address divisible by 8. \nSigned or unsigned makes no difference.*\n\nSelf-alignment makes access faster because it facilitates generating \nsingle-instruction fetches and puts of the typed data. Without alignment \nconstraints, on the other hand, the code might end up having to do two or more \naccesses spanning machine-word boundaries. \n\nCharacters are a special case: they’re equally expensive from anywhere they \nlive inside a single machine word. 
That’s why they don’t have a preferred alignment.\n\n```cpp\n// consider these variable declarations\nchar *p;\nchar c;\nint x;\n\n// actual layout in memory\nchar *p;      /* 4 or 8 bytes */\nchar c;       /* 1 byte */\nchar pad[3];  /* 3 bytes */\nint x;        /* 4 bytes */\n```\n\n```cpp\n// consider these variable declarations\nchar *p;\nchar c;\nlong x;\n\n// actual layout in memory\nchar *p;     /* 8 bytes */\nchar c;      /* 1 byte */\nchar pad[7]; /* 7 bytes */\nlong x;      /* 8 bytes */\n```\n\n![](../assets/pad.png)\n\n#### Structure alignment and padding\n\n```cpp\nstruct foo1 {\n    char *p;\n    char c;\n    long x;\n};\n\n// Assuming a 64-bit machine, any instance of struct foo1 will have 8-byte alignment.\n// The padded layout looks like this:\n\nstruct foo1 {\n    char *p;     /* 8 bytes */\n    char c;      /* 1 byte */\n    char pad[7]; /* 7 bytes */\n    long x;      /* 8 bytes */\n};\n\n```\n\n```cpp\nstruct foo5 {\n    char c;\n    struct foo5_inner {\n        char *p;\n        short x;\n    } inner;\n};\n\n// The char *p member in the inner struct forces the outer struct to be pointer-aligned as well as the inner. \n\nstruct foo5 {\n    char c;           /* 1 byte */\n    char pad1[7];     /* 7 bytes */\n    struct foo5_inner {\n        char *p;      /* 8 bytes */\n        short x;      /* 2 bytes */\n        char pad2[6]; /* 6 bytes */\n    } inner;\n};\n```\n\nIn the example below, we can observe that padding is even added at the end, for complete alignment (in case we \nhave an array of structs). Even if we don't have an array, we will have this padding:\n```cpp\nstruct mystruct_A {\n    char a;\n    char pad1[3]; /* inserted by compiler: for alignment of b */\n    int b;\n    char c;\n    char pad2[3]; /* -\"-: for alignment of the whole struct in an array */\n} x;\n```\n\nNow that we know how and why compilers insert padding in and after our structures, \nwe’ll examine what we can do to squeeze out the slop. 
\nThis is the art of structure packing.\n\nThe first thing to notice is that slop only happens in two places:\n- One is where storage bound to a larger data type (with stricter alignment \nrequirements) follows storage bound to a smaller one. \n- The other is where a struct naturally ends before its stride address, requiring \npadding so the next one will be properly aligned.\n\nThe simplest way to eliminate slop is to reorder the structure members by \ndecreasing alignment. \n\nThat is: make all the pointer-aligned subfields come first, because on a 64-bit \nmachine they will be 8 bytes. Then the 4-byte ints; then the 2-byte shorts; \nthen the character fields.\n\n#### Overriding Alignment Rules\n\nWe can ask our compiler to not use the processor’s normal alignment rules by \nusing a pragma, usually `#pragma pack`.\n\n*Do not do this casually, as it forces the generation of more expensive and slower code.*\n\n```cpp\n#pragma pack(1)  // Force 1-byte alignment\nstruct PackedExample {\n    char a;  // 1 byte\n    int b;   // 4 bytes\n};\n\n// Here, b is no longer aligned on a 4-byte boundary. \n// It forces the CPU to perform unaligned memory accesses.\n\n#pragma pack()  // Reset to default alignment\n```\n\n### Endianness: Big Endian and Little Endian\nIt specifies the order in which bytes of a word are stored in memory.\n\n*`x86(32/64 bit)` is `little-endian`.*\n\n*While `ARM` defaults to `little-endian`, it can be configured to operate in \nbig-endian mode as well.*\n\n(Big-Endian is actually more intuitive at first sight!)\n\n![](../assets/endian_1.png)\n![](../assets/endian_2.png)\n"
  },
  {
    "path": "notes/performance.md",
    "content": "## Follow these guidelines\n\n1. No unnecessary work\n    - No extra copying\n    - No extra allocations\n\n2. Use all your computing power\n    - Use all the cores available\n    - SIMD\n\n3. Avoid waits and stalls\n    - Lockless Data Structures\n    - Async APIs\n    - Job Systems\n\n4. Use Hardware efficiently\n    - Cache friendly\n    - Well predictable code\n\n5. OS Level Efficiency\n\n## Efficient C++\n\n### Use constexpr wherever possible\nComputation already done at compile time, saving us time during runtime. \n\n### Make Variables `const`\nKnowing a variable is const allows the compiler to perform various \noptimisations. \n\nFor example:\n```cpp\n{\n    const float sum = std:: accumulate(data.begin(), data.end(),0.01);\n    for (auto& num : data) {\n        num -= sum / data.size():\n    }\n    return data;\n}\n\n// can be optimised to below code by compiler since sum is const\n\n{\n    const float sum = std:: accumulate(data.begin(), data.end(),0.01);\n    const float __mean = sum / data.size(): // this expression is loop invariant\n    for (auto& num : data) {\n        num -= __mean; // no expensive division inside loop now\n    }\n    return data;\n}\n```\n\nAlso it can help compiler to do this optimization:\n\n```cpp\nbool condition = getBool();\nfor (int i {}; i < n; i++) {\n    if (condition) {\n        A(i);\n    } else {\n        B(i);\n    }\n}\n\n// if variable was declared const:\n\nconst bool condition = getBool();\nif (condition()) {\n    for (int i = 0; i < n; i++) {\n        A(i);\n    }\n} else {\n    for (int i = 0; i < n; i++) {\n        B(i);\n    }\n}\n```\nHere we have reduced branching. 
In the earlier case, when the bool was not const, the compiler\nmust assume that A or B might change its value, which prevents this\noptimisation.\n\n### Noexcept all the things\n`void f();` could throw an exception.\n\n`void f() noexcept;` will NEVER throw an exception.\n\nAlong the call stack, the compiler no longer has to generate exception-handling\ncode, which removes some overhead.\n\n### Use static for internal linkage\nFunctions that are used only within one source file should be marked\nstatic. Internal linkage is another hint to the compiler to inline the\nfunction, apart from the inline keyword.\n\n### Use `[[likely]]` and `[[unlikely]]` in conditionals\nMarking conditionals `[[likely]]` or `[[unlikely]]` lets the compiler lay out\ncode in favour of the expected branch.\n\n### Avoid Copying in structured bindings\n\n`auto [name, age] = *map.begin();` is bad (copies the element)\n\n`const auto& [name, age] = *map.begin();` use this instead\n\n### Cache Friendly\n\n```plaintext\nCONTIGUOUS          SCATTERED\n\nstd::array          std::list\nstd::vector         std::set\nstd::deque          std::unordered_set\nstd::flat_map       std::map\nstd::flat_set       std::unordered_map\n```\n\nAlso, while designing classes, we sometimes have member variables which are\nused very rarely, for example debugging info.\n\nSo instead of holding a `debugInfo` object inside our class directly, we can\nkeep a `unique_ptr` to it. Then every time our object is fetched into the\ncache, the large, rarely used data is not dragged along (only the small\npointer occupies space in the hot object).\n\n### False Sharing\nData on the same cache line being written by different threads.\n\nUse `alignas` to place such data on separate cache lines.\n\n### Avoid Indirect Calls\nVirtual function calls are indirect calls, as they require a `vtable` lookup.\n"
  },
  {
    "path": "notes/placement_new.md",
    "content": "## Placement New in C++\n\n- Allocation and construction are different.\n- A memory allocator is simply supposed to return uninitialized bits of memory.\n- It is not supposed to produce “objects”.\n- Constructing the object is the role of the constructor, which runs after the\nmemory allocator.\n\n```cpp\n// assume we have an allocator object, pool\nPool pool;\nvoid* raw = pool.alloc(sizeof(Foo));\nFoo* p = new(raw) Foo(); \n\n// together, the allocation + construction above play the role of\nFoo* p = new Foo();\n```\n- It is used to place an object at a particular location in memory. This is \ndone by supplying the place as a pointer parameter to the new part of a new \nexpression:\n\n```cpp\n#include <new>        // Must #include this to use \"placement new\"\nint main ()\n{\n  char memory[sizeof(Fred)];     // creating memory big enough to hold a Fred\n  void* place = memory;          // unnecessary step\n  Fred* f = new(place) Fred();   // Constructing the object (calls Fred::Fred())\n  // The pointers f and place will be equal\n  \n  // Remark: \n  // We take sole responsibility that the pointer we pass to the \n  // “placement new” operator points to a region of memory that is big enough \n  // and is properly aligned for the object type that we’re creating.\n\n  // Neither the compiler nor the run-time system will make any attempt to check \n  // whether we did this right.\n\n  f->~Fred();\n  // We need to explicitly call the destructor\n}\n```\n\n- We may want to do this for optimization when we need to construct multiple \ninstances of an object, and it is faster not to re-allocate memory each time \nwe need a new instance. Instead, it might be more efficient to perform a \nsingle allocation for a chunk of memory that can hold multiple objects, \neven though we don't want to use all of it at once.\n\n### How std::vector uses Placement New\n\n- Take containers like unordered_map, vector, or deque. 
These all allocate \nmore memory than is minimally required for the elements you've inserted so \nfar, to avoid requiring a heap allocation for every single insertion.\n\n```cpp\nvector<Foo> vec;\n\n// Allocate memory for a thousand Foos:\nvec.reserve(1000);\n```\n... that doesn't actually construct a thousand Foos. It simply allocates/reserves\nmemory for them. If vector did not use placement new here, it would be \ndefault-constructing Foos all over the place, as well as having to invoke their \ndestructors even for elements you never inserted in the first place.\n\nVector Example: [Link](https://medium.com/@dgodfrey206/c-placement-new-1298ccbb076e)\n"
  },
  {
    "path": "notes/pre-post-increment.md",
    "content": "## `i++` vs `++i`\n\nFirst let's see how we can overload the ++prefix and postfix++\noperators for our own class.\n\n```cpp\nclass Number {\npublic:\n  Number& operator++ ();    // ++prefix\n  Number  operator++ (int); // postfix++\n};\n```\n\n*Note the different return types: the prefix version returns by reference, the\npostfix version by value.*\n\n```cpp\nNumber& Number::operator++ ()\n{\n  // do some logic here to increment\n  return *this;\n}\n\nNumber Number::operator++ (int)\n{\n  Number ans = *this;\n  ++(*this);  // or just call operator++()\n  return ans;\n}\n```\n\n### Which is more efficient: `i++` or `++i`?\n\n- `++i` is sometimes faster than, and is never slower than, `i++`.\n\n- For intrinsic types like `int`, it doesn’t matter: `++i` and `i++` are the \nsame speed. For the Number class (above example), `++i` very well might be faster \nthan `i++`, since the latter has to make a copy of the object.\n"
  },
  {
    "path": "notes/program_to_process.md",
    "content": "## Story of the Program to a Process\n\n`Preprocessing` is the first pass of any C++ compilation. It processes\ninclude-files, conditional compilation instructions and macros.\n\n`Compilation` is the second pass. It takes the output of the \npreprocessor and generates assembly source code.\n\n`Assembly` is the third stage of compilation. It takes the assembly\nsource code and produces machine code with offsets. The\nassembler output is stored in an object file.\n\n`Linking` is the final stage of compilation. It takes one or more\nobject files or libraries as input and combines them to produce a\nsingle (usually executable) file. In doing so, it resolves references\nto external symbols, assigns final addresses to procedures/functions \nand variables, and revises code and data to reflect new addresses (a \nprocess called relocation).\n\n![](/assets/compilation_steps.png)\n\n### Object Files\n\nAfter the source code has been assembled, it will produce an object\nfile (e.g. `.o`, `.obj`), which is then linked, producing an executable file.\n\nObject and executable files come in several formats, such as `ELF`\n(Executable and Linking Format) and `COFF` (Common Object-File Format).  \nFor example, ELF is used on Linux systems, while COFF is used on \nWindows systems.\n\nThe object file contains various areas called sections. 
These sections can\nhold executable code, data, dynamic linking information, debugging data, \nsymbol tables, relocation information, comments, string tables, and notes.\n\nSome sections are loaded into the process image, some provide \ninformation needed in the building of a process image, while still others \nare used only in linking object files.\n\n![](../assets/sections.png)\n\n`readelf` and `objdump` can be used to read the headers and content of an object\nfile.\n\n### Relocation Records:\nBecause the various object files include references to each other's code\nand/or data, these references must be resolved at link time.\n\nFor example, the object file that contains main() includes calls to the \nfunction printf().\n\nAfter linking all of the object files together, the linker uses the \nrelocation records to find all of the addresses that need to be filled in.\n\nEach object file has a symbol table that contains a list of names and their\ncorresponding offsets in the text and data segments.\n\n![](../assets/linking.png)\n\n### Shared Libraries\n\n- In a typical system, a number of programs will be running. Each program \nrelies on a number of functions, some of which will be standard C library \nfunctions, like `printf()`, `malloc()`, `strcpy()`, etc., and some are \nnon-standard or user-defined functions.\n\n- If every program uses the standard C library, it means that each program \nwould normally have a unique copy of this particular library present within \nit. 
Unfortunately, this results in wasted resources and degrades efficiency \nand performance.\n\n- Since the C library is common, it is better to have each program reference \nthe one common instance of that library, instead of having each program \ncontain a copy of it.\n\n- This is implemented during the linking process, where some of the objects are linked at link time whereas some are linked at run time \n(Dynamic Linking).\n\n#### Static Linking\n- The term `statically linked` means that the program and the particular \nlibrary that it’s linked against are combined together by the linker at link \ntime.\n\n- Programs that are linked statically are linked against archives of objects \n(libraries) that typically have the extension of `.a`.  An example of such a \ncollection of objects is the standard C library, `libc.a`.\n\n- You might consider linking a program statically, for example, in cases \nwhere you aren't sure whether the correct version of a library will be \navailable at runtime, or if you are testing a new version of a library that \nyou don't yet want to install as shared.\n\n- The drawback of this technique is that the executable is quite big in \nsize, as all the needed code is brought together into it.\n\n#### Dynamic Linking\n- The term `dynamically linked` means that the program and the particular library it references are not combined together by the linker at link time.\n\n- Instead, the linker places information into the ELF that tells the \nloader which `shared object module` the code is in and which\n`runtime linker` should be used to find and bind the references.\n\n- This type of program is called a partially bound executable, because it \nisn't fully resolved. The linker, at link time, didn't cause all the \nreferenced symbols in the program to be associated with specific code from \nthe library. 
Instead, the linker simply said something like: “This \nprogram calls some functions within a particular shared object, so I'll just \nmake a note of which shared object these functions are in, and continue on”.\n\n- The binding between the program and the shared object is done at runtime: \nthat is, <mark>after the program starts</mark>, the appropriate shared \nobjects are found and bound.\n\n- Programs that are linked dynamically are linked against shared objects \nthat have the extension `.so`. An example of such an object is the shared \nobject version of the standard C library, `libc.so`.\n\n- Some advantages:\n    - Program files (on disk) become much smaller because they need not hold\n     all necessary text and data segment information.\n    \n    - Dynamic linking permits two or more processes to share read-only \n    executable modules such as standard C libraries.  Using this technique, \n    only one copy of a module needs to be resident in memory at any time, and \n    multiple processes can each execute this shared code (read only).  \n    This results in a considerable memory saving.\n\n### Process Loading\n\n1. In Linux, processes loaded from a file system (using either the \n`execve()` or `spawn()` system calls) are in `ELF` format.\n\n2. Before we can run an executable, we first have to load it into memory.\n\n3. This is done by the loader, which is generally part of the operating system.\n\n4. Memory and access validation: first, the OS kernel reads in the \nprogram file’s header information and validates the type, access \npermissions, memory requirements and the machine's ability to run its instructions.  
It \nconfirms that the file is an executable image and calculates memory requirements.\n\n    - Allocates primary memory for the program's execution.\n    - Copies the address space from secondary to primary memory.\n    - Copies the `.text` and `.data` sections from the executable into primary \n    memory.\n    - Copies program arguments (e.g., command line arguments) onto the stack.\n    - Initializes registers: sets the esp (stack pointer) to point to the top of\n    the stack, clears the rest.\n    - Jumps to the `_start` routine, which copies main()'s arguments off of the \n    stack, and jumps to `main()`.\n\n5. The memory layout consists of three segments (text, data, and stack).\nThe dynamic data segment is also referred to as the heap, the place dynamically \nallocated memory (such as from `malloc()` and `new`) comes from. Dynamically \nallocated memory is memory allocated at run time instead of compile/link time.\n\n\n### Runtime Linking\n\n1. The runtime linker is invoked when a program that was linked against a \nshared object requests that a shared object be dynamically loaded.\n\n2. Run-time dynamic linking: the application program is read from disk (ELF \nfile) into memory and unresolved references are left as invalid (typically \nzero).  The first access of an invalid, unresolved reference results in a \nsoftware trap.  The run-time dynamic linker determines why this trap occurred \nand seeks the necessary external symbol.  Only this symbol is loaded into \nmemory and linked into the calling program.\n\n3. The runtime linker is contained within the C runtime library. The runtime \nlinker performs several tasks when loading a shared library (`.so` file).\n\n4. To resolve a symbol at runtime, the runtime linker will search through the \nlist of libraries for this symbol.  In ELF files, hash tables are used for the \nsymbol lookup, so lookups are very fast.\n"
  },
  {
    "path": "notes/rvo.md",
    "content": "## Return Value Optimisation\n\nThe object to be returned is constructed in the return value slot. \nExpensive copying is avoided in this case. Otherwise the object would first be \ncreated on the local stack and then copied into the return slot (copying involved).\n\nGood Video by Arch Coffee: https://www.youtube.com/watch?v=Qp_XA8G5H3M\n\n![](../assets/rvo.png)\n\n*Here we can see that the un-named object S{} is directly created in the return\nvalue slot. \nWe have avoided unnecessary copying.*\n\n![](../assets/no-rvo.png)\n\n*In this case the compiler doesn't know which of `s1`, `s2` needs to be returned.\nHence it can't directly construct the object in the return slot. It first creates\nboth objects on the stack and then, based on the conditional test, copies one\nof them into the return value slot.*\n\n### Understanding the stack segment during a function call\n\nThis is what the stack looks like. First of all we have the return slot.\nThen the arguments to the function and finally the return address.\nThe stack pointer points below this at the start of the first instruction.\n\n```cpp\nFruit apples_to_apples (int i, Fruit x, int j) {\n    return x;\n}\n```\n<mark>The return slot is allocated by the `caller` itself. And its address is passed to\nthe `callee` via the `rdi` register (this is a hidden parameter in this case).</mark>\n\n<mark>In case of RVO, the returned object will be constructed in this slot; otherwise it would\nbe created locally on the stack and then copied here.</mark>\n\n\n![](../assets/stack.png)\n\nHere we can't elide the copy, since the stack address of `Fruit x` and the return\nslot are different. 
We must get the data out of `x` and put it into the return slot.\n\n### Slicing from derived to base\n\n```cpp\nstruct Cat : Animal {\n    int rats_eaten;\n};\n\nAnimal chopped() {\n    Cat x = ...;\n    return x;\n}\n```\n\nIn this case we do control the physical location of x (where we can construct it),\nbut x is of the wrong type for constructing into the return slot.\n(*The Animal return slot would be smaller than the Cat object*).\n\nIn these cases, where the return type is Base and the object returned is Derived,\nthe extra properties of the derived object are sliced away. We will only be \nreturning the Animal part now. \n(*To avoid this, we use pointers. Run-Time Polymorphism*)\n"
  },
  {
    "path": "notes/set_pq.md",
    "content": "## Set vs Priority Queue\n\n### Priority Queue\n- Only gives access to one element in sorted order: the highest priority \nelement. When we remove it, we get the next highest priority element.\n- It is backed by a heap (implemented on top of a vector).\n- In a (min-)heap `P < L` and `P < R` (Parent, Left, Right). (`std::priority_queue` \nis a max-heap by default.)\n- The highest priority element is at the top of the tree (or front of the vector).\n- O(1) access to the top element.\n- Deletion is `O(log n)` (we replace the top of the tree with the last element\nand then perform swapping to restore the heap property).\n- Insertion is `O(log n)` (we put the new element at the new extreme and then perform\nswapping to restore the heap property).\n- One point to note is that operations in a PQ involve a lot of swapping of \nelements.\n\n\n### Set\n- A set allows you full access in sorted order.\n- We can do: find two elements somewhere in the middle of the set, then \ntraverse in order from one to the other.\n- Insertion of any element is `O(log n)`, and the constant factor is greater than in a PQ.\n- Many more operations (lower/upper bound, element lookup, iteration, etc.).\n- Backed by self-balancing BSTs.\n- In a binary search tree `L < P < R`.\n- Insert and erase operations are slightly slower than a PQ because `std::set` makes \nmany memory allocations. Every element of `std::set` is stored in its own \nallocation.\n- The good thing is that rebalancing only involves pointer updates, not moving elements. "
  },
  {
    "path": "notes/smart_pointers.md",
    "content": "## Smart Pointers\n\n- Unique Pointers\n- Shared Pointers\n- Weak Pointers\n\nUsage syntax is the same as for raw pointers: thanks to operator overloading,\nusage remains unchanged.\n```cpp\nstd::shared_ptr<string> p = std::make_shared<string>(\"Hello\");\nauto q = p;\np = nullptr;\nif (q != nullptr) {\n    std::cout << q->length() << *q << '\\n';\n}\n```\n\n```cpp\n{\n    T* ptr = new T;\n    // ...\n    delete ptr;\n    // it is the programmer's responsibility to delete this pointer after use,\n    // otherwise there will be a memory leak\n}\n\n// a simplified sketch of unique_ptr:\ntemplate<class T> \nclass unique_ptr {\n    T* p_ = nullptr;\npublic:\n    ~unique_ptr() {\n        delete p_;    // deletion done automatically upon destruction\n    }\n};\n```\n\n- A raw pointer `T*` is copyable. If I copy it, which of us now has \nownership? Who holds the responsibility of cleaning up? Both can't delete it.\n\n- A unique pointer is not copyable, it is only movable. When we move from A to B,\nthe move constructor nulls out the source pointer (maintaining unique ownership).\n\n```cpp\n// unique_ptr is always a template of two parameters.\n// the second parameter is defaulted to std::default_delete<T>\n\ntemplate<class T, class Deleter = std::default_delete<T>>\nclass unique_ptr {\n    T* p_ = nullptr;\n    Deleter d_;\n\n    ~unique_ptr() {\n        if (p_) d_(p_); // call the deleter on this pointer\n    }\n};\n\ntemplate<class T> \nstruct default_delete {\n    void operator()(T *p) const {\n        delete p; \n    }\n};\n\n// now we can use this to do some nice things\n\nstruct FileCloser {\n    void operator() (FILE *fp) const {\n        assert (fp != nullptr);\n        fclose(fp);   // instead of delete we call fclose\n    }\n};\n\nFILE *fp = fopen(\"input.txt\", \"r\");\nstd::unique_ptr<FILE, FileCloser> uptr(fp);\n```\n\n#### Rule of thumb for smart pointers\n- Treat smart pointer types just like raw pointer types\n    - Pass by value!\n    - Return by value (of course)!\n    - Passing a pointer by reference \n- A function taking a 
unique_ptr by value signals transfer of ownership\n\n```cpp\n#include <iostream>\n#include <memory>\n\nclass MyClass {\npublic:\n    MyClass() { std::cout << \"MyClass constructor\\n\"; }\n    ~MyClass() { std::cout << \"MyClass destructor\\n\"; }\n    void show() { std::cout << \"Hello from MyClass\\n\"; }\n};\n\n// Function that takes unique_ptr by value (transfers ownership)\nvoid takeOwnership(std::unique_ptr<MyClass> ptr) {\n    std::cout << \"Taking ownership\\n\";\n    ptr->show();\n}\n\nint main() {\n    std::unique_ptr<MyClass> myPtr = std::make_unique<MyClass>();\n\n    // Pass unique_ptr by value to the function, transferring ownership\n    takeOwnership(std::move(myPtr)); // myPtr is moved\n\n    // At this point, myPtr is no longer valid\n    if (!myPtr) {\n        std::cout << \"myPtr is now null\\n\";\n    }\n\n    return 0;\n}\n```\n#### Shared Pointer\n- syntax similar to unique_ptr\n- It expresses shared ownership, via reference counting.\n\n![](../assets/sharedPtr.png)\n![](../assets/class.png)\n\n`F`, `V` are base classes. `T` is a derived class. Base-class pointers point\nto an object of the derived class. Each may point to a different offset within the\nheap-allocated object.\n\n![](../assets/sharedPtr2.png)\n"
  },
  {
    "path": "notes/templates.md",
    "content": "## C++ Templates\n\nA template is not a thing: it's a recipe for making things.\n\n### Function Templates\nThese are recipes for making functions.\n```cpp\ntemplate<class T>\nT const& min(T const& a, T const& b) {\n    return (a < b) ? a : b;\n}\n\ntemplate<class RandomIt, class Compare> \nvoid sort(RandomIt first, RandomIt last, Compare comp);\n```\n### Class Templates\nThese are recipes for making classes.\n```cpp\ntemplate<class T, size_t N>\nstruct array {\n    ...\n};\n```\n### Alias Templates (C++11)\nThese are recipes for making type aliases.\n```cpp\ntemplate<class Key, class Val>\nusing my_map = map<Key, Val, greater<Key>>;\n\nmy_map<std::string, int> msi;\n```\n\n*If some `if/else` decision can be taken at compile time, then there\nis no branching at run time, which is pretty efficient. \nHence try to take as many decisions as possible at compile time.*\n\nWe put the `recipe` in header files and include these header files\nin all those source files where we need to instantiate it.\nThe compiler needs to see the recipe for making actual things.\n\nThese template definitions (recipes) are treated as `inline`.\n\n`inline`: inline functions and variables can have multiple definitions \nacross different translation units (all definitions of an inline function must be identical across all translation units).\n\nIn simple words, it means that we can place the declaration and definition\ninside a header file and then include this header file in different\nsource files. This way we have multiple definitions, but each\ndefinition is identical.\n\n#### Concepts\nWe can use them to put constraints on generic code. \nBecause templates are just recipes, they are only instantiated when someone calls\nthem. So the compiler will only generate the actual thing when it is used. 
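\n\nAs a quick sketch (this example is mine, not from the original notes), the `min` recipe above can be constrained with the standard `std::totally_ordered` concept from `<concepts>` (C++20), so instantiating it with an unordered type fails with a clear error instead of a deep template diagnostic:\n\n```cpp\n#include <concepts>\n#include <iostream>\n\n// The min() recipe, now constrained: only types that support\n// <, >, <=, >=, == and != satisfy std::totally_ordered.\ntemplate<std::totally_ordered T>\nT const& min(T const& a, T const& b) {\n    return (a < b) ? a : b;\n}\n\nint main() {\n    std::cout << min(3, 7) << '\\n';   // fine: int is totally ordered; prints 3\n    // calling min with a type lacking comparisons would fail the constraint check\n    return 0;\n}\n```\n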
\n\nFor example, a concept can check that a templated function which uses `push_back`\nis never instantiated with a container like `set` that lacks it.\n```cpp\ntemplate<typename Coll>\nconcept HasPushBack = requires (Coll c, typename Coll::value_type v) {\n    c.push_back(v);\n};\n// this just checks whether calling push_back would be valid\n```\n![](../assets/concepts.png)\n\n\n#### Variadic Templates\n\n```cpp\n#include <iostream>\n\n// Base case: This function is called when there's only one argument left\ntemplate <typename T>\nT sum(T value) {\n    return value;\n}\n\n// Variadic template: This function is called with two or more arguments\ntemplate <typename T, typename... Args>\nT sum(T first, Args... args) {\n    return first + sum(args...);  // Recursively call sum with the remaining arguments\n}\n\nint main() {\n    std::cout << \"Sum of 1, 2, 3, 4, 5: \" << sum(1, 2, 3, 4, 5) << std::endl;\n    std::cout << \"Sum of 1.5, 2.5, 3.5: \" << sum(1.5, 2.5, 3.5) << std::endl;\n    std::cout << \"Sum of 10: \" << sum(10) << std::endl;  // Single argument case\n    return 0;\n}\n```\n\n### Compile time prime checker\n\nNote that compilers impose a limit on the maximum depth of recursive template instantiation.\n\n\n```cpp\n#include <iostream>\n\n// Base case: Primary template to check divisibility\ntemplate<int num, int divisor>\nstruct IsPrimeHelper {\n    static constexpr bool value = (num % divisor != 0) && IsPrimeHelper<num, divisor - 1>::value;\n};\n\n// Specialization for the base case when divisor reaches 1\ntemplate<int num>\nstruct IsPrimeHelper<num, 1> {\n    static constexpr bool value = true;\n};\n\n// Specialization to handle numbers less than 2 (not prime)\ntemplate<>\nstruct IsPrimeHelper<1, 1> {\n    static constexpr bool value = false;\n};\n\n// Main template to check if a number is prime\ntemplate<int num>\nstruct IsPrime {\n    // The helper is instantiated with num and num / 2 as the starting divisor\n    static constexpr bool value = (num > 1) && IsPrimeHelper<num, 
num / 2>::value;\n};\n\nint main() {\n    // Check if 29 is prime\n    constexpr int number1 = 29;\n    constexpr bool isNumber1Prime = IsPrime<number1>::value;\n    std::cout << number1 << (isNumber1Prime ? \" is prime.\" : \" is not prime.\") << std::endl;\n\n    // Check if 10 is prime\n    constexpr int number2 = 10;\n    constexpr bool isNumber2Prime = IsPrime<number2>::value;\n    std::cout << number2 << (isNumber2Prime ? \" is prime.\" : \" is not prime.\") << std::endl;\n\n    return 0;\n}\n```\n\n```cpp\n#include <iostream>\n\n// Helper function to check for divisibility at compile time\nconstexpr bool isDivisible(int num, int divisor) {\n    return (num % divisor == 0);\n}\n\n// Recursive constexpr function to check if a number is prime\nconstexpr bool isPrimeHelper(int num, int divisor) {\n    // Base case: If divisor is 1, it's prime\n    if (divisor == 1) return true;\n    // If the number is divisible by the current divisor, it's not prime\n    if (isDivisible(num, divisor)) return false;\n    // Recursively check for next divisor\n    return isPrimeHelper(num, divisor - 1);\n}\n\n// Main constexpr function to check primality\nconstexpr bool isPrime(int num) {\n    // Handle special cases for numbers less than 2\n    return (num > 1) && isPrimeHelper(num, num / 2);\n}\n\nint main() {\n    // Test the compile-time prime checker\n    constexpr int number1 = 29;\n    constexpr int number2 = 10;\n    \n    // These conditions are evaluated at compile time\n    if constexpr (isPrime(number1)) {\n        std::cout << number1 << \" is prime.\" << std::endl;\n    } else {\n        std::cout << number1 << \" is not prime.\" << std::endl;\n    }\n\n    if constexpr (isPrime(number2)) {\n        std::cout << number2 << \" is prime.\" << std::endl;\n    } else {\n        std::cout << number2 << \" is not prime.\" << std::endl;\n    }\n\n    return 0;\n}\n```"
  },
  {
    "path": "notes/tmux.md",
    "content": "## Tmux cheat sheet\n\nBy default the prefix key is Ctrl + B; I have changed it to Ctrl + Space.\n\nAccess my tmux config at [link](https://github.com/Shivam5022/.dotfiles/blob/main/tmux.conf)\n\nSome of the commands below are as per my configuration, which differs from the defaults.\n\n### Sessions\n\n1.  Start a new session\n\n            tmux new -s <session-name>\n\n2.  Attach to a session (from outside tmux)\n\n            tmux attach -t <session-name>\n            tmux a\n\n3.  Detach from a session\n\n            Press <PREFIX>, then d\n\n4.  List sessions\n\n            tmux ls\n\n### Windows\n\nWe can have windows inside a session.\n\n1.  Create a new window\n\n            Press <PREFIX>, then c\n\n2.  Rename the current window\n\n            Press <PREFIX>, then ,\n\n3.  Kill a window / session\n\n            Press Ctrl+d\n\n4.  Switch between sessions and windows\n\n            Press <PREFIX>, then w\n            Press <PREFIX>, then <window number>\n            Press <PREFIX>, then p (previous window)\n            Press <PREFIX>, then n (next window)\n\n### Panes\n\nIn a window we can have multiple panes:\n\n1.  Split horizontally\n\n            Press <PREFIX>, then |\n\n2.  Split vertically\n\n            Press <PREFIX>, then _\n\n3.  Switch between panes\n\n            Press <PREFIX>, then <arrow keys>\n\n### More\n\n1.  Rename a session\n\n            tmux rename-session -t <old-name> <new-name>\n\n2.  Kill all sessions\n\n            tmux kill-server\n"
  },
  {
    "path": "notes/vim.md",
    "content": "## NeoVim\n\n**Update:** I have moved to [Helix](https://helix-editor.com). The config is available in the same repo as below. \n(Personally I found it better than nvim, because it doesn't require much configuration. It simply works.)\n\nMy neovim configuration is available at [this link](https://github.com/Shivam5022/.dotfiles)\n\nSome resources to get started with Modal Editing:\n\n- ThePrimeagen YouTube playlist [Link](https://youtube.com/playlist?list=PLm323Lc7iSW_wuxqmKx_xxNtJC_hJbQ7R&si=cu_8_omQjZSTbiL7)\n- Vim motions tutorial [Link](https://youtu.be/IiwGbcd8S7I?si=xO5xlPMpo-Vn5hrA)\n- NeoVim configuration setup [Link](https://youtu.be/6pAG3BHurdM?si=2lm02xVFhozGPGFF)\n- Kickstart.nvim [Link](https://youtu.be/m8C0Cq9Uv9o?si=Bz17f3KxKFoxVaQc)\n"
  },
  {
    "path": "notes/virtual_functions.md",
    "content": "## Virtual Functions and VTables\n\nMust-read C++ FAQ article: [Link](https://isocpp.org/wiki/faq/virtual-functions)\n\n- Used when we want to call a member function of the derived class through a pointer of \nthe base class.\n\n```cpp\nclass Base {\n    public:\n        Base() {\n            std::cout << \"Base Constructed\" << std::endl;\n        }\n        ~Base() {\n            std::cout << \"Base Destructed\" << std::endl;\n        }\n        void func() {\n            std::cout << \"Base member function\" << std::endl;\n        }\n};\n\nclass Derived : public Base {\n    public:\n        Derived() {\n            std::cout << \"Derived Constructed\" << std::endl;\n        }\n        ~Derived() {\n            std::cout << \"Derived Destructed\" << std::endl;\n        }\n        void func() {\n            std::cout << \"Derived member function\" << std::endl;\n        }\n};\n\nint main () {\n    Derived instance;\n    instance.func();\n\n    // Base Constructed\n    // Derived Constructed\n    // Derived member function     ---> derived will be printed\n    // Derived Destructed\n    // Base Destructed\n\n\n    Base* ptr = new Derived;\n    ptr->func();\n    delete ptr;\n\n    // Base Constructed\n    // Derived Constructed\n    // Base member function     ---> base will be printed\n    // Base Destructed\n\n    // 2 things to note: the Base member function is called, and Derived is not destructed.\n\n}\n```\n\nIn the above example, if we want the derived `func` to be called, we have to mark the base function `virtual`, and the\nderived function should override it.\n\n```cpp\nvirtual void func() {};     // in base class\n\nvoid func() override {};    // in derived class\n\nptr->Base::func();          // to explicitly call the base member function even after \n                            // declaring it virtual.\n``` \n\n#### Both the destructors should be called!\n\nTo ensure this, mark the destructor of the base class as virtual. 
This will ensure \nthat the destructor of the derived class is also called upon object destruction.\n\n`virtual ~Base() {}`\n\n### How can C++ achieve dynamic binding yet also static typing?\n\nWhen you have a pointer to an object, the object may actually be of \na class that is derived from the class of the pointer (e.g., a \nVehicle* that is actually pointing to a Car object; this is called \n“polymorphism”). Thus there are two types: the (static) type of the \npointer (Vehicle, in this case), and the (dynamic) type of the \npointed-to object (Car, in this case).\n\nStatic typing means that the legality of a member function \ninvocation is checked at the earliest possible moment: by the \ncompiler at compile time. The compiler uses the static type of the \npointer to determine whether the member function invocation is \nlegal. If the type of the pointer can handle the member function, \ncertainly the pointed-to object can handle it as well. E.g., if \nVehicle has a certain member function, certainly Car also has that \nmember function since Car is a kind-of Vehicle.\n\nDynamic binding means that the address of the code in a member \nfunction invocation is determined at the last possible moment: \nbased on the dynamic type of the object at run time. It is called \n“dynamic binding” because the binding to the code that actually \ngets called is accomplished dynamically (at run time). Dynamic \nbinding is a result of virtual functions.\n\n### Virtual Table (vTable) - Supports Dynamic Dispatch\n\nTo determine which function will be called when we have a scenario like:\n\n```cpp\nbase* ptr = &derived;\nptr->function();\n```\n\n- If `function()` is not virtual:\n    - we have `early binding`, done at compile time only. The function \n    corresponding to the pointer type (here `base`) will be called.\n- If `function()` is virtual:\n    - we have `late binding`, done at run time. 
We will *use the `vtable`\n    of the `derived class`* to fetch the appropriate function. If this function is\n    overridden in the derived class, then the vtable will point to the overridden\n    function. Otherwise, if we have not overridden this function in the derived\n    class, then the vtable will point to the function of the base class (since all the\n    base class functions are inherited by the derived class).\n    - Hence:\n        - If overridden: call the overridden function of the derived class.\n        - Else: call the function of the base class.\n\n[PDF version](../assets/Vtables.pdf)\n\n![](../assets/vtables.png)\n\n## Virtual Functions in Constructors and Destructors\n\n![](../assets/constructor.png)\n\nWhen the constructor of the base class is being invoked, the derived part has\nnot yet been constructed: hence the function corresponding to the base will be called.\n\nThe same holds for the destructor: the derived part has already been destroyed, and hence\nthe function corresponding to the base will be called.\n\nTherefore, don't do any fancy things inside constructors/destructors which\ninvolve invoking virtual functions.\n"
  }
]