[
  {
    "path": ".gitignore",
    "content": ".vscode/\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "Copyright (c) 2021, Thanh Nguyen, thanh.it1995 (at) gmail.com\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without modification,\nare permitted provided that the following conditions are met:\n\n - Redistributions of source code must retain the above copyright notice,\n   this list of conditions and the following disclaimer.\n\n - Redistributions in binary form must reproduce the above copyright notice,\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution.\n\n - Neither the name of Thanh Nguyen nor the names of its contributors may\n   be used to endorse or promote products derived from this software without\n   specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\nANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "README.md",
    "content": "# MULTIPLE THREADING IN PRACTICE\n\n## DESCRIPTION\n\nThis repo helps you to practise multithreading in a logical sequence, which is divided into several demonstrations.\nPlus, you could apply your learning better by doing exercises.\n\nThe repo consists of two main sections:\n\n- \"demo\" (demostrations).\n- \"exer\" (exercises).\n\nAll the demos (and exers) are really simple, easy to understand, even for difficult terms.\n\nIf you find it helpful, please give my repo a star. Thank you.\n\n&nbsp;\n\n## AUTHOR & LICENSE\n\nAuthor: Thanh Nguyen\n\n- Email: thanh.it1995@gmail.com\n- Facebook: <https://www.facebook.com/thanh.it95>\n\nThis repo is licensed under the [3-Clause BSD License](LICENSE.txt).\n\n&nbsp;\n\n## LANGUAGES SUPPORTED\n\n| Directory name | Description                  |\n| -------------- | ---------------------------- |\n| `cpp-std`      | C++20 std threading          |\n| `cpp-pthread`  | C++11 POSIX threading        |\n| `cpp-boost`    | C++98 Boost threading        |\n| `csharp`       | C# 7.3 with Dot Net 6        |\n| `java`         | Java JDK 17                  |\n| `python`       | Python 3.10                  |\n| `js-nodejs`    | Javascript ES2019/Nodejs 18  |\n\nSpecial notes for C++ demos/exers: Please read the specified `readme.md` in corresponding directory.\n\n&nbsp;\n\n## THE NOTES AND ARTICLES\n\nThe notes and articles are the additional resources for the source code, which guides you for better research, step by step. You may consider it the comment/description at the beginning of the source code.\n\n```text\n  ORIGINAL SOURCE CODE FILE                  SOURCE CODE FILE              NOTES AND ARTICLES\n------------------------------        ------------------------------     ----------------------\n|                            |        |                            |     |                    |\n| /* THE COMMENTS... 
*/      |        |                            |     |    THE COMMENTS    |\n|                            |        |                            |     |                    |\n| #include <iostream>        |        | #include <iostream>        |     |                    |\n| using namespace std;       |        | using namespace std;       |     |                    |\n|                            |  ===>  |                            |  +  |                    |\n| int main() {               |        | int main() {               |     |                    |\n|   cout << \"Hello thread\";  |        |   cout << \"Hello thread\";  |     |                    |\n|   return 0;                |        |   return 0;                |     |                    |\n| }                          |        | }                          |     |                    |\n|                            |        |                            |     |                    |\n------------------------------        ------------------------------     ----------------------\n```\n\nThere are 2 notes:\n\n- [notes-demos-exercises.md](notes-demos-exercises.md): The notes that go along with the original source code.\n- [notes-articles.md](notes-articles.md): Extra helpful notes gathered during my research.\n\n&nbsp;\n\n**For your best result, I strongly recommend that you read [notes-demos-exercises.md](notes-demos-exercises.md) while enjoying the source code (demos and exercises).**\n\n&nbsp;\n\n## ROADMAP FOR THE LEARNERS\n\nThis is the roadmap for you, composed and researched carefully with all my heart. 
You should learn in the sequence listed below.\n\n**If you just want to learn the basics to get a taste of multithreading:**\n\n- Demo: hello, join, pass arg, sleep, list-threads, race-condition, mutex, synchronized-block.\n- Exer: max-div.\n\n**If you aim to become a Software Developer:**\n\n- Demo: hello, join, pass arg, sleep, list-threads, terminate, return-value, exec-service, race-condition, mutex, synchronized-block, deadlock, blocking-queue, atomic.\n- Exer: max-div, producer-consumer, product-matrix, data-server.\n\n**If you really want to do in-depth research:** Learn all!!!\n\n&nbsp;\n\n---\n\n## INTRODUCTION TO MULTITHREADING\n\n### GETTING STARTED\n\nBob sends four messages to Alice: `I love`, `you`, `not`, `her`.\n\nSurprisingly, Alice receives `I love`, `her`, `not`, `you` (that means \"I love her, not you\"). So sad!\n\n```text\nTRADITIONAL (ONE THREAD)\n\n    ===========================================> Time\n\n                 Main thread\n    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>\n      \"I love\"   \"you\"   \"not\"   \"her\"\n\n\n\nMULTITHREADING (FOUR THREADS)\n\n    ===========================================> Time\n\n    ~~~~~~~~~~~~>\n      \"I love\"\n\n                      ~~~~~~~~~~~~>\n                          \"you\"\n\n              ~~~~~~~~~~~~>\n                  \"not\"\n\n        ~~~~~~~~~~~~>\n            \"her\"\n```\n\nIf you use multithreading or something similar, the scenario above is entirely possible. Multithreading allows the four messages to be sent in parallel, so their order may change unpredictably by the time they reach Alice.\n\nIn a traditional simple app, there is only one thread (the \"main thread\"). 
If you apply multithreading, your app may have multiple threads (including the \"main thread\").\n\nBy learning multithreading:\n\n- You get closer to the operating system.\n- You can understand various terms: concurrency, parallelism, asynchrony, synchronization.\n- You gain background knowledge for learning asynchronous programming and parallel programming.\n\nSo, why multithreading?\n\n### WHY MULTITHREADING\n\nMultithreaded programs can improve performance compared to traditional simple programs (which use only a single thread).\n\nMultithreading is used as an underlying technique in various fields:\n\n- Web browsers (Chrome, Edge, Firefox...).\n- Web servers.\n- Graphic editors (Adobe Photoshop, Corel Draw...).\n- Computer games.\n- Database management systems.\n- Network programming.\n- Video encoders.\n- And more...\n\nBenefits of multithreading:\n\n- Improving application responsiveness.\n  - Any program in which many activities are not dependent upon each other can be redesigned so that each activity is defined as a thread. For example, the user of a multithreaded GUI does not have to wait for one activity to complete before starting another.\n\n- Using multiprocessors efficiently.\n  - Typically, applications that express concurrency requirements with threads need not take into account the number of available processors. The performance of the application improves transparently with additional processors.\n  - Numerical algorithms and applications with a high degree of parallelism, such as matrix multiplications, can run much faster when implemented with threads on a multiprocessor.\n\n- Improving throughput.\n  - Many concurrent compute operations and I/O requests can run within a single process.\n\n- Program structure simplification.\n  - Threads can be used to simplify the structure of complex applications, such as server-class and multimedia applications. 
Simple routines can be written for each activity, making complex programs easier to design and code, and more adaptive to a wide variation in user demands.\n\n- Using fewer system resources.\n  - Threads impose minimal impact on system resources. Threads require less overhead to create, maintain, and manage than a traditional process.\n\n- Better communication.\n  - Thread synchronization functions can be used to provide enhanced process-to-process communication.\n  - In addition, sharing large amounts of data through separate threads of execution within the same address space provides extremely high-bandwidth, low-latency communication between separate tasks within an application.\n\n&nbsp;\n\nIf you want to explore more articles, read here: [notes-articles.md](notes-articles.md).\n\n&nbsp;\n\nArticle references:\n\n- [Oracle Documentation Home, Multithreaded Programming Guide, Chapter 1 Covering Multithreading Basics, Benefiting From Multithreading](https://docs.oracle.com/cd/E19455-01/806-5257/6je9h032d/index.html)\n- [Oracle Documentation Home, JDK 1.1 for Solaris Developer's Guide, Chapter 2 Multithreading, Benefits of Multithreading](https://docs.oracle.com/cd/E19455-01/806-3461/6jck06gqj/index.html)\n\n&nbsp;\n\n---\n\n## REFERENCES\n\nAll general references in my repo.\n\nRead here: [references.md](references.md).\n"
  },
  {
    "path": "cpp/.gitignore",
    "content": "a\na.out\ntmp*\n"
  },
  {
    "path": "cpp/README.md",
    "content": "# C++ MULTITHREADING\n\n## DESCRIPTION\n\nMultithreading in C++.\n\n&nbsp;\n\n## PROJECT STRUCTURE\n\n| Directory name | Description           | Notes |\n| -------------- | --------------------- | ----- |\n| `cpp-std`      | C++20 std threading   | Most source code files are in C++11. Some features require newer standard. |\n| `cpp-pthread`  | C++11 POSIX threading |       |\n| `cpp-boost`    | C++98 Boost threading |       |\n\n&nbsp;\n\n## COMPILATION\n\nEnsure that your compiler meets the C++ standard as mentioned above.\n\n### gcc compiler\n\nTo compile with specified C++ standard, use option `-std`:\n\n- C++98: `g++ -o exec_filename filename.cpp -std=c++98`\n- C++11: `g++ -o exec_filename filename.cpp -std=c++11`\n- C++20: `g++ -o exec_filename filename.cpp -std=c++20`\n\nUsually in Linux/Unix environments, we shall use POSIX threading. This leads to linking objects with pthread by option `-lpthread`.\n\nAdditionally, if you use Boost:\n\n- `-lboost_thread` for all code.\n- `-lboost_chrono` for the code using boost::chrono.\n- `-lboost_random` for the code using boost::random.\n\n&nbsp;\n\nExample 1:\n\n```shell\n# Compile\ng++ -o output_exe demo00.cpp -lpthread -std=c++20\n\n# Run\n./output_exe\n```\n\nExample 2 for lib Boost:\n\n```shell\n# Compile\ng++ -o output_exe demo04a-sleep.cpp -lpthread -lboost_thread -lboost_chrono\n\n# Run\n./output_exe\n```\n\n&nbsp;\n\n### Other compilers and/or environments\n\nYou may consider a suitable IDE/compiler (e.g. Microsoft Visual Studio, mingw...).\n"
  },
  {
    "path": "cpp/cpp-boost/demo00.cpp",
    "content": "/*\nINTRODUCTION TO MULTITHREADING\nYou should try running this app several times and see results.\n*/\n\n\n#include <iostream>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask() {\n    for (int i = 0; i < 300; ++i)\n        cout << \"B\";\n}\n\n\n\nint main() {\n    boost::thread th(&doTask);\n\n    for (int i = 0; i < 300; ++i)\n        cout << \"A\";\n\n    th.join();\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo01a01-hello.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion A01: Using functions\n*/\n\n\n#include <iostream>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask() {\n    cout << \"Hello from example thread\" << endl;\n}\n\n\n\nint main() {\n    boost::thread th(&doTask);\n\n    cout << \"Hello from main thread\" << endl;\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo01a02-hello.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion A02: Using functions allowing passing 2 arguments\n*/\n\n\n#include <iostream>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask(char const* message, int number) {\n    cout << message << \" \" << number << endl;\n}\n\n\n\nint main() {\n    boost::thread th(&doTask, \"Good day\", 19);\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo01b-hello-class01.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion B: Using class methods\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nclass Example {\npublic:\n    void doTask(string message) {\n        cout << message << endl;\n    }\n};\n\n\n\nint main() {\n    Example example;\n\n    boost::thread th(&Example::doTask, &example, \"Good day\");\n    // boost::thread th(boost::bind(&Example::doTask, &example, \"Good day\"));\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo01b-hello-class02.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion B: Using class methods\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/bind/bind.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nclass Example {\npublic:\n    void run() {\n        boost::thread th(&Example::doTask, this, \"Good day\");\n        // boost::thread th(boost::bind(&Example::doTask, this, \"Good day\"));\n        th.join();\n    }\n\nprivate:\n    void doTask(string message) {\n        cout << message << endl;\n    }\n};\n\n\n\nint main() {\n    Example example;\n    example.run();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo01b-hello-class03.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion B: Using class methods\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nclass Example {\npublic:\n    void run() {\n        boost::thread th(&Example::doTask, \"Good day\");\n        th.join();\n    }\n\nprivate:\n    static void doTask(string message) {\n        cout << message << endl;\n    }\n};\n\n\n\nint main() {\n    Example example;\n    example.run();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo01b-hello-functor.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion B: Using functors\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nclass Example {\npublic:\n    void operator()(string message) {\n        cout << message << endl;\n    }\n};\n\n\n\nint main() {\n    Example example;\n\n    boost::thread th(example, \"Good day\");\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo02-join.cpp",
    "content": "/*\nTHREAD JOINS\n*/\n\n\n#include <iostream>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doHeavyTask() {\n    // Do a heavy task, which takes a little time\n    for (int i = 0; i < 2000000000; ++i);\n\n    cout << \"Done!\" << endl;\n}\n\n\n\nint main() {\n    boost::thread th(&doHeavyTask);\n\n    th.join();\n\n    cout << \"Good bye!\" << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo03a-pass-arg.cpp",
    "content": "/*\nPASSING ARGUMENTS\nVersion A: Passing multiple arguments with various data types\n*/\n\n\n#include <iostream>\n#include <cstdio>\n#include <string>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nstruct Point {\n    int x;\n    int y;\n\n    Point(int x, int y): x(x), y(y) { }\n};\n\n\n\nvoid doTask(int a, double b, string c, char const* d, Point e) {\n    char buffer[50] = { 0 };\n    std::sprintf(buffer, \"%d  %.1f  %s  %s  (%d %d)\", a, b, c.data(), d, e.x, e.y);\n    cout << buffer << endl;\n}\n\n\n\nint main() {\n    boost::thread thFoo(&doTask, 1, 2, \"red\", \"red\", Point(0, 0));\n    boost::thread thBar(&doTask, 3, 4, \"blue\", \"blue\", Point(9, 9));\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo03b-pass-arg.cpp",
    "content": "/*\nPASSING ARGUMENTS\nVersion B: Passing constant references\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask(const string& msg) {\n    cout << msg << endl;\n}\n\nvoid doTask(const string& msg) {\n    cout << msg << endl;\n}\n\n\n\nint main() {\n    boost::thread thFoo(&doTask, \"foo\");\n    boost::thread thBar(&doTask, \"bar\");\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo03c-pass-arg.cpp",
    "content": "/*\nPASSING ARGUMENTS\nVersion C: Passing normal references\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/ref.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask(string& msg) {\n    cout << msg << endl;\n}\n\n\n\nint main() {\n    string a = \"lorem ipsum\";\n    string b = \"dolor amet\";\n\n    // We should use boost:ref to pass references\n    boost::thread thFoo(&doTask, boost::ref(a));\n    boost::thread thBar(&doTask, boost::ref(b));\n\n    // boost::thread thFoo(&doTask, a);\n    // boost::thread thBar(&doTask, b);\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo04a-sleep.cpp",
    "content": "/*\nSLEEP\nVersion A: Sleep for a specific duration\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask(string name) {\n    cout << name << \" is sleeping\" << endl;\n    boost::this_thread::sleep_for(boost::chrono::seconds(3));\n    cout << name << \" wakes up\" << endl;\n}\n\n\n\nint main() {\n    boost::thread thFoo(&doTask, \"foo\");\n\n    thFoo.join();\n\n    cout << \"Good bye\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo04b-sleep.cpp",
    "content": "/*\nSLEEP\nVersion B: Sleep until a specific time point\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\ntypedef boost::chrono::system_clock sysclock;\n\n\n\nvoid doTask(string name, sysclock::time_point tpWakeUp) {\n    boost::this_thread::sleep_until(tpWakeUp);\n    cout << name << \" wakes up\" << endl;\n}\n\n\n\nint main() {\n    sysclock::time_point tpNow = sysclock::now();\n    sysclock::time_point tpWakeUpFoo = tpNow + boost::chrono::seconds(7);\n    sysclock::time_point tpWakeUpBar = tpNow + boost::chrono::seconds(3);\n\n    cout << \"foo will sleep until \" << mylib::getTimePointStr(tpWakeUpFoo) << endl;\n    cout << \"bar will sleep until \" << mylib::getTimePointStr(tpWakeUpBar) << endl;\n\n    boost::thread thFoo(&doTask, \"foo\", tpWakeUpFoo);\n    boost::thread thBar(&doTask, \"bar\", tpWakeUpBar);\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo05-id.cpp",
    "content": "/*\nGETTING THREAD'S ID\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(2));\n    cout << boost::this_thread::get_id() << endl;\n}\n\n\n\nint main() {\n    boost::thread thFoo(&doTask);\n    boost::thread thBar(&doTask);\n\n    cout << \"foo's id: \" << thFoo.get_id() << endl;\n    cout << \"bar's id: \" << thBar.get_id() << endl;\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo06a-list-threads.cpp",
    "content": "/*\nLIST OF MULTIPLE THREADS\nVersion A: Using standard arrays\n*/\n\n\n#include <iostream>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask(int index) {\n    cout << index;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 5;\n\n    boost::thread lstTh[NUM_THREADS];\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh[i] = boost::thread(&doTask, i);\n    }\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh[i].join();\n    }\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo06b-list-threads.cpp",
    "content": "/*\nLIST OF MULTIPLE THREADS\nVersion B: Using the std::vector\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <boost/shared_ptr.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\ntypedef boost::shared_ptr<boost::thread> threadptr;\n\n\n\nvoid doTask(int index) {\n    cout << index;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 5;\n\n    vector<threadptr> lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        threadptr ptr = boost::make_shared<boost::thread>(&doTask, i);\n        lstTh.push_back(ptr);\n    }\n\n    for (int i = 0; i < lstTh.size(); ++i) {\n        lstTh[i]->join();\n    }\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo06c-list-threads.cpp",
    "content": "/*\nLIST OF MULTIPLE THREADS\nVersion C: Using the boost::thread_group\n*/\n\n\n#include <iostream>\n#include <boost/bind/bind.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask(int index) {\n    cout << index;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 5;\n\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask, i));\n        // lstTh.create_thread(boost::bind(&doTask, i));\n    }\n\n    lstTh.join_all();\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo07-terminate.cpp",
    "content": "/*\nFORCING A THREAD TO TERMINATE (i.e. killing the thread)\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvolatile bool isRunning;\n\n\n\nvoid doTask() {\n    while (isRunning) {\n        cout << \"Running...\" << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(2));\n    }\n}\n\n\n\nint main() {\n    isRunning = true;\n    boost::thread th(&doTask);\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(6));\n    isRunning = false;\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo08-return-value.cpp",
    "content": "/*\nGETTING RETURNED VALUES FROM THREADS\nUsing pointers or references (traditional way)\n*/\n\n\n#include <iostream>\n#include <boost/ref.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doubleValue(int arg, int* res) {\n    (*res) = arg * 2;\n}\n\n\n\nvoid squareValue(int arg, int& res) {\n    res = arg * arg;\n}\n\n\n\nint main() {\n    int result[3];\n\n    boost::thread thFoo(&doubleValue, 5, &result[0]);\n    boost::thread thBar(&doubleValue, 80, &result[1]);\n    boost::thread thEgg(&squareValue, 7, boost::ref(result[2]));\n\n    thFoo.join();\n    thBar.join();\n    thEgg.join();\n\n    cout << result[0] << endl;\n    cout << result[1] << endl;\n    cout << result[2] << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo09-detach.cpp",
    "content": "/*\nTHREAD DETACHING\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid foo() {\n    cout << \"foo is starting...\" << endl;\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(2));\n\n    cout << \"foo is exiting...\" << endl;\n}\n\n\n\nint main() {\n    boost::thread thFoo(&foo);\n    thFoo.detach();\n\n\n    // If I comment this statement,\n    // thFoo will be forced into terminating with main thread\n    boost::this_thread::sleep_for(boost::chrono::seconds(3));\n\n\n    cout << \"Main thread is exiting\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo10-yield.cpp",
    "content": "/*\nTHREAD YIELDING\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\ntypedef boost::chrono::microseconds chrmicro;\ntypedef boost::chrono::steady_clock::time_point time_point;\ntypedef mylib::HiResClock hrclock;\n\n\n\nvoid littleSleep(int us) {\n    time_point tpStart = hrclock::now();\n    time_point tpEnd = tpStart + chrmicro(us);\n\n    do {\n        boost::this_thread::yield();\n    }\n    while (hrclock::now() < tpEnd);\n}\n\n\n\nint main() {\n    time_point tpStartMeasure = hrclock::now();\n\n    littleSleep(130);\n\n    chrmicro timeElapsed = hrclock::getTimeSpan<chrmicro>(tpStartMeasure);\n\n    cout << \"Elapsed time: \" << timeElapsed.count() << \" microseonds\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo11a-exec-service.cpp",
    "content": "/*\nEXECUTOR SERVICES AND THREAD POOLS\n*/\n\n\n#include <iostream>\n#include <boost/bind/bind.hpp>\n#include <boost/thread.hpp>\n#include <boost/asio.hpp>\nusing namespace std;\n\n\n\nvoid doTask() {\n    cout << \"Hello the Executor Service\" << endl;\n}\n\n\n\nclass MyFunctor {\npublic:\n    void operator()() {\n        cout << \"Hello Multithreading\" << endl;\n    }\n};\n\n\n\nint main() {\n    // INIT THE EXECUTOR SERVICE WITH 2 THREADS\n    boost::asio::thread_pool pool(2);\n\n\n    // SUBMIT\n    boost::asio::post(pool, boost::bind(&doTask /* , argument1, argument2,... */));\n    boost::asio::post(pool, MyFunctor());\n\n\n    // WAIT FOR THE COMPLETION OF ALL TASKS AND SHUTDOWN EXECUTOR SERVICE\n    pool.join();\n    pool.stop();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo11b-exec-service.cpp",
    "content": "/*\nEXECUTOR SERVICES AND THREAD POOLS\n*/\n\n\n#include <iostream>\n#include <boost/bind/bind.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include <boost/asio.hpp>\nusing namespace std;\n\n\n\nvoid doTask(char id) {\n    cout << \"Task \" << id << \" is starting\" << endl;\n    boost::this_thread::sleep_for(boost::chrono::seconds(3));\n    cout << \"Task \" << id << \" is completed\" << endl;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 2;\n    const int NUM_TASKS = 5;\n\n    boost::asio::thread_pool pool(NUM_THREADS);\n\n    for (int i = 0; i < NUM_TASKS; ++i) {\n        boost::asio::post(pool, boost::bind(&doTask, 'A' + i));\n    }\n\n    cout << \"All tasks are submitted\" << endl;\n\n    pool.join();\n    cout << \"All tasks are completed\" << endl;\n\n    pool.stop();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo12a-race-condition.cpp",
    "content": "/*\nRACE CONDITIONS\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTask(int index) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n    cout << index;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 4;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask, i));\n    }\n\n    lstTh.join_all();\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo12b01-data-race-single.cpp",
    "content": "/*\nDATA RACES\nVersion 01: Without multithreading\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <numeric>\nusing namespace std;\n\n\n\nint getResult(int N) {\n    vector<bool> a;\n    a.resize(N + 1, false);\n\n    for (int i = 1; i <= N; ++i)\n        if (0 == i % 2 || 0 == i % 3)\n            a[i] = true;\n\n    // result = sum of a (i.e. counting number of true values in a)\n    int result = std::accumulate(a.begin(), a.end(), 0);\n    return result;\n}\n\n\n\nint main() {\n    const int N = 8;\n\n    int result = getResult(N);\n\n    cout << \"Number of integers that are divisible by 2 or 3 is: \" << result << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo12b02-data-race-multi.cpp",
    "content": "/*\nDATA RACES\nVersion 02: Multithreading\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <numeric>\n#include <boost/ref.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid markDiv2(vector<bool> & a, int N) {\n    for (int i = 2; i <= N; i += 2)\n        a[i] = true;\n}\n\n\n\nvoid markDiv3(vector<bool> & a, int N) {\n    for (int i = 3; i <= N; i += 3)\n        a[i] = true;\n}\n\n\n\nint main() {\n    const int N = 8;\n\n    vector<bool> a;\n    a.resize(N + 1, false);\n\n    boost::thread thDiv2(&markDiv2, boost::ref(a), N);\n    boost::thread thDiv3(&markDiv3, boost::ref(a), N);\n    thDiv2.join();\n    thDiv3.join();\n\n    // result = sum of a (i.e. counting numbers of true values in a)\n    int result = std::accumulate(a.begin(), a.end(), 0);\n\n    cout << \"Number of integers that are divisible by 2 or 3 is: \" << result << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo12c01-race-cond-data-race.cpp",
    "content": "/*\nRACE CONDITIONS AND DATA RACES\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nint counter = 0;\n\n\n\nvoid increaseCounter() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    for (int i = 0; i < 1000; ++i) {\n        counter += 1;\n    }\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 16;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&increaseCounter));\n    }\n\n    lstTh.join_all();\n\n    cout << \"counter = \" << counter << endl;\n    // We are NOT sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo12c02-race-cond-data-race.cpp",
    "content": "/*\nRACE CONDITIONS AND DATA RACES\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\ntypedef boost::chrono::system_clock sysclock;\n\n\n\nint counter = 0;\n\n\n\nvoid doTaskA(sysclock::time_point timePointWakeUp) {\n    boost::this_thread::sleep_until(timePointWakeUp);\n\n    while (counter < 10)\n        ++counter;\n\n    cout << \"A won !!!\" << endl;\n}\n\n\n\nvoid doTaskB(sysclock::time_point timePointWakeUp) {\n    boost::this_thread::sleep_until(timePointWakeUp);\n\n    while (counter > -10)\n        --counter;\n\n    cout << \"B won !!!\" << endl;\n}\n\n\n\nint main() {\n    sysclock::time_point tpNow = sysclock::now();\n    sysclock::time_point tpWakeUp = tpNow + boost::chrono::seconds(1);\n\n    boost::thread thA(&doTaskA, tpWakeUp);\n    boost::thread thB(&doTaskB, tpWakeUp);\n\n    thA.join();\n    thB.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo13a-mutex.cpp",
    "content": "/*\nMUTEXES\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    mut.lock();\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n\n    mut.unlock();\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 16;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask));\n    }\n\n    lstTh.join_all();\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo13b01-mutex.cpp",
    "content": "/*\nMUTEXES\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    boost::lock_guard<boost::mutex> lk(mut);\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n\n    // Once function exits, then destructor of lk object will be called.\n    // In destructor it unlocks the mutex.\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 16;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask));\n    }\n\n    lstTh.join_all();\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo13b02-mutex.cpp",
    "content": "/*\nMUTEXES\n\nboost::unique_lock is more complex than boost::lock_guard:\nNot only does it provide for RAII-style locking, it also allows for\ndeferring acquiring the lock until the lock() member function is called explicitly,\nor trying to acquire the lock in a non-blocking fashion, or with a timeout.\n\nConsequently, unlock() is only called in the destructor if the lock object\nhas locked the Lockable object, or otherwise adopted a lock on the Lockable object.\n\n(From Boost's docs website)\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    boost::unique_lock<boost::mutex> lk(mut);\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 16;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask));\n    }\n\n    lstTh.join_all();\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    return 0;\n}\n"
  },
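  {
    "path": "cpp/cpp-boost/extra13-mutex-defer-lock.cpp",
    "content": "/*\nMUTEXES\nA supplementary sketch (this file name is chosen for illustration):\ndeferred locking with boost::unique_lock, as described in demo13b02.\nThe lock object is constructed without acquiring the mutex;\nlock() acquires it explicitly later.\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    // Construct the lock without acquiring the mutex yet\n    boost::unique_lock<boost::mutex> lk(mut, boost::defer_lock);\n\n    // ... work that does not touch the shared counter could go here ...\n\n    lk.lock(); // Acquire the mutex explicitly\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n\n    // lk owns the lock here, so its destructor unlocks the mutex\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 16;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask));\n    }\n\n    lstTh.join_all();\n\n    cout << \"counter = \" << counter << endl;\n    return 0;\n}\n"
  },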
  {
    "path": "cpp/cpp-boost/demo13c-mutex-trylock.cpp",
    "content": "/*\nMUTEXES\nLocking with a nonblocking mutex\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    if (false == mut.try_lock()) {\n        return;\n    }\n\n    for (int i = 0; i < 10000; ++i)\n        ++counter;\n\n    mut.unlock();\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 3;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask));\n    }\n\n    lstTh.join_all();\n\n    cout << \"counter = \" << counter << endl;\n    // counter can be 10000, 20000 or 30000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo14-synchronized-block.cpp",
    "content": "/*\nSYNCHRONIZED BLOCKS\n\nSynchronized blocks in C++ Boost threading are not supported by default.\nTo demonstate synchronized blocks, I use boost::unique_lock (or boost::lock_guard).\n\nNow, let's see the code:\n    {\n        boost::unique_lock lk(mut);\n        // Do something in the critical section\n    }\n\nThe code block above is protected by a lock/mutex. That means it is synchronized on thread execution.\nThis code block is called \"the synchronized block\".\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    // This is the \"synchronized block\"\n    {\n        boost::unique_lock<boost::mutex> lk(mut);\n\n        for (int i = 0; i < 1000; ++i)\n            ++counter;\n    }\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 16;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask));\n    }\n\n    lstTh.join_all();\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo15a-deadlock.cpp",
    "content": "/*\nDEADLOCK\nVersion A\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\n\n\n\nvoid doTask(std::string name) {\n    mut.lock();\n\n    cout << name << \" acquired resource\" << endl;\n\n    // mut.unlock(); // Forget this statement ==> deadlock\n}\n\n\n\nint main() {\n    boost::thread thFoo(&doTask, \"foo\");\n    boost::thread thBar(&doTask, \"bar\");\n\n    thFoo.join();\n    thBar.join();\n\n    cout << \"You will never see this statement due to deadlock!\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo15b-deadlock.cpp",
    "content": "/*\nDEADLOCK\nVersion B\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mutResourceA;\nboost::mutex mutResourceB;\n\n\n\nvoid foo() {\n    mutResourceA.lock();\n    cout << \"foo acquired resource A\" << endl;\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    mutResourceB.lock();\n    cout << \"foo acquired resource B\" << endl;\n    mutResourceB.unlock();\n\n    mutResourceA.unlock();\n}\n\n\n\nvoid bar() {\n    mutResourceB.lock();\n    cout << \"bar acquired resource B\" << endl;\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    mutResourceA.lock();\n    cout << \"bar acquired resource A\" << endl;\n    mutResourceA.unlock();\n\n    mutResourceB.unlock();\n}\n\n\n\nint main() {\n    boost::thread thFoo(&foo);\n    boost::thread thBar(&bar);\n\n    thFoo.join();\n    thBar.join();\n\n    cout << \"You will never see this statement due to deadlock!\" << endl;\n    return 0;\n}\n"
  },
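  {
    "path": "cpp/cpp-boost/extra15-deadlock-avoidance.cpp",
    "content": "/*\nDEADLOCK\nA supplementary sketch (this file name is chosen for illustration):\navoiding the deadlock from demo15b by acquiring both mutexes atomically\nwith boost::lock(), which locks them without risking a circular wait.\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mutResourceA;\nboost::mutex mutResourceB;\n\n\n\nvoid foo() {\n    // Lock both mutexes atomically, in a deadlock-free order\n    boost::lock(mutResourceA, mutResourceB);\n\n    // Adopt the already-acquired locks so they are released automatically\n    boost::lock_guard<boost::mutex> lkA(mutResourceA, boost::adopt_lock);\n    boost::lock_guard<boost::mutex> lkB(mutResourceB, boost::adopt_lock);\n\n    cout << \"foo acquired resources A and B\" << endl;\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n}\n\n\n\nvoid bar() {\n    // Even with the arguments in the opposite order, boost::lock() is safe\n    boost::lock(mutResourceB, mutResourceA);\n\n    boost::lock_guard<boost::mutex> lkA(mutResourceA, boost::adopt_lock);\n    boost::lock_guard<boost::mutex> lkB(mutResourceB, boost::adopt_lock);\n\n    cout << \"bar acquired resources A and B\" << endl;\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n}\n\n\n\nint main() {\n    boost::thread thFoo(&foo);\n    boost::thread thBar(&bar);\n\n    thFoo.join();\n    thBar.join();\n\n    cout << \"Done, no deadlock\" << endl;\n    return 0;\n}\n"
  },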
  {
    "path": "cpp/cpp-boost/demo16-monitor.cpp",
    "content": "/*\nMONITORS\nImplementation of a monitor for managing a counter\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nclass Monitor {\nprivate:\n    boost::mutex mut;\n    int* pCounter;\n\n\npublic:\n    // Should disable copy/move constructors, copy/move assignment operators\n\n\n    void init(int* pCounter) {\n        this->pCounter = pCounter;\n    }\n\n\n    void increaseCounter() {\n        mut.lock();\n        (*pCounter) += 1;\n        mut.unlock();\n    }\n};\n\n\n\nvoid doTask(Monitor* monitor) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    for (int i = 0; i < 1000; ++i)\n        monitor->increaseCounter();\n}\n\n\n\nint main() {\n    int counter = 0;\n    Monitor monitor;\n\n    const int NUM_THREADS = 16;\n    boost::thread_group lstTh;\n\n    monitor.init(&counter);\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask, &monitor));\n    }\n\n    lstTh.join_all();\n\n    cout << \"counter = \" << counter << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo17a-reentrant-lock.cpp",
    "content": "/*\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion A: Introduction to reentrant locks\n*/\n\n\n#include <iostream>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\n\n\n\nvoid doTask() {\n    mut.lock();\n    cout << \"First time acquiring the resource\" << endl;\n\n    mut.lock();\n    cout << \"Second time acquiring the resource\" << endl;\n\n    mut.unlock();\n    mut.unlock();\n}\n\n\n\nint main() {\n    boost::thread th(&doTask);\n    /*\n    The thread th shall meet deadlock.\n    So, you will never get output \"Second time the acquiring resource\".\n    */\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo17b-reentrant-lock.cpp",
    "content": "/*\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion B: Solving the problem from version A\n*/\n\n\n#include <iostream>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::recursive_mutex mut;\n\n\n\nvoid doTask() {\n    mut.lock();\n    cout << \"First time acquiring the resource\" << endl;\n\n    mut.lock();\n    cout << \"Second time acquiring the resource\" << endl;\n\n    mut.unlock();\n    mut.unlock();\n}\n\n\n\nvoid doTaskUsingSyncBlock() {\n    typedef boost::unique_lock<boost::recursive_mutex> uniquelk;\n\n    uniquelk(mut);\n    cout << \"First time acquiring the resource\" << endl;\n\n    {\n        uniquelk(mut);\n        cout << \"Second time acquiring the resource\" << endl;\n    }\n}\n\n\n\nint main() {\n    boost::thread th(&doTask);\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo17c-reentrant-lock.cpp",
    "content": "/*\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion C: A multithreaded app example\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::recursive_mutex mut;\n\n\n\nvoid doTask(char name) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    mut.lock();\n    cout << \"First time \" << name << \" acquiring the resource\" << endl;\n\n    mut.lock();\n    cout << \"Second time \" << name << \" acquiring the resource\" << endl;\n\n    mut.unlock();\n    mut.unlock();\n}\n\n\n\nvoid doTaskUsingSyncBlock(char name) {\n    typedef boost::unique_lock<boost::recursive_mutex> uniquelk;\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    {\n        uniquelk(mut);\n        cout << \"First time \" << name << \" acquiring the resource\" << endl;\n\n        {\n            uniquelk(mut);\n            cout << \"Second time \" << name << \" acquiring the resource\" << endl;\n        }\n    }\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 3;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask, char(i + 'A')));\n    }\n\n    lstTh.join_all();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo18a01-barrier.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion A: Cyclic barriers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/tuple/tuple.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\ntypedef boost::tuple<string,int> tuplestrint;\n\nboost::barrier syncPoint(3); // participant count = 3\n\n\n\nvoid processRequest(string userName, int waitTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(waitTime));\n\n    cout << \"Get request from \" << userName << endl;\n    syncPoint.count_down_and_wait();\n\n    cout << \"Process request for \" << userName << endl;\n    syncPoint.count_down_and_wait();\n\n    cout << \"Done \" << userName << endl;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 3;\n    boost::thread_group lstTh;\n\n    // tuple<userName, waitTime>\n    tuplestrint lstArg[NUM_THREADS] = {\n        tuplestrint(\"lorem\", 1),\n        tuplestrint(\"ipsum\", 2),\n        tuplestrint(\"dolor\", 3)\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        tuplestrint & arg = lstArg[i];\n        lstTh.add_thread(new boost::thread(\n            &processRequest, boost::get<0>(arg), boost::get<1>(arg)\n        ));\n    }\n\n    lstTh.join_all();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo18a02-barrier.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion A: Cyclic barriers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/tuple/tuple.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\ntypedef boost::tuple<string,int> tuplestrint;\n\nboost::barrier syncPoint(2); // participant count = 2\n\n\n\nvoid processRequest(string userName, int waitTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(waitTime));\n\n    cout << \"Get request from \" << userName << endl;\n    syncPoint.count_down_and_wait();\n\n    cout << \"Process request for \" << userName << endl;\n    syncPoint.count_down_and_wait();\n\n    cout << \"Done \" << userName << endl;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 4;\n    boost::thread_group lstTh;\n\n    // tuple<userName, waitTime>\n    tuplestrint lstArg[NUM_THREADS] = {\n        tuplestrint(\"lorem\", 1),\n        tuplestrint(\"ipsum\", 3),\n        tuplestrint(\"dolor\", 3),\n        tuplestrint(\"amet\", 10),\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        tuplestrint & arg = lstArg[i];\n        lstTh.add_thread(new boost::thread(\n            &processRequest, boost::get<0>(arg), boost::get<1>(arg)\n        ));\n    }\n\n    // Thread with userName = \"amet\" shall be FREEZED\n\n    lstTh.join_all();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo18a03-barrier.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion A: Cyclic barriers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/tuple/tuple.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\ntypedef boost::tuple<string,int> tuplestrint;\n\nboost::barrier syncPointA(2);\nboost::barrier syncPointB(2);\n\n\n\nvoid processRequest(string userName, int waitTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(waitTime));\n\n    cout << \"Get request from \" << userName << endl;\n    syncPointA.count_down_and_wait();\n\n    cout << \"Process request for \" << userName << endl;\n    syncPointB.count_down_and_wait();\n\n    cout << \"Done \" << userName << endl;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 4;\n    boost::thread_group lstTh;\n\n    // tuple<userName, waitTime>\n    tuplestrint lstArg[NUM_THREADS] = {\n        tuplestrint(\"lorem\", 1),\n        tuplestrint(\"ipsum\", 3),\n        tuplestrint(\"dolor\", 3),\n        tuplestrint(\"amet\", 10),\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        tuplestrint & arg = lstArg[i];\n        lstTh.add_thread(new boost::thread(\n            &processRequest, boost::get<0>(arg), boost::get<1>(arg)\n        ));\n    }\n\n    lstTh.join_all();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo18b01-latch.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion B: Count-down latches\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/tuple/tuple.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include <boost/thread/latch.hpp>\nusing namespace std;\n\n\n\ntypedef boost::tuple<string,int> tuplestrint;\n\nboost::latch syncPoint(3); // participant count = 3\n\n\n\nvoid processRequest(string userName, int waitTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(waitTime));\n\n    cout << \"Get request from \" << userName << endl;\n\n    syncPoint.count_down();\n    syncPoint.wait();\n    // syncPoint.count_down_and_wait();\n\n    cout << \"Done \" << userName << endl;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 3;\n    boost::thread_group lstTh;\n\n    // tuple<userName, waitTime>\n    tuplestrint lstArg[NUM_THREADS] = {\n        tuplestrint(\"lorem\", 1),\n        tuplestrint(\"ipsum\", 2),\n        tuplestrint(\"dolor\", 3)\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        tuplestrint & arg = lstArg[i];\n        lstTh.add_thread(new boost::thread(\n            &processRequest, boost::get<0>(arg), boost::get<1>(arg)\n        ));\n    }\n\n    lstTh.join_all();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo18b02-latch.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion B: Count-down latches\n\nMain thread waits for 3 child threads to get enough data to progress.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/tuple/tuple.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include <boost/thread/latch.hpp>\nusing namespace std;\n\n\n\ntypedef boost::tuple<string,int> tuplestrint;\n\nconst int NUM_THREADS = 3;\nboost::latch syncPoint(NUM_THREADS);\n\n\n\nvoid doTask(string message, int waitTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(waitTime));\n\n    cout << message << endl;\n    syncPoint.count_down();\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(8));\n    cout << \"Cleanup\" << endl;\n}\n\n\n\nint main() {\n    boost::thread_group lstTh;\n\n    // tuple<message, waitTime>\n    tuplestrint lstArg[NUM_THREADS] = {\n        tuplestrint(\"Send request to egg.net to get data\", 6),\n        tuplestrint(\"Send request to foo.org to get data\", 2),\n        tuplestrint(\"Send request to bar.com to get data\", 4)\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        tuplestrint & arg = lstArg[i];\n\n        lstTh.add_thread(new boost::thread(\n            &doTask, boost::get<0>(arg), boost::get<1>(arg)\n        ));\n    }\n\n    syncPoint.wait();\n    cout << \"\\nNow we have enough data to progress to next step\\n\" << endl;\n\n    lstTh.join_all();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo19a-read-write-lock.cpp",
    "content": "/*\nREAD-WRITE LOCKS\n*/\n\n\n#include <iostream>\n#include <numeric>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-random.hpp\"\nusing namespace std;\n\n\n\nvolatile int resource = 0;\nboost::shared_mutex rwmut;\n\n\n\nvoid readFunc(int waitTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(waitTime));\n\n    rwmut.lock_shared();\n\n    cout << \"read: \" << resource << endl;\n\n    rwmut.unlock_shared();\n}\n\n\n\nvoid writeFunc(int waitTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(waitTime));\n\n    rwmut.lock();\n\n    resource = mylib::RandInt::get(100);\n    cout << \"write: \" << resource << endl;\n\n    rwmut.unlock();\n}\n\n\n\nint main() {\n    const int NUM_THREADS_READ = 10;\n    const int NUM_THREADS_WRITE = 4;\n    const int NUM_ARGS = 3;\n\n    boost::thread_group lstThRead;\n    boost::thread_group lstThWrite;\n    int lstArg[NUM_ARGS];\n\n\n    // INITIALIZE\n    for (int i = 0; i < NUM_ARGS; ++i) {\n        lstArg[i] = i;\n    }\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_THREADS_READ; ++i) {\n        int arg = lstArg[ mylib::RandInt::get(NUM_ARGS) ];\n\n        lstThRead.add_thread(new boost::thread(\n            &readFunc, arg\n        ));\n    }\n\n    for (int i = 0; i < NUM_THREADS_WRITE; ++i) {\n        int arg = lstArg[ mylib::RandInt::get(NUM_ARGS) ];\n\n        lstThWrite.add_thread(new boost::thread(\n            &writeFunc, arg\n        ));\n    }\n\n\n    // JOIN THREADS\n    lstThRead.join_all();\n    lstThWrite.join_all();\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo19b-read-write-lock.cpp",
    "content": "/*\nREAD-WRITE LOCKS\n*/\n\n\n#include <iostream>\n#include <numeric>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-random.hpp\"\nusing namespace std;\n\n\n\nvolatile int resource = 0;\nboost::shared_mutex rwmut;\n\n\n\nvoid readFunc(int waitTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(waitTime));\n\n    boost::shared_lock<boost::shared_mutex> lk(rwmut);\n\n    cout << \"read: \" << resource << endl;\n\n    // lk.unlock();\n}\n\n\n\nvoid writeFunc(int waitTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(waitTime));\n\n    boost::lock_guard<boost::shared_mutex> lk(rwmut);\n    // boost::unique_lock<boost::shared_mutex> lk(rwmut);\n\n    resource = mylib::RandInt::get(100);\n    cout << \"write: \" << resource << endl;\n\n    // lk.unlock();\n}\n\n\n\nint main() {\n    const int NUM_THREADS_READ = 10;\n    const int NUM_THREADS_WRITE = 4;\n    const int NUM_ARGS = 3;\n\n    boost::thread_group lstThRead;\n    boost::thread_group lstThWrite;\n    int lstArg[NUM_ARGS];\n\n\n    // INITIALIZE\n    for (int i = 0; i < NUM_ARGS; ++i) {\n        lstArg[i] = i;\n    }\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_THREADS_READ; ++i) {\n        int arg = lstArg[ mylib::RandInt::get(NUM_ARGS) ];\n\n        lstThRead.add_thread(new boost::thread(\n            &readFunc, arg\n        ));\n    }\n\n    for (int i = 0; i < NUM_THREADS_WRITE; ++i) {\n        int arg = lstArg[ mylib::RandInt::get(NUM_ARGS) ];\n\n        lstThWrite.add_thread(new boost::thread(\n            &writeFunc, arg\n        ));\n    }\n\n\n    // JOIN THREADS\n    lstThRead.join_all();\n    lstThWrite.join_all();\n\n\n    return 0;\n}\n"
  },
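  {
    "path": "cpp/cpp-boost/extra19-upgrade-lock.cpp",
    "content": "/*\nREAD-WRITE LOCKS\nA supplementary sketch (this file name is chosen for illustration):\nBoost's upgrade lock. A thread reads under boost::upgrade_lock, which can\ncoexist with shared readers, and then atomically upgrades to exclusive\nownership before writing.\n*/\n\n\n#include <iostream>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvolatile int resource = 0;\nboost::shared_mutex rwmut;\n\n\n\nvoid readThenMaybeWrite() {\n    // Coexists with shared readers, but only one upgrade_lock at a time\n    boost::upgrade_lock<boost::shared_mutex> upLk(rwmut);\n\n    if (resource == 0) {\n        // Atomically upgrade to exclusive ownership;\n        // exclusive ownership is released when uniqLk goes out of scope\n        boost::upgrade_to_unique_lock<boost::shared_mutex> uniqLk(upLk);\n\n        resource = 42;\n        cout << \"write: \" << resource << endl;\n    }\n    else {\n        cout << \"read: \" << resource << endl;\n    }\n}\n\n\n\nint main() {\n    boost::thread thA(&readThenMaybeWrite);\n    boost::thread thB(&readThenMaybeWrite);\n\n    thA.join();\n    thB.join();\n\n    return 0;\n}\n"
  },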
  {
    "path": "cpp/cpp-boost/demo20a01-semaphore.cpp",
    "content": "/*\nSEMAPHORES\nVersion A: Paper sheets and packages\n\nSemaphores in C++ Boost threading are not supported by default.\nSo, I use mylib::Semaphore for this demonstration.\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\nmylib::Semaphore semPackage(0);\n\n\n\nvoid makeOneSheet() {\n    for (int i = 0; i < 4; ++i) {\n        cout << \"Make 1 sheet\" << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n        semPackage.release();\n    }\n}\n\n\n\nvoid combineOnePackage() {\n    for (int i = 0; i < 4; ++i) {\n        semPackage.acquire();\n        semPackage.acquire();\n        cout << \"Combine 2 sheets into 1 package\" << endl;\n    }\n}\n\n\n\nint main() {\n    boost::thread thMakeSheetA(&makeOneSheet);\n    boost::thread thMakeSheetB(&makeOneSheet);\n    boost::thread thCombinePackage(&combineOnePackage);\n\n    thMakeSheetA.join();\n    thMakeSheetB.join();\n    thCombinePackage.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo20a02-semaphore.cpp",
    "content": "/*\nSEMAPHORES\nVersion A: Paper sheets and packages\n\nSemaphores in C++ Boost threading are not supported by default.\nSo, I use mylib::Semaphore for this demonstration.\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\nmylib::Semaphore semPackage(0);\nmylib::Semaphore semSheet(2);\n\n\n\nvoid makeOneSheet() {\n    for (int i = 0; i < 2; ++i) {\n        semSheet.acquire();\n        cout << \"Make 1 sheet\" << endl;\n        semPackage.release();\n    }\n}\n\n\n\nvoid combineOnePackage() {\n    for (int i = 0; i < 2; ++i) {\n        semPackage.acquire();\n        semPackage.acquire();\n\n        cout << \"Combine 2 sheets into 1 package\" << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(2));\n\n        semSheet.release();\n        semSheet.release();\n    }\n}\n\n\n\nint main() {\n    boost::thread thMakeSheetA(&makeOneSheet);\n    boost::thread thMakeSheetB(&makeOneSheet);\n    boost::thread thCombinePackage(&combineOnePackage);\n\n    thMakeSheetA.join();\n    thMakeSheetB.join();\n    thCombinePackage.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo20a03-semaphore-deadlock.cpp",
    "content": "/*\nSEMAPHORES\nVersion A: Paper sheets and packages\n\nSemaphores in C++ Boost threading are not supported by default.\nSo, I use mylib::Semaphore for this demonstration.\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\nmylib::Semaphore semPackage(0);\nmylib::Semaphore semSheet(2);\n\n\n\nvoid makeOneSheet() {\n    for (int i = 0; i < 4; ++i) {\n        semSheet.acquire();\n        cout << \"Make 1 sheet\" << endl;\n        semPackage.release();\n    }\n}\n\n\n\nvoid combineOnePackage() {\n    for (int i = 0; i < 4; ++i) {\n        semPackage.acquire();\n        semPackage.acquire();\n\n        cout << \"Combine 2 sheets into 1 package\" << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(2));\n\n        semSheet.release();\n        // Missing one statement: semSheet.release() ==> deadlock\n    }\n}\n\n\n\nint main() {\n    boost::thread thMakeSheetA(&makeOneSheet);\n    boost::thread thMakeSheetB(&makeOneSheet);\n    boost::thread thCombinePackage(&combineOnePackage);\n\n    thMakeSheetA.join();\n    thMakeSheetB.join();\n    thCombinePackage.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo20b-semaphore.cpp",
    "content": "/*\nSEMAPHORES\nVersion B: Tires and chassis\n\nSemaphores in C++ Boost threading are not supported by default.\nSo, I use mylib::Semaphore for this demonstration.\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\nmylib::Semaphore semTire(4);\nmylib::Semaphore semChassis(0);\n\n\n\nvoid makeTire() {\n    for (int i = 0; i < 8; ++i) {\n        semTire.acquire();\n\n        cout << \"Make 1 tire\" << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n        semChassis.release();\n    }\n}\n\n\n\nvoid makeChassis() {\n    for (int i = 0; i < 4; ++i) {\n        semChassis.acquire();\n        semChassis.acquire();\n        semChassis.acquire();\n        semChassis.acquire();\n\n        cout << \"Make 1 chassis\" << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(3));\n\n        semTire.release();\n        semTire.release();\n        semTire.release();\n        semTire.release();\n    }\n}\n\n\n\nint main() {\n    boost::thread thTireA(&makeTire);\n    boost::thread thTireB(&makeTire);\n    boost::thread thChassis(&makeChassis);\n\n    thTireA.join();\n    thTireB.join();\n    thChassis.join();\n\n    return 0;\n}\n"
  },
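  {
    "path": "cpp/cpp-boost/extra20-semaphore-sketch.cpp",
    "content": "/*\nSEMAPHORES\nA minimal sketch (this file name is chosen for illustration) of how a\ncounting semaphore such as mylib::Semaphore might be built from a mutex\nand a condition variable. The actual mylib implementation may differ.\n*/\n\n\n#include <iostream>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nclass SimpleSemaphore {\nprivate:\n    boost::mutex mut;\n    boost::condition_variable cond;\n    int count;\n\npublic:\n    explicit SimpleSemaphore(int initialCount) : count(initialCount) { }\n\n    void acquire() {\n        boost::unique_lock<boost::mutex> lk(mut);\n\n        // Re-check the count after every wakeup (guards against spurious wakeups)\n        while (count <= 0)\n            cond.wait(lk);\n\n        --count;\n    }\n\n    void release() {\n        boost::lock_guard<boost::mutex> lk(mut);\n        ++count;\n        cond.notify_one();\n    }\n};\n\n\n\nSimpleSemaphore sem(0);\n\n\n\nvoid waiter() {\n    sem.acquire();\n    cout << \"Acquired the semaphore\" << endl;\n}\n\n\n\nint main() {\n    boost::thread th(&waiter);\n\n    cout << \"Releasing the semaphore\" << endl;\n    sem.release();\n\n    th.join();\n    return 0;\n}\n"
  },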
  {
    "path": "cpp/cpp-boost/demo21a01-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nboost::condition_variable conditionVar;\n\n\n\nvoid foo() {\n    cout << \"foo is waiting...\" << endl;\n\n    boost::unique_lock<boost::mutex> mutLock(mut);\n    conditionVar.wait(mutLock);\n\n    cout << \"foo resumed\" << endl;\n}\n\n\n\nvoid bar() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(3));\n    conditionVar.notify_one();\n}\n\n\n\nint main() {\n    boost::thread thFoo(&foo);\n    boost::thread thBar(&bar);\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo21a02-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nboost::condition_variable conditionVar;\n\nconst int NUM_TH_FOO = 3;\n\n\n\nvoid foo() {\n    cout << \"foo is waiting...\" << endl;\n\n    boost::unique_lock<boost::mutex> mutLock(mut);\n    conditionVar.wait(mutLock);\n\n    cout << \"foo resumed\" << endl;\n}\n\n\n\nvoid bar() {\n    for (int i = 0; i < NUM_TH_FOO; ++i) {\n        boost::this_thread::sleep_for(boost::chrono::seconds(2));\n        conditionVar.notify_one();\n    }\n}\n\n\n\nint main() {\n    boost::thread_group lstThFoo;\n\n    for (int i = 0; i < NUM_TH_FOO; ++i) {\n        lstThFoo.add_thread(new boost::thread(&foo));\n    }\n\n    boost::thread thBar(&bar);\n\n    lstThFoo.join_all();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo21a03-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nboost::condition_variable conditionVar;\n\nconst int NUM_TH_FOO = 3;\n\n\n\nvoid foo() {\n    cout << \"foo is waiting...\" << endl;\n\n    boost::unique_lock<boost::mutex> mutLock(mut);\n    conditionVar.wait(mutLock);\n\n    cout << \"foo resumed\" << endl;\n}\n\n\n\nvoid bar() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(3));\n    // Notify all waiting threads\n    conditionVar.notify_all();\n}\n\n\n\nint main() {\n    boost::thread_group lstThFoo;\n\n    for (int i = 0; i < NUM_TH_FOO; ++i) {\n        lstThFoo.add_thread(new boost::thread(&foo));\n    }\n\n    boost::thread thBar(&bar);\n\n    lstThFoo.join_all();\n    thBar.join();\n\n    return 0;\n}\n"
  },
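  {
    "path": "cpp/cpp-boost/extra21-condition-variable-predicate.cpp",
    "content": "/*\nCONDITION VARIABLES\nA supplementary sketch (this file name is chosen for illustration):\nwaiting with a predicate. A plain wait() can return on a spurious wakeup,\nand a notification sent before the waiter starts waiting is lost.\nGuarding the wait with a shared flag, checked under the mutex,\navoids both problems.\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nboost::condition_variable conditionVar;\nbool ready = false;\n\n\n\nvoid foo() {\n    cout << \"foo is waiting...\" << endl;\n\n    boost::unique_lock<boost::mutex> lk(mut);\n\n    // Loop until the flag is set; wait() releases the mutex while blocked\n    while (false == ready)\n        conditionVar.wait(lk);\n\n    cout << \"foo resumed\" << endl;\n}\n\n\n\nvoid bar() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    {\n        // Set the flag under the mutex before notifying\n        boost::lock_guard<boost::mutex> lk(mut);\n        ready = true;\n    }\n\n    conditionVar.notify_one();\n}\n\n\n\nint main() {\n    boost::thread thFoo(&foo);\n    boost::thread thBar(&bar);\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },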
  {
    "path": "cpp/cpp-boost/demo21b-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::mutex mut;\nboost::condition_variable conditionVar;\n\nint counter = 0;\n\nconst int COUNT_HALT_01 = 3;\nconst int COUNT_HALT_02 = 6;\nconst int COUNT_DONE = 10;\n\n\n\n// Write numbers 1-3 and 8-10 as permitted by egg()\nvoid foo() {\n    for (;;) {\n        // Lock mutex and then wait for signal to relase mutex\n        boost::unique_lock<boost::mutex> lk(mut);\n\n        // Wait while egg() operates on counter,\n        // Mutex unlocked if condition variable in egg() signaled\n        conditionVar.wait(lk);\n\n        ++counter;\n        cout << \"foo counter = \" << counter << endl;\n\n        if (counter >= COUNT_DONE) {\n            return;\n        }\n    }\n}\n\n\n\n// Write numbers 4-7\nvoid egg() {\n    for (;;) {\n        boost::unique_lock<boost::mutex> lk(mut);\n\n        if (counter < COUNT_HALT_01 || counter > COUNT_HALT_02) {\n            // Signal to free waiting thread by freeing the mutex\n            // Note: foo() is now permitted to modify \"counter\"\n            conditionVar.notify_one();\n        }\n        else {\n            ++counter;\n            cout << \"egg counter = \" << counter << endl;\n        }\n\n        if (counter >= COUNT_DONE) {\n            return;\n        }\n    }\n}\n\n\n\nint main() {\n    boost::thread thFoo(&foo);\n    boost::thread thEgg(&egg);\n\n    thFoo.join();\n    thEgg.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo22a-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUES\nVersion A: A slow producer and a fast consumer\n\nBlocking queues in C++ Boost threading are not supported by default.\nSo, I use mylib::BlockingQueue for this demonstration.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<string>* blkQueue) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(2));\n    blkQueue->put(\"Alice\");\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(2));\n    blkQueue->put(\"likes\");\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(2));\n    blkQueue->put(\"singing\");\n}\n\n\n\nvoid consumer(BlockingQueue<string>* blkQueue) {\n    string data;\n\n    for (int i = 0; i < 3; ++i) {\n        cout << \"\\nWaiting for data...\" << endl;\n        data = blkQueue->take();\n        cout << \"    \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    BlockingQueue<string> blkQueue;\n\n    boost::thread thProducer(&producer, &blkQueue);\n    boost::thread thConsumer(&consumer, &blkQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo22b-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUES\nVersion B: A fast producer and a slow consumer\n\nBlocking queues in C++ Boost threading are not supported by default.\nSo, I use mylib::BlockingQueue for this demonstration.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<string>* blkQueue) {\n    blkQueue->put(\"Alice\");\n    blkQueue->put(\"likes\");\n\n    /*\n    Due to reaching the maximum capacity = 2, when executing blkQueue->put(\"singing\"),\n    this thread is going to sleep until the queue removes an element.\n    */\n\n    blkQueue->put(\"singing\");\n}\n\n\n\nvoid consumer(BlockingQueue<string>* blkQueue) {\n    string data;\n    boost::this_thread::sleep_for(boost::chrono::seconds(2));\n\n    for (int i = 0; i < 3; ++i) {\n        cout << \"\\nWaiting for data...\" << endl;\n        data = blkQueue->take();\n        cout << \"    \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    BlockingQueue<string> blkQueue(2); // blocking queue with capacity = 2\n\n    boost::thread thProducer(&producer, &blkQueue);\n    boost::thread thConsumer(&consumer, &blkQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo23a-thread-local.cpp",
    "content": "/*\nTHREAD-LOCAL STORAGE\nIntroduction\n    The basic way to use thread-local storage\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::thread_specific_ptr<string> value;\n\n\n\nvoid printLocalValue() {\n    cout << (*value.get()) << endl;\n}\n\n\n\nvoid doTaskApple() {\n    value.reset(new string(\"APPLE\"));\n    boost::this_thread::sleep_for(boost::chrono::seconds(2));\n    printLocalValue();\n}\n\n\n\nvoid doTaskBanana() {\n    value.reset(new string(\"BANANA\"));\n    boost::this_thread::sleep_for(boost::chrono::seconds(2));\n    printLocalValue();\n}\n\n\n\nint main() {\n    boost::thread thApple(&doTaskApple);\n    boost::thread thBanana(&doTaskBanana);\n\n    thApple.join();\n    thBanana.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo23b-thread-local.cpp",
    "content": "/*\nTHREAD-LOCAL STORAGE\nAvoiding synchronization using thread-local storage\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nboost::thread_specific_ptr<int> counter;\n\n\n\nvoid doTask(int t) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n    counter.reset(new int(0));\n\n    for (int i = 0; i < 1000; ++i)\n        (*counter) += 1;\n\n    cout << \"Thread \" << t << \" gives counter = \" << (*counter) << endl;\n}\n\n\n\nint main() {\n    const int NUM_THREADS = 3;\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask, i));\n    }\n\n    lstTh.join_all();\n\n    cout << endl;\n\n    /*\n    By using thread-local storage, each thread has its own counter.\n    So, the counter in one thread is completely independent of each other.\n\n    Thread-local storage helps us to AVOID SYNCHRONIZATION.\n    */\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo24-volatile.cpp",
    "content": "/*\nTHE VOLATILE KEYWORD\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvolatile bool isRunning;\n\n\n\nvoid doTask() {\n    while (isRunning) {\n        cout << \"Running...\" << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(2));\n    }\n}\n\n\n\nint main() {\n    isRunning = true;\n    boost::thread th(&doTask);\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(6));\n    isRunning = false;\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo25a-atomic.cpp",
    "content": "/*\nATOMIC ACCESS\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvolatile int counter = 0;\n\n\n\nvoid doTask() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n    counter += 1;\n}\n\n\n\nint main() {\n    counter = 0;\n\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < 1000; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask));\n    }\n\n    lstTh.join_all();\n\n    // Unpredictable result\n    cout << \"counter = \" << counter << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demo25b-atomic.cpp",
    "content": "/*\nATOMIC ACCESS\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\n// boost::atomic<int> counter;\nboost::atomic_int32_t counter;\n\n\n\nvoid doTask() {\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n    counter += 1;\n}\n\n\n\nint main() {\n    counter = 0;\n\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < 1000; ++i) {\n        lstTh.add_thread(new boost::thread(&doTask));\n    }\n\n    lstTh.join_all();\n\n    // counter = 1000\n    cout << \"counter = \" << counter << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/demoex-async-future.cpp",
    "content": "/*\nASYNCHRONOUS PROGRAMMING WITH THE FUTURE\n*/\n\n\n#include <iostream>\n#include <boost/ref.hpp>\n#include <boost/move/move.hpp>\n#include <boost/thread.hpp>\n\n\n\nint doTaskA() {\n    return 7;\n}\n\nint doTaskB() {\n    return 8;\n}\n\nvoid doTaskC(boost::promise<int> & prom) {\n    prom.set_value_at_thread_exit(9);\n}\n\n\n\nint main() {\n    // future from a packaged_task (C++11)\n    boost::packaged_task<int> task(&doTaskA);               // wrap the function\n    boost::unique_future<int> fut1 = task.get_future();     // get a future\n    boost::thread th(boost::move(task));                    // launch on a thread\n\n\n    // future from an async()\n    boost::unique_future<int> fut2 = boost::async(boost::launch::async, &doTaskB);\n\n\n    // future from a promise\n    boost::promise<int> prom;\n    boost::unique_future<int> fut3 = prom.get_future();\n    boost::thread(&doTaskC, boost::ref(prom)).detach();\n\n\n    std::cout << \"Waiting...\" << std::endl;\n    fut1.wait();\n    fut2.wait();\n    fut3.wait();\n    th.join();\n\n\n    std::cout << \"Done!\" << std::endl;\n\n    std::cout << \"Results are: \"\n              << fut1.get() << ' ' << fut2.get() << ' ' << fut3.get() << std::endl;\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer01a-max-div.cpp",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\n\n#include <iostream>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\ntypedef boost::chrono::microseconds chrmicro;\ntypedef boost::chrono::steady_clock::time_point time_point;\ntypedef mylib::HiResClock hrclock;\n\n\n\nint main() {\n    const int RANGE_START = 1;\n    const int RANGE_END = 100000;\n\n    int resValue = 0;\n    int resNumDiv = 0;  // number of divisors of result\n\n    time_point tpStart = hrclock::now();\n\n\n    for (int i = RANGE_START; i <= RANGE_END; ++i) {\n        int numDiv = 0;\n\n        for (int j = i / 2; j > 0; --j)\n            if (i % j == 0)\n                ++numDiv;\n\n        if (resNumDiv < numDiv) {\n            resNumDiv = numDiv;\n            resValue = i;\n        }\n    }\n\n\n    chrmicro timeElapsed = hrclock::getTimeSpan<chrmicro>(tpStart);\n\n    cout << \"The integer which has largest number of divisors is \" << resValue << endl;\n    cout << \"The largest number of divisor is \" << resNumDiv << endl;\n    cout << \"Time elapsed = \" << (timeElapsed.count() / 1000000.0) << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer01b-max-div.cpp",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <algorithm>\n#include <boost/thread.hpp>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\ntypedef boost::chrono::microseconds chrmicro;\ntypedef boost::chrono::steady_clock::time_point time_point;\ntypedef mylib::HiResClock hrclock;\n\n\n\nstruct WorkerArg {\n    int iStart;\n    int iEnd;\n\n    WorkerArg(int iStart = 0, int iEnd = 0): iStart(iStart), iEnd(iEnd)\n    {\n    }\n};\n\n\n\nstruct WorkerResult {\n    int value;\n    int numDiv;\n\n    WorkerResult(int value = 0, int numDiv = 0): value(value), numDiv(numDiv)\n    {\n    }\n};\n\n\n\nvoid workerFunc(WorkerArg* arg, WorkerResult* res) {\n    int resValue = 0;\n    int resNumDiv = 0;\n\n    for (int i = arg->iStart; i <= arg->iEnd; ++i) {\n        int numDiv = 0;\n\n        for (int j = i / 2; j > 0; --j)\n            if (i % j == 0)\n                ++numDiv;\n\n        if (resNumDiv < numDiv) {\n            resNumDiv = numDiv;\n            resValue = i;\n        }\n    }\n\n    (*res) = WorkerResult(resValue, resNumDiv);\n}\n\n\n\nvoid prepare(\n    int rangeStart, int rangeEnd,\n    int numThreads,\n    vector<WorkerArg>& lstWorkerArg,\n    vector<WorkerResult>& lstWorkerRes\n) {\n    lstWorkerArg.resize(numThreads);\n    lstWorkerRes.resize(numThreads);\n\n    int rangeA, rangeB, rangeBlock;\n\n    rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\n    rangeA = rangeStart;\n\n    for (int i = 0; i < numThreads; ++i, rangeA += rangeBlock) {\n        rangeB = rangeA + rangeBlock - 1;\n\n        if (i == numThreads - 1)\n            rangeB = rangeEnd;\n\n        lstWorkerArg[i] = WorkerArg(rangeA, rangeB);\n    }\n}\n\n\n\nint main() {\n    const int RANGE_START = 1;\n    const int RANGE_END = 100000;\n    const int NUM_THREADS = 8;\n\n    boost::thread_group lstTh;\n    vector<WorkerArg> lstWorkerArg;\n    vector<WorkerResult> lstWorkerRes;\n\n    prepare(RANGE_START, RANGE_END, 
NUM_THREADS, lstWorkerArg, lstWorkerRes);\n\n    time_point tpStart = hrclock::now();\n\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&workerFunc, &lstWorkerArg[i], &lstWorkerRes[i]));\n    }\n\n    lstTh.join_all();\n\n\n    WorkerResult finalRes = lstWorkerRes[0];\n\n    for (int i = 1; i < lstWorkerRes.size(); ++i) {\n        if (finalRes.numDiv < lstWorkerRes[i].numDiv) {\n            finalRes = lstWorkerRes[i];\n        }\n    }\n\n\n    chrmicro timeElapsed = hrclock::getTimeSpan<chrmicro>(tpStart);\n\n    cout << \"The integer which has largest number of divisors is \" << finalRes.value << endl;\n    cout << \"The largest number of divisor is \" << finalRes.numDiv << endl;\n    cout << \"Time elapsed = \" << (timeElapsed.count() / 1000000.0) << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer01c-max-div.cpp",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <algorithm>\n#include <boost/thread.hpp>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\ntypedef boost::chrono::microseconds chrmicro;\ntypedef boost::chrono::steady_clock::time_point time_point;\ntypedef mylib::HiResClock hrclock;\n\n\n\nstruct WorkerArg {\n    int iStart;\n    int iEnd;\n\n    WorkerArg(int iStart = 0, int iEnd = 0): iStart(iStart), iEnd(iEnd)\n    {\n    }\n};\n\n\n\nclass FinalResult {\npublic:\n    int value;\n    int numDiv;\n\nprivate:\n    boost::mutex mut;\n\n\npublic:\n    FinalResult(): value(0), numDiv(0) { }\n\n\n    void update(int value, int numDiv) {\n        // Synchronize whole function\n        boost::unique_lock<boost::mutex> lk(mut);\n\n        if (this->numDiv < numDiv) {\n            this->numDiv = numDiv;\n            this->value = value;\n        }\n    }\n};\n\n\n\nvoid workerFunc(WorkerArg* arg, FinalResult* res) {\n    int resValue = 0;\n    int resNumDiv = 0;\n\n    for (int i = arg->iStart; i <= arg->iEnd; ++i) {\n        int numDiv = 0;\n\n        for (int j = i / 2; j > 0; --j)\n            if (i % j == 0)\n                ++numDiv;\n\n        if (resNumDiv < numDiv) {\n            resNumDiv = numDiv;\n            resValue = i;\n        }\n    }\n\n    res->update(resValue, resNumDiv);\n}\n\n\n\nvoid prepare(\n    int rangeStart, int rangeEnd,\n    int numThreads,\n    vector<WorkerArg>& lstWorkerArg\n) {\n    lstWorkerArg.resize(numThreads);\n\n    int rangeA, rangeB, rangeBlock;\n\n    rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\n    rangeA = rangeStart;\n\n    for (int i = 0; i < numThreads; ++i, rangeA += rangeBlock) {\n        rangeB = rangeA + rangeBlock - 1;\n\n        if (i == numThreads - 1)\n            rangeB = rangeEnd;\n\n        lstWorkerArg[i] = WorkerArg(rangeA, rangeB);\n    }\n}\n\n\n\nint main() {\n    const int RANGE_START = 1;\n    const int RANGE_END = 100000;\n    const 
int NUM_THREADS = 8;\n\n    boost::thread_group lstTh;\n    vector<WorkerArg> lstWorkerArg;\n\n    FinalResult finalRes;\n\n    prepare(RANGE_START, RANGE_END, NUM_THREADS, lstWorkerArg);\n\n    time_point tpStart = hrclock::now();\n\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.add_thread(new boost::thread(&workerFunc, &lstWorkerArg[i], &finalRes));\n    }\n\n    lstTh.join_all();\n\n\n    chrmicro timeElapsed = hrclock::getTimeSpan<chrmicro>(tpStart);\n\n    cout << \"The integer which has largest number of divisors is \" << finalRes.value << endl;\n    cout << \"The largest number of divisor is \" << finalRes.numDiv << endl;\n    cout << \"Time elapsed = \" << (timeElapsed.count() / 1000000.0) << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer02a01-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A01: 1 slow producer, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<int>* blkq) {\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i);\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<int>* blkq) {\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    BlockingQueue<int> blkq;\n\n    boost::thread thProducer(&producer, &blkq);\n    boost::thread thConsumer(&consumer, &blkq);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer02a02-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A02: 2 slow producers, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<int>* blkq) {\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i);\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<int>* blkq) {\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    BlockingQueue<int> blkq;\n\n    boost::thread thProducerA(&producer, &blkq);\n    boost::thread thProducerB(&producer, &blkq);\n    boost::thread thConsumer(&consumer, &blkq);\n\n    thProducerA.join();\n    thProducerB.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer02a03-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A03: 1 slow producer, 2 fast consumers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<int>* blkq) {\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i);\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n    }\n}\n\n\n\nvoid consumer(string name, BlockingQueue<int>* blkq) {\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << name << \": \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    BlockingQueue<int> blkq;\n\n    boost::thread thProducer(&producer, &blkq);\n    boost::thread thConsumerFoo(&consumer, \"foo\", &blkq);\n    boost::thread thConsumerBar(&consumer, \"bar\", &blkq);\n\n    thProducer.join();\n    thConsumerFoo.join();\n    thConsumerBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer02a04-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A04: Multiple fast producers, multiple slow consumers\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<int>* blkq, int startValue) {\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i + startValue);\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<int>* blkq) {\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << data << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n    }\n}\n\n\n\nint main() {\n    BlockingQueue<int> blkq(5);\n\n\n    const int NUM_PRODUCERS = 3;\n    const int NUM_CONSUMERS = 2;\n\n    boost::thread_group lstThProducer;\n    boost::thread_group lstThConsumer;\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        lstThProducer.add_thread(new boost::thread(&producer, &blkq, i * 1000));\n    }\n\n    for (int i = 0; i < NUM_CONSUMERS; ++i) {\n        lstThConsumer.add_thread(new boost::thread(&consumer, &blkq));\n    }\n\n\n    // JOIN THREADS\n    lstThProducer.join_all();\n    lstThConsumer.join_all();\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer02b01-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B01: 1 slow producer, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\nvoid producer(\n    mylib::Semaphore* semFill,\n    mylib::Semaphore* semEmpty,\n    queue<int>* q\n) {\n    int i = 1;\n\n    for (;; ++i) {\n        semEmpty->acquire();\n\n        q->push(i);\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n        semFill->release();\n    }\n}\n\n\n\nvoid consumer(\n    mylib::Semaphore* semFill,\n    mylib::Semaphore* semEmpty,\n    queue<int>* q\n) {\n    int data = 0;\n\n    for (;;) {\n        semFill->acquire();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n\n        semEmpty->release();\n    }\n}\n\n\n\nint main() {\n    mylib::Semaphore semFill(0);   // item produced\n    mylib::Semaphore semEmpty(1);  // remaining space in queue\n\n    queue<int> q;\n\n    boost::thread thProducer(&producer, &semFill, &semEmpty, &q);\n    boost::thread thConsumer(&consumer, &semFill, &semEmpty, &q);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer02b02-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B02: 2 slow producers, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\nvoid producer(\n    mylib::Semaphore* semFill,\n    mylib::Semaphore* semEmpty,\n    queue<int>* q,\n    int startValue\n) {\n    int i = 1;\n\n    for (;; ++i) {\n        semEmpty->acquire();\n\n        q->push(i + startValue);\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n        semFill->release();\n    }\n}\n\n\n\nvoid consumer(\n    mylib::Semaphore* semFill,\n    mylib::Semaphore* semEmpty,\n    queue<int>* q\n) {\n    int data = 0;\n\n    for (;;) {\n        semFill->acquire();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n\n        semEmpty->release();\n    }\n}\n\n\n\nint main() {\n    mylib::Semaphore semFill(0);   // item produced\n    mylib::Semaphore semEmpty(1);  // remaining space in queue\n\n    queue<int> q;\n\n    boost::thread thProducerA(&producer, &semFill, &semEmpty, &q, 0);\n    boost::thread thProducerB(&producer, &semFill, &semEmpty, &q, 1000);\n    boost::thread thConsumer(&consumer, &semFill, &semEmpty, &q);\n\n    thProducerA.join();\n    thProducerB.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer02b03-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B03: 2 fast producers, 1 slow consumer\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\nvoid producer(\n    mylib::Semaphore* semFill,\n    mylib::Semaphore* semEmpty,\n    queue<int>* q,\n    int startValue\n) {\n    int i = 1;\n\n    for (;; ++i) {\n        semEmpty->acquire();\n        q->push(i + startValue);\n        semFill->release();\n    }\n}\n\n\n\nvoid consumer(\n    mylib::Semaphore* semFill,\n    mylib::Semaphore* semEmpty,\n    queue<int>* q\n) {\n    int data = 0;\n\n    for (;;) {\n        semFill->acquire();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n        semEmpty->release();\n    }\n}\n\n\n\nint main() {\n    mylib::Semaphore semFill(0);   // item produced\n    mylib::Semaphore semEmpty(1);  // remaining space in queue\n\n    queue<int> q;\n\n    boost::thread thProducerA(&producer, &semFill, &semEmpty, &q, 0);\n    boost::thread thProducerB(&producer, &semFill, &semEmpty, &q, 1000);\n    boost::thread thConsumer(&consumer, &semFill, &semEmpty, &q);\n\n    thProducerA.join();\n    thProducerB.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer02b04-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B04: Multiple fast producers, multiple slow consumers\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\nvoid producer(\n    mylib::Semaphore* semFill,\n    mylib::Semaphore* semEmpty,\n    queue<int>* q,\n    int startValue\n) {\n    int i = 1;\n\n    for (;; ++i) {\n        semEmpty->acquire();\n        q->push(i + startValue);\n        semFill->release();\n    }\n}\n\n\n\nvoid consumer(\n    mylib::Semaphore* semFill,\n    mylib::Semaphore* semEmpty,\n    queue<int>* q\n) {\n    int data = 0;\n\n    for (;;) {\n        semFill->acquire();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n        semEmpty->release();\n    }\n}\n\n\n\nint main() {\n    mylib::Semaphore semFill(0);   // item produced\n    mylib::Semaphore semEmpty(1);  // remaining space in queue\n\n    queue<int> q;\n\n\n    const int NUM_PRODUCERS = 3;\n    const int NUM_CONSUMERS = 2;\n\n    boost::thread_group lstThProducer;\n    boost::thread_group lstThConsumer;\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        lstThProducer.add_thread(new boost::thread(\n            &producer, &semFill, &semEmpty, &q, i * 1000\n        ));\n    }\n\n    for (int i = 0; i < NUM_CONSUMERS; ++i) {\n        lstThConsumer.add_thread(new boost::thread(\n            &consumer, &semFill, &semEmpty, &q\n        ));\n    }\n\n\n    // JOIN THREADS\n    lstThProducer.join_all();\n    lstThConsumer.join_all();\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer02c-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE C: USING CONDITION VARIABLES & MONITORS\n    Multiple fast producers, multiple slow consumers\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass Monitor {\nprivate:\n    std::queue<T>* q;\n    int maxQueueSize;\n\n    boost::condition_variable condFull;\n    boost::condition_variable condEmpty;\n    boost::mutex mut;\n\n\npublic:\n    Monitor() : q(0), maxQueueSize(0) { }\n\n\nprivate:\n    Monitor(const Monitor& other) { }\n    void operator=(const Monitor& other) { }\n\n\npublic:\n    void init(int maxQueueSize, std::queue<T>* q) {\n        this->q = q;\n        this->maxQueueSize = maxQueueSize;\n    }\n\n\n    void add(const T& item) {\n        boost::unique_lock<boost::mutex> mutLock(mut);\n\n        while (q->size() == maxQueueSize) {\n            condFull.wait(mutLock);\n        }\n\n        q->push(item);\n\n        if (q->size() == 1) {\n            condEmpty.notify_one();\n        }\n\n        // mutLock.unlock();\n    }\n\n\n    T remove() {\n        boost::unique_lock<boost::mutex> mutLock(mut);\n\n        while (q->size() == 0) {\n            condEmpty.wait(mutLock);\n        }\n\n        T item = q->front();\n        q->pop();\n\n        if (q->size() == maxQueueSize - 1) {\n            condFull.notify_one();\n        }\n\n        // mutLock.unlock();\n\n        return item;\n    }\n};\n\n\n\ntemplate <typename T>\nvoid producer(Monitor<T>* monitor, int startValue) {\n    T i = 1;\n\n    for (;; ++i) {\n        monitor->add(i + startValue);\n    }\n}\n\n\n\ntemplate <typename T>\nvoid consumer(Monitor<T>* monitor) {\n    T data;\n\n    for (;;) {\n        data = monitor->remove();\n        cout << \"Consumer \" << data << endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(1));\n    }\n}\n\n\n\nint main() {\n    Monitor<int> monitor;\n    queue<int> q;\n\n    
const int MAX_QUEUE_SIZE = 6;\n    const int NUM_PRODUCERS = 3;\n    const int NUM_CONSUMERS = 2;\n\n    boost::thread_group lstThProducer;\n    boost::thread_group lstThConsumer;\n\n\n    // PREPARE ARGUMENTS\n    monitor.init(MAX_QUEUE_SIZE, &q);\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        lstThProducer.add_thread(new boost::thread(&producer<int>, &monitor, i * 1000));\n    }\n\n    for (int i = 0; i < NUM_CONSUMERS; ++i) {\n        lstThConsumer.add_thread(new boost::thread(&consumer<int>, &monitor));\n    }\n\n\n    // JOIN THREADS\n    lstThProducer.join_all();\n    lstThConsumer.join_all();\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer03a-readers-writers.cpp",
    "content": "/*\nTHE READERS-WRITERS PROBLEM\nSolution for the first readers-writers problem\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-random.hpp\"\nusing namespace std;\n\n\n\nstruct GlobalData {\n    volatile int resource;\n    int readerCount;\n\n    boost::mutex mutResource;\n    boost::mutex mutReaderCount;\n};\n\n\n\nvoid doTaskWriter(GlobalData* g, int delayTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(delayTime));\n\n    g->mutResource.lock();\n\n    g->resource = mylib::RandInt::get(100);\n    cout << \"Write \" << g->resource << endl;\n\n    g->mutResource.unlock();\n}\n\n\n\nvoid doTaskReader(GlobalData* g, int delayTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(delayTime));\n\n\n    // Increase reader count\n    g->mutReaderCount.lock();\n    g->readerCount += 1;\n\n    if (1 == g->readerCount)\n        g->mutResource.lock();\n\n    g->mutReaderCount.unlock();\n\n\n    // Do the reading\n    cout << \"Read \" << g->resource << endl;\n\n\n    // Decrease reader count\n    g->mutReaderCount.lock();\n    g->readerCount -= 1;\n\n    if (0 == g->readerCount)\n        g->mutResource.unlock();\n\n    g->mutReaderCount.unlock();\n}\n\n\n\nint main() {\n    GlobalData globalData;\n    globalData.resource = 0;\n    globalData.readerCount = 0;\n\n\n    const int NUM_READERS = 8;\n    const int NUM_WRITERS = 6;\n\n    boost::thread_group lstThReader;\n    boost::thread_group lstThWriter;\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_READERS; ++i) {\n        lstThReader.add_thread(new boost::thread(\n            &doTaskReader, &globalData, mylib::RandInt::get(3)\n        ));\n    }\n\n    for (int i = 0; i < NUM_WRITERS; ++i) {\n        lstThWriter.add_thread(new boost::thread(\n            &doTaskWriter, &globalData, mylib::RandInt::get(3)\n        ));\n    }\n\n\n    // JOIN THREADS\n    lstThReader.join_all();\n    lstThWriter.join_all();\n\n\n    
return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer03b-readers-writers.cpp",
    "content": "/*\nTHE READERS-WRITERS PROBLEM\nSolution for the third readers-writers problem\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-random.hpp\"\nusing namespace std;\n\n\n\nstruct GlobalData {\n    volatile int resource;\n    int readerCount;\n\n    boost::mutex mutResource;\n    boost::mutex mutReaderCount;\n\n    boost::mutex mutServiceQueue;\n};\n\n\n\nvoid doTaskWriter(GlobalData* g, int delayTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(delayTime));\n\n    g->mutServiceQueue.lock();\n\n    g->mutResource.lock();\n\n    g->mutServiceQueue.unlock();\n\n    g->resource = mylib::RandInt::get(100);\n    cout << \"Write \" << g->resource << endl;\n\n    g->mutResource.unlock();\n}\n\n\n\nvoid doTaskReader(GlobalData* g, int delayTime) {\n    boost::this_thread::sleep_for(boost::chrono::seconds(delayTime));\n\n\n    g->mutServiceQueue.lock();\n\n\n    // Increase reader count\n    g->mutReaderCount.lock();\n    g->readerCount += 1;\n\n    if (1 == g->readerCount)\n        g->mutResource.lock();\n\n    g->mutReaderCount.unlock();\n\n\n    g->mutServiceQueue.unlock();\n\n\n    // Do the reading\n    cout << \"Read \" << g->resource << endl;\n\n\n    // Decrease reader count\n    g->mutReaderCount.lock();\n    g->readerCount -= 1;\n\n    if (0 == g->readerCount)\n        g->mutResource.unlock();\n\n    g->mutReaderCount.unlock();\n}\n\n\n\nint main() {\n    GlobalData globalData;\n    globalData.resource = 0;\n    globalData.readerCount = 0;\n\n\n    const int NUM_READERS = 8;\n    const int NUM_WRITERS = 6;\n\n    boost::thread_group lstThReader;\n    boost::thread_group lstThWriter;\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_READERS; ++i) {\n        lstThReader.add_thread(new boost::thread(\n            &doTaskReader, &globalData, mylib::RandInt::get(3)\n        ));\n    }\n\n    for (int i = 0; i < NUM_WRITERS; ++i) {\n        lstThWriter.add_thread(new 
boost::thread(\n            &doTaskWriter, &globalData, mylib::RandInt::get(3)\n        ));\n    }\n\n\n    // JOIN THREADS\n    lstThReader.join_all();\n    lstThWriter.join_all();\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer04-dining-philosophers.cpp",
    "content": "/*\nTHE DINING PHILOSOPHERS PROBLEM\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\nvoid doTaskPhilosopher(boost::mutex chopstick[], int numPhilo, int idPhilo) {\n    int n = numPhilo;\n    int i = idPhilo;\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    chopstick[i].lock();\n    chopstick[(i + 1) % n].lock();\n\n    cout << \"Philosopher #\" << i << \" is eating the rice\" << endl;\n\n    chopstick[(i + 1) % n].unlock();\n    chopstick[i].unlock();\n}\n\n\n\nvoid doTaskPhilosopherUsingSyncBlock(boost::mutex chopstick[], int numPhilo, int idPhilo) {\n    int n = numPhilo;\n    int i = idPhilo;\n\n    boost::this_thread::sleep_for(boost::chrono::seconds(1));\n\n    {\n        boost::unique_lock<boost::mutex> ( chopstick[i] );\n        boost::unique_lock<boost::mutex> ( chopstick[(i + 1) % n] );\n        cout << \"Philosopher #\" << i << \" is eating the rice\" << endl;\n    }\n}\n\n\n\nint main() {\n    const int NUM_PHILOSOPHERS = 5;\n\n    boost::mutex chopstick[NUM_PHILOSOPHERS];\n    boost::thread_group lstTh;\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PHILOSOPHERS; ++i) {\n        lstTh.add_thread(new boost::thread(\n            &doTaskPhilosopher, chopstick, NUM_PHILOSOPHERS, i\n        ));\n    }\n\n    // JOIN THREADS\n    lstTh.join_all();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer05-util.hpp",
    "content": "#ifndef _EXER05_UTIL_HPP_\n#define _EXER05_UTIL_HPP_\n\n\n\nvoid getScalarProduct(double const* u, double const* v, int sizeVector, double* res) {\n    double sum = 0;\n\n    for (int i = sizeVector - 1; i >= 0; --i) {\n        sum += u[i] * v[i];\n    }\n\n    (*res) = sum;\n}\n\n\n\n#endif // _EXER05_UTIL_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/exer05a-product-matrix-vector.cpp",
    "content": "/*\nMATRIX-VECTOR MULTIPLICATION\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <boost/assign/std/vector.hpp>\n#include <boost/thread.hpp>\n#include \"exer05-util.hpp\"\nusing namespace std;\nusing namespace boost::assign;\n\n\n\ntypedef std::vector<double> vectord;\ntypedef std::vector<vectord> matrix;\n\n\n\nvoid getProduct(const matrix& mat, const vectord& vec, vectord& result) {\n    // Assume that size of mat and vec are both eligible\n    int sizeRowMat = mat.size();\n    int sizeColMat = mat[0].size();\n    int sizeVec = vec.size();\n\n    result.clear();\n    result.resize(sizeRowMat, 0);\n\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < sizeRowMat; ++i) {\n        lstTh.add_thread(new boost::thread(\n            &getScalarProduct, mat[i].data(), vec.data(), sizeVec, &result[i]\n        ));\n    }\n\n    lstTh.join_all();\n}\n\n\n\nint main() {\n    matrix A;\n\n    {\n        vectord row1, row2, row3;\n        row1 += 1, 2, 3;\n        row2 += 4, 5, 6;\n        row3 += 7, 8, 9;\n        A += row1, row2, row3;\n    }\n\n    vectord b;\n    b += 3, -1, 0;\n\n    vectord result;\n    getProduct(A, b, result);\n\n    for (int i = 0; i < result.size(); ++i) {\n        cout << result[i] << endl;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer05b-product-matrix-matrix.cpp",
    "content": "/*\nMATRIX-MATRIX MULTIPLICATION (DOT PRODUCT)\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <boost/assign/std/vector.hpp>\n#include <boost/thread.hpp>\n#include \"exer05-util.hpp\"\nusing namespace std;\nusing namespace boost::assign;\n\n\n\ntypedef std::vector<double> vectord;\ntypedef std::vector<vectord> matrix;\n\n\n\nvoid getTransposeMatrix(const matrix& input, matrix& output) {\n    int numRow = input.size();\n    int numCol = input[0].size();\n\n    output.clear();\n    output.assign(numCol, vectord(numRow, 0));\n\n    for (int i = 0; i < numRow; ++i)\n        for (int j = 0; j < numCol; ++j)\n            output[j][i] = input[i][j];\n}\n\n\n\nvoid displayMatrix(const matrix& mat) {\n    int numRow = mat.size();\n    int numCol = mat[0].size();\n\n    for (int i = 0; i < numRow; ++i) {\n        for (int j = 0; j < numCol; ++j)\n            cout << \"\\t\" << mat[i][j];\n\n        cout << endl;\n    }\n}\n\n\n\nvoid getProduct(const matrix& matA, const matrix& matB, matrix& result) {\n    // Assume that size of matA and matB are both eligible\n    int sizeRowA = matA.size();\n    int sizeColA = matA[0].size();\n    int sizeColB = matB[0].size();\n    int sizeTotal = sizeRowA * sizeColB;\n\n    result.clear();\n    result.assign(sizeRowA, vectord(sizeColB, 0));\n\n    matrix matBT;\n    getTransposeMatrix(matB, matBT);\n\n    boost::thread_group lstTh;\n\n    for (int i = 0; i < sizeRowA; ++i) {\n        for (int j = 0; j < sizeColB; ++j) {\n            int sizeVector = sizeColA;\n\n            lstTh.add_thread(new boost::thread(\n                &getScalarProduct, matA[i].data(), matBT[j].data(), sizeVector, &result[i][j]\n            ));\n        }\n    }\n\n    lstTh.join_all();\n}\n\n\n\nint main() {\n    matrix A, B;\n\n    {\n        vectord row1, row2;\n        row1 += 1, 3, 5;\n        row2 += 2, 4, 6;\n        A += row1, row2;\n    }\n\n    {\n        vectord row1, row2, row3;\n        row1 += 1, 0, 1, 0;\n        row2 += 
0, 1, 0, 1;\n        row3 += 1, 0, 0, -2;\n        B += row1, row2, row3;\n    }\n\n    matrix result;\n    getProduct(A, B, result);\n\n    displayMatrix(result);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer06a-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUE IMPLEMENTATION\nVersion A: Synchronous queues\n*/\n\n\n#include <iostream>\n#include <string>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass SynchronousQueue {\n\nprivate:\n    mylib::Semaphore semPut;\n    mylib::Semaphore semTake;\n    T element;\n\n\npublic:\n    SynchronousQueue() : semPut(1), semTake(0) { }\n\n\n    void put(const T& value) {\n        semPut.acquire();\n        element = value;\n        semTake.release();\n    }\n\n\n    T take() {\n        semTake.acquire();\n        T result = element;\n        semPut.release();\n        return result;\n    }\n\n};\n\n\n\nvoid producer(SynchronousQueue<std::string>* syncQueue) {\n    std::string arr[] = { \"lorem\", \"ipsum\", \"dolor\" };\n\n    for (int i = 0; i < 3; ++i) {\n        std::string& data = arr[i];\n        cout << \"Producer: \" << data << endl;\n        syncQueue->put(data);\n        cout << \"Producer: \" << data << \"\\t\\t\\t[done]\" << endl;\n    }\n}\n\n\n\nvoid consumer(SynchronousQueue<std::string>* syncQueue) {\n    std::string data;\n    boost::this_thread::sleep_for(boost::chrono::seconds(5));\n\n    for (int i = 0; i < 3; ++i) {\n        data = syncQueue->take();\n        cout << \"\\tConsumer: \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    SynchronousQueue<std::string> syncQueue;\n\n    boost::thread thProducer(&producer, &syncQueue);\n    boost::thread thConsumer(&consumer, &syncQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer06b01-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUE IMPLEMENTATION\nVersion B01: General blocking queues\n             Underlying mechanism: Semaphores\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <string>\n#include <stdexcept>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass BlockingQueue {\n\nprivate:\n    int capacity;\n\n    mylib::Semaphore semRemain;\n    mylib::Semaphore semFill;\n    boost::mutex mut;\n\n    std::queue<T> q;\n\n\npublic:\n    BlockingQueue(int capacity) : capacity(capacity), semRemain(capacity), semFill(0) {\n        if (this->capacity <= 0)\n            throw std::invalid_argument(\"capacity must be a positive integer\");\n    }\n\n\n    void put(const T& value) {\n        semRemain.acquire();\n\n        {\n            boost::unique_lock<boost::mutex> lk(mut);\n            q.push(value);\n        }\n\n        semFill.release();\n    }\n\n\n    T take() {\n        T result;\n        semFill.acquire();\n\n        {\n            boost::unique_lock<boost::mutex> lk(mut);\n            result = q.front();\n            q.pop();\n        }\n\n        semRemain.release();\n        return result;\n    }\n\n};\n\n\n\nvoid producer(BlockingQueue<std::string>* blkQueue) {\n    std::string arr[] = { \"nice\", \"to\", \"meet\", \"you\" };\n\n    for (int i = 0; i < 4; ++i) {\n        std::string& data = arr[i];\n        cout << \"Producer: \" << data << endl;\n        blkQueue->put(data);\n        cout << \"Producer: \" << data << \"\\t\\t\\t[done]\" << endl;\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<std::string>* blkQueue) {\n    std::string data;\n    boost::this_thread::sleep_for(boost::chrono::seconds(5));\n\n    for (int i = 0; i < 4; ++i) {\n        data = blkQueue->take();\n        cout << \"\\tConsumer: \" << data << endl;\n\n        if (0 == i)\n            boost::this_thread::sleep_for(boost::chrono::seconds(5));\n    }\n}\n\n\n\nint main() {\n    
BlockingQueue<std::string> blkQueue(2); // capacity = 2\n\n    boost::thread thProducer(&producer, &blkQueue);\n    boost::thread thConsumer(&consumer, &blkQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer06b02-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUE IMPLEMENTATION\nVersion B02: General blocking queues\n             Underlying mechanism: Condition variables\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <string>\n#include <stdexcept>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass BlockingQueue {\n\nprivate:\n    boost::condition_variable condEmpty;\n    boost::condition_variable condFull;\n    boost::mutex mut;\n\n    int capacity;\n    std::queue<T> q;\n\n\npublic:\n    BlockingQueue(int capacity) {\n        if (capacity <= 0)\n            throw std::invalid_argument(\"capacity must be a positive integer\");\n\n        this->capacity = capacity;\n    }\n\n\n    void put(const T& value) {\n        {\n            boost::unique_lock<boost::mutex> lk(mut);\n\n            while ((int)q.size() >= capacity) {\n                // Queue is full, must wait for 'take'\n                condFull.wait(lk);\n            }\n\n            q.push(value);\n        }\n\n        condEmpty.notify_one();\n    }\n\n\n    T take() {\n        T result;\n\n        {\n            boost::unique_lock<boost::mutex> lk(mut);\n\n            while (q.empty()) {\n                // Queue is empty, must wait for 'put'\n                condEmpty.wait(lk);\n            }\n\n            result = q.front();\n            q.pop();\n        }\n\n        condFull.notify_one();\n        return result;\n    }\n\n};\n\n\n\nvoid producer(BlockingQueue<std::string>* blkQueue) {\n    std::string arr[] = { \"nice\", \"to\", \"meet\", \"you\" };\n\n    for (int i = 0; i < 4; ++i) {\n        std::string& data = arr[i];\n        cout << \"Producer: \" << data << endl;\n        blkQueue->put(data);\n        cout << \"Producer: \" << data << \"\\t\\t\\t[done]\" << endl;\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<std::string>* blkQueue) {\n    std::string data;\n    boost::this_thread::sleep_for(boost::chrono::seconds(5));\n\n    for (int i = 0; i < 4; 
++i) {\n        data = blkQueue->take();\n        cout << \"\\tConsumer: \" << data << endl;\n\n        if (0 == i)\n            boost::this_thread::sleep_for(boost::chrono::seconds(5));\n    }\n}\n\n\n\nint main() {\n    BlockingQueue<std::string> blkQueue(2); // capacity = 2\n\n    boost::thread thProducer(&producer, &blkQueue);\n    boost::thread thConsumer(&consumer, &blkQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer07a-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion A: Solving the problem using a condition variable\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <boost/ref.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\nusing namespace std;\n\n\n\n#define sleepsec(secs) \\\n    do { boost::this_thread::sleep_for(boost::chrono::seconds(secs)); } while (0)\n\n\n\nstruct Counter {\n    int value;\n    boost::mutex mut;\n    boost::condition_variable cond;\n    Counter(int value) : value(value) { }\n};\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleepsec(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid processFiles(const vector<string>& lstFileName, Counter& counter) {\n    for (size_t i = 0; i < lstFileName.size(); ++i) {\n        const string& fileName = lstFileName[i];\n\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleepsec(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        {\n            boost::unique_lock<boost::mutex>(counter.mut);\n            --counter.value;\n            counter.cond.notify_one();\n        }\n\n        // Write log into disk\n        sleepsec(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n}\n\n\n\nvoid processRequest() {\n    vector<string> lstFileName;\n    lstFileName.push_back(\"foo.html\");\n    lstFileName.push_back(\"bar.json\");\n\n    Counter counter(lstFileName.size());\n\n    // The server checks auth user while reading files, concurrently\n    boost::thread th(&processFiles, boost::cref(lstFileName), boost::ref(counter));\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    {\n        boost::unique_lock<boost::mutex> lk(counter.mut);\n        while (counter.value > 0) {\n            counter.cond.wait_for(lk, boost::chrono::seconds(10)); // timeout = 10 
seconds\n        }\n    }\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    th.join();\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer07b-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion B: Solving the problem using a semaphore\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <boost/ref.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\nusing namespace std;\n\n\n\n#define sleepsec(secs) \\\n    do { boost::this_thread::sleep_for(boost::chrono::seconds(secs)); } while (0)\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleepsec(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid processFiles(const vector<string>& lstFileName, mylib::Semaphore& sem) {\n    for (size_t i = 0; i < lstFileName.size(); ++i) {\n        const string& fileName = lstFileName[i];\n\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleepsec(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        sem.release();\n\n        // Write log into disk\n        sleepsec(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n}\n\n\n\nvoid processRequest() {\n    vector<string> lstFileName;\n    lstFileName.push_back(\"foo.html\");\n    lstFileName.push_back(\"bar.json\");\n\n    mylib::Semaphore sem(0);\n\n    // The server checks auth user while reading files, concurrently\n    boost::thread th(&processFiles, boost::cref(lstFileName), boost::ref(sem));\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    for (size_t i = lstFileName.size(); i > 0; --i) {\n        sem.acquire();\n    }\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    th.join();\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer07c-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion C: Solving the problem using a count-down latch\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <boost/ref.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include <boost/thread/latch.hpp>\nusing namespace std;\n\n\n\n#define sleepsec(secs) \\\n    do { boost::this_thread::sleep_for(boost::chrono::seconds(secs)); } while (0)\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleepsec(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid processFiles(const vector<string>& lstFileName, boost::latch& rdLatch) {\n    for (size_t i = 0; i < lstFileName.size(); ++i) {\n        const string& fileName = lstFileName[i];\n\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleepsec(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        rdLatch.count_down();\n\n        // Write log into disk\n        sleepsec(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n}\n\n\n\nvoid processRequest() {\n    vector<string> lstFileName;\n    lstFileName.push_back(\"foo.html\");\n    lstFileName.push_back(\"bar.json\");\n\n    boost::latch readFileLatch(lstFileName.size());\n\n    // The server checks auth user while reading files, concurrently\n    boost::thread th(&processFiles, boost::cref(lstFileName), boost::ref(readFileLatch));\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    readFileLatch.wait();\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    th.join();\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer07d-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion D: Solving the problem using a blocking queue\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <boost/ref.hpp>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\n\n\n\n#define sleepsec(secs) \\\n    do { boost::this_thread::sleep_for(boost::chrono::seconds(secs)); } while (0)\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleepsec(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid processFiles(const vector<string>& lstFileName, mylib::BlockingQueue<string>& blkq) {\n    for (size_t i = 0; i < lstFileName.size(); ++i) {\n        const string& fileName = lstFileName[i];\n\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleepsec(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        blkq.put(fileName); // You may put file data here\n\n        // Write log into disk\n        sleepsec(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n}\n\n\n\nvoid processRequest() {\n    vector<string> lstFileName;\n    lstFileName.push_back(\"foo.html\");\n    lstFileName.push_back(\"bar.json\");\n\n    mylib::BlockingQueue<string> blkq;\n\n    // The server checks auth user while reading files, concurrently\n    boost::thread th(&processFiles, boost::cref(lstFileName), boost::ref(blkq));\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    for (size_t i = lstFileName.size(); i > 0; --i) {\n        blkq.take();\n    }\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    th.join();\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer08-exec-service-itask.hpp",
    "content": "#ifndef _MY_EXEC_SERVICE_ITASK_HPP_\n#define _MY_EXEC_SERVICE_ITASK_HPP_\n\n\n\n// interface ITask\nclass ITask {\npublic:\n    virtual ~ITask() { }\n    virtual void run() = 0;\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_ITASK_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/exer08-exec-service-main.cpp",
    "content": "/*\nEXECUTOR SERVICE & THREAD POOL IMPLEMENTATION\n*/\n\n\n#include <iostream>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"exer08-exec-service-itask.hpp\"\n#include \"exer08-exec-service-v0a.hpp\"\n#include \"exer08-exec-service-v0b.hpp\"\n#include \"exer08-exec-service-v1a.hpp\"\n#include \"exer08-exec-service-v1b.hpp\"\n#include \"exer08-exec-service-v2a.hpp\"\n#include \"exer08-exec-service-v2b.hpp\"\n\n\n\nclass MyTask : public ITask {\npublic:\n    char id;\n\npublic:\n    void run() {\n        std::cout << \"Task \" << id << \" is starting\" << std::endl;\n        boost::this_thread::sleep_for(boost::chrono::seconds(3));\n        std::cout << \"Task \" << id << \" is completed\" << std::endl;\n    }\n};\n\n\n\nint main() {\n    const int NUM_THREADS = 2;\n    const int NUM_TASKS = 5;\n\n\n    MyExecServiceV0A execService(NUM_THREADS);\n\n\n    std::vector<MyTask> lstTask(NUM_TASKS);\n\n    for (int i = 0; i < NUM_TASKS; ++i)\n        lstTask[i].id = 'A' + i;\n\n\n    for (int i = 0; i < NUM_TASKS; ++i) {\n        execService.submit(&lstTask[i]);\n    }\n\n    std::cout << \"All tasks are submitted\" << std::endl;\n\n\n    execService.waitTaskDone();\n    std::cout << \"All tasks are completed\" << std::endl;\n\n\n    execService.shutdown();\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-boost/exer08-exec-service-v0a.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 0A: The easiest executor service\n- It uses a blocking queue as underlying mechanism.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V0A_HPP_\n#define _MY_EXEC_SERVICE_V0A_HPP_\n\n\n\n#include <iostream>\n#include <vector>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-blockingqueue.hpp\"\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV0A {\n\nprivate:\n    int numThreads;\n    boost::thread_group lstTh;\n    mylib::BlockingQueue<ITask*> taskPending;\n\n\npublic:\n    MyExecServiceV0A(int numThreads) {\n        init(numThreads);\n    }\n\n\nprivate:\n    MyExecServiceV0A(const MyExecServiceV0A& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV0A& other) { }\n\n#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)\n    MyExecServiceV0A(const MyExecServiceV0A&& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV0A&& other) { }\n#endif\n\n\nprivate:\n    void init(int numThreads) {\n        this->numThreads = numThreads;\n\n        for (int i = 0; i < numThreads; ++i) {\n            lstTh.add_thread(new boost::thread(&threadWorkerFunc, this));\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        taskPending.add(task);\n    }\n\n\n    void waitTaskDone() {\n        // This ExecService is too simple,\n        // so there is no implementation for waitTaskDone()\n        boost::this_thread::sleep_for(boost::chrono::seconds(11)); // fake behaviour\n    }\n\n\n    void shutdown() {\n        // This ExecService is too simple,\n        // so there is no implementation for shutdown()\n        std::cout << \"No implementation for shutdown().\" << std::endl;\n        std::cout << \"You need to exit the app manually.\" << std::endl;\n        lstTh.join_all();\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV0A* thisPtr) {\n        mylib::BlockingQueue<ITask*> & taskPending = thisPtr->taskPending;\n     
   ITask* task = 0;\n\n        for (;;) {\n            // WAIT FOR AN AVAILABLE PENDING TASK\n            task = taskPending.take();\n\n            // DO THE TASK\n            task->run();\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V0A_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/exer08-exec-service-v0b.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 0B: The easiest executor service\n- It uses a blocking queue as underlying mechanism.\n- It supports waitTaskDone() and shutdown().\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V0B_HPP_\n#define _MY_EXEC_SERVICE_V0B_HPP_\n\n\n\n#include <vector>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"mylib-blockingqueue.hpp\"\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV0B {\n\nprivate:\n    int numThreads;\n    boost::thread_group lstTh;\n\n    mylib::BlockingQueue<ITask*> taskPending;\n    boost::atomic_int32_t counterTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n    const class : ITask {\n        void run() { }\n    }\n    emptyTask;\n\n\npublic:\n    MyExecServiceV0B(int numThreads) {\n        init(numThreads);\n    }\n\n\nprivate:\n    MyExecServiceV0B(const MyExecServiceV0B& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV0B& other) { }\n\n#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)\n    MyExecServiceV0B(const MyExecServiceV0B&& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV0B&& other) { }\n#endif\n\n\nprivate:\n    void init(int numThreads) {\n        this->numThreads = numThreads;\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (int i = 0; i < numThreads; ++i) {\n            lstTh.add_thread(new boost::thread(&threadWorkerFunc, this));\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        taskPending.add(task);\n    }\n\n\n    void waitTaskDone() {\n        // This ExecService is too simple,\n        // so there is no good implementation for waitTaskDone()\n        while (false == taskPending.empty() || counterTaskRunning > 0) {\n            boost::this_thread::sleep_for(boost::chrono::seconds(1));\n            // boost::this_thread::yield();\n        }\n    }\n\n\n    void shutdown() {\n        forceThreadShutdown = true;\n        
taskPending.clear();\n\n        // Invoke blocked threads by adding \"empty\" tasks\n        for (int i = 0; i < numThreads; ++i) {\n            taskPending.put( (ITask* const) &emptyTask );\n        }\n\n        lstTh.join_all();\n        numThreads = 0;\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV0B* thisPtr) {\n        mylib::BlockingQueue<ITask*> & taskPending = thisPtr->taskPending;\n        boost::atomic_int32_t & counterTaskRunning = thisPtr->counterTaskRunning;\n        volatile bool & forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = 0;\n\n        for (;;) {\n            // WAIT FOR AN AVAILABLE PENDING TASK\n            task = taskPending.take();\n\n            // If shutdown() was called, then exit the function\n            if (forceThreadShutdown) {\n                break;\n            }\n\n            // DO THE TASK\n            ++counterTaskRunning;\n            task->run();\n            --counterTaskRunning;\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V0B_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/exer08-exec-service-v1a.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 1A: Simple executor service\n- Method \"waitTaskDone\" invokes thread sleeps in loop (which can cause performance problems).\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V1A_HPP_\n#define _MY_EXEC_SERVICE_V1A_HPP_\n\n\n\n#include <vector>\n#include <queue>\n#include <boost/chrono.hpp>\n#include <boost/thread.hpp>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV1A {\n\nprivate:\n    typedef boost::unique_lock<boost::mutex> uniquelk;\n\n\nprivate:\n    int numThreads;\n    boost::thread_group lstTh;\n\n    std::queue<ITask*> taskPending;\n    boost::mutex mutTaskPending;\n    boost::condition_variable condTaskPending;\n\n    boost::atomic_int32_t counterTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV1A(int numThreads) {\n        init(numThreads);\n    }\n\n\nprivate:\n    MyExecServiceV1A(const MyExecServiceV1A& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV1A& other) { }\n\n#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)\n    MyExecServiceV1A(const MyExecServiceV1A&& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV1A&& other) { }\n#endif\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        this->numThreads = numThreads;\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (int i = 0; i < numThreads; ++i) {\n            lstTh.add_thread(new boost::thread(&threadWorkerFunc, this));\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        {\n            uniquelk lk(mutTaskPending);\n            taskPending.push(task);\n        }\n\n        condTaskPending.notify_one();\n    }\n\n\n    void waitTaskDone() {\n        bool done = false;\n\n        for (;;) {\n            {\n                uniquelk lk(mutTaskPending);\n\n                if (taskPending.empty() && 0 == counterTaskRunning) {\n                    done = true;\n         
       }\n            }\n\n            if (done) {\n                break;\n            }\n\n            boost::this_thread::sleep_for(boost::chrono::seconds(1));\n            // boost::this_thread::yield();\n        }\n    }\n\n\n    void shutdown() {\n        {\n            uniquelk lk(mutTaskPending);\n            forceThreadShutdown = true;\n\n            while (false == taskPending.empty())\n                taskPending.pop();\n        }\n\n        condTaskPending.notify_all();\n        lstTh.join_all();\n        numThreads = 0;\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV1A* thisPtr) {\n        std::queue<ITask*> & taskPending = thisPtr->taskPending;\n        boost::mutex & mutTaskPending = thisPtr->mutTaskPending;\n        boost::condition_variable & condTaskPending = thisPtr->condTaskPending;\n\n        boost::atomic_int32_t & counterTaskRunning = thisPtr->counterTaskRunning;\n        volatile bool & forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = 0;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                uniquelk lkPending(mutTaskPending);\n\n                while (taskPending.empty() && false == forceThreadShutdown) {\n                    condTaskPending.wait(lkPending);\n                }\n\n                if (forceThreadShutdown) {\n                    // lkPending.unlock(); // remember this statement\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                ++counterTaskRunning;\n            }\n\n            // DO THE TASK\n            task->run();\n\n            --counterTaskRunning;\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V1A_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/exer08-exec-service-v1b.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 1B: Simple executor service\n- Method \"waitTaskDone\" uses a condition variable to synchronize.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V1B_HPP_\n#define _MY_EXEC_SERVICE_V1B_HPP_\n\n\n\n#include <vector>\n#include <queue>\n#include <boost/thread.hpp>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV1B {\n\nprivate:\n    typedef boost::unique_lock<boost::mutex> uniquelk;\n\n\nprivate:\n    int numThreads;\n    boost::thread_group lstTh;\n\n    std::queue<ITask*> taskPending;\n    boost::mutex mutTaskPending;\n    boost::condition_variable condTaskPending;\n\n    int counterTaskRunning;\n    boost::mutex mutTaskRunning;\n    boost::condition_variable condTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV1B(int numThreads) {\n        init(numThreads);\n    }\n\n\nprivate:\n    MyExecServiceV1B(const MyExecServiceV1B& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV1B& other) { }\n\n#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)\n    MyExecServiceV1B(const MyExecServiceV1B&& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV1B&& other) { }\n#endif\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        this->numThreads = numThreads;\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (int i = 0; i < numThreads; ++i) {\n            lstTh.add_thread(new boost::thread(&threadWorkerFunc, this));\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        {\n            uniquelk lk(mutTaskPending);\n            taskPending.push(task);\n        }\n\n        condTaskPending.notify_one();\n    }\n\n\n    void waitTaskDone() {\n        for (;;) {\n            uniquelk lkPending(mutTaskPending);\n\n            if (taskPending.empty()) {\n                uniquelk lkRunning(mutTaskRunning);\n\n                while (counterTaskRunning > 0)\n      
              condTaskRunning.wait(lkRunning);\n\n                // no pending task and no running task\n                break;\n            }\n        }\n    }\n\n\n    void shutdown() {\n        {\n            uniquelk lk(mutTaskPending);\n            forceThreadShutdown = true;\n\n            while (false == taskPending.empty())\n                taskPending.pop();\n        }\n\n        condTaskPending.notify_all();\n        lstTh.join_all();\n        numThreads = 0;\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV1B* thisPtr) {\n        std::queue<ITask*> & taskPending = thisPtr->taskPending;\n        boost::mutex & mutTaskPending = thisPtr->mutTaskPending;\n        boost::condition_variable & condTaskPending = thisPtr->condTaskPending;\n\n        int & counterTaskRunning = thisPtr->counterTaskRunning;\n        boost::mutex & mutTaskRunning = thisPtr->mutTaskRunning;\n        boost::condition_variable & condTaskRunning = thisPtr->condTaskRunning;\n\n        volatile bool & forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = 0;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                uniquelk lkPending(mutTaskPending);\n\n                while (taskPending.empty() && false == forceThreadShutdown) {\n                    condTaskPending.wait(lkPending);\n                }\n\n                if (forceThreadShutdown) {\n                    // lkPending.unlock(); // remember this statement\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                ++counterTaskRunning;\n            }\n\n            // DO THE TASK\n            task->run();\n\n            {\n                uniquelk lkRunning(mutTaskRunning);\n                --counterTaskRunning;\n\n                if (0 == counterTaskRunning) {\n                    
condTaskRunning.notify_one();\n                }\n            }\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V1B_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/exer08-exec-service-v2a.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 2A: The executor service storing running tasks\n- Method \"waitTaskDone\" uses a semaphore to synchronize.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V2A_HPP_\n#define _MY_EXEC_SERVICE_V2A_HPP_\n\n\n\n#include <vector>\n#include <list>\n#include <queue>\n#include <boost/thread.hpp>\n#include \"mylib-semaphore.hpp\"\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV2A {\n\nprivate:\n    typedef boost::unique_lock<boost::mutex> uniquelk;\n\n\nprivate:\n    int numThreads;\n    boost::thread_group lstTh;\n\n    std::queue<ITask*> taskPending;\n    boost::mutex mutTaskPending;\n    boost::condition_variable condTaskPending;\n\n    std::list<ITask*> taskRunning;\n    boost::mutex mutTaskRunning;\n    mylib::Semaphore counterTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV2A(int numThreads) : counterTaskRunning(0) {\n        init(numThreads);\n    }\n\n\nprivate:\n    MyExecServiceV2A(const MyExecServiceV2A& other) : numThreads(0), counterTaskRunning(0) { }\n    void operator=(const MyExecServiceV2A& other) { }\n\n#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)\n    MyExecServiceV2A(const MyExecServiceV2A&& other) : numThreads(0), counterTaskRunning(0) { }\n    void operator=(const MyExecServiceV2A&& other) { }\n#endif\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        this->numThreads = numThreads;\n        forceThreadShutdown = false;\n\n        for (int i = 0; i < numThreads; ++i) {\n            lstTh.add_thread(new boost::thread(&threadWorkerFunc, this));\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        {\n            uniquelk lk(mutTaskPending);\n            taskPending.push(task);\n        }\n\n        condTaskPending.notify_one();\n    }\n\n\n    void waitTaskDone() {\n        for (;;) {\n            counterTaskRunning.acquire();\n\n            {\n                uniquelk 
lkPending(mutTaskPending);\n                uniquelk lkRunning(mutTaskRunning);\n\n                if (taskPending.empty() && taskRunning.empty()) {\n                    break;\n                }\n            }\n        }\n    }\n\n\n    void shutdown() {\n        {\n            uniquelk lk(mutTaskPending);\n            forceThreadShutdown = true;\n\n            while (false == taskPending.empty())\n                taskPending.pop();\n        }\n\n        condTaskPending.notify_all();\n        lstTh.join_all();\n        numThreads = 0;\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV2A* thisPtr) {\n        std::queue<ITask*> & taskPending = thisPtr->taskPending;\n        boost::mutex & mutTaskPending = thisPtr->mutTaskPending;\n        boost::condition_variable & condTaskPending = thisPtr->condTaskPending;\n\n        std::list<ITask*> & taskRunning = thisPtr->taskRunning;\n        boost::mutex & mutTaskRunning = thisPtr->mutTaskRunning;\n        mylib::Semaphore & counterTaskRunning = thisPtr->counterTaskRunning;\n\n        volatile bool & forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = 0;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                uniquelk lkPending(mutTaskPending);\n\n                while (taskPending.empty() && false == forceThreadShutdown) {\n                    condTaskPending.wait(lkPending);\n                }\n\n                if (forceThreadShutdown) {\n                    // lkPending.unlock(); // remember this statement\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                // PUSH IT TO THE RUNNING QUEUE\n                {\n                    uniquelk lkRunning(mutTaskRunning);\n                    taskRunning.push_back(task);\n                }\n            }\n\n            // DO THE 
TASK\n            task->run();\n\n            // REMOVE IT FROM THE RUNNING QUEUE\n            {\n                uniquelk lkRunning(mutTaskRunning);\n                taskRunning.remove(task);\n                counterTaskRunning.release();\n            }\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V2A_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/exer08-exec-service-v2b.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 2B: The executor service storing running tasks\n- Method \"waitTaskDone\" uses a condition variable to synchronize.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V2B_HPP_\n#define _MY_EXEC_SERVICE_V2B_HPP_\n\n\n\n#include <vector>\n#include <list>\n#include <queue>\n#include <boost/thread.hpp>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV2B {\n\nprivate:\n    typedef boost::unique_lock<boost::mutex> uniquelk;\n\n\nprivate:\n    int numThreads;\n    boost::thread_group lstTh;\n\n    std::queue<ITask*> taskPending;\n    boost::mutex mutTaskPending;\n    boost::condition_variable condTaskPending;\n\n    std::list<ITask*> taskRunning;\n    boost::mutex mutTaskRunning;\n    boost::condition_variable condTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV2B(int numThreads) {\n        init(numThreads);\n    }\n\n\nprivate:\n    MyExecServiceV2B(const MyExecServiceV2B& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV2B& other) { }\n\n#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)\n    MyExecServiceV2B(const MyExecServiceV2B&& other) : numThreads(0) { }\n    void operator=(const MyExecServiceV2B&& other) { }\n#endif\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        this->numThreads = numThreads;\n        forceThreadShutdown = false;\n\n        for (int i = 0; i < numThreads; ++i) {\n            lstTh.add_thread(new boost::thread(&threadWorkerFunc, this));\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        {\n            uniquelk lk(mutTaskPending);\n            taskPending.push(task);\n        }\n\n        condTaskPending.notify_one();\n    }\n\n\n    void waitTaskDone() {\n        for (;;) {\n            uniquelk lkPending(mutTaskPending);\n\n            if (taskPending.empty()) {\n                uniquelk lkRunning(mutTaskRunning);\n\n                while (false == 
taskRunning.empty())\n                    condTaskRunning.wait(lkRunning);\n\n                // no pending task and no running task\n                break;\n            }\n        }\n    }\n\n\n    void shutdown() {\n        {\n            uniquelk lk(mutTaskPending);\n            forceThreadShutdown = true;\n\n            while (false == taskPending.empty())\n                taskPending.pop();\n        }\n\n        condTaskPending.notify_all();\n        lstTh.join_all();\n        numThreads = 0;\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV2B* thisPtr) {\n        std::queue<ITask*> & taskPending = thisPtr->taskPending;\n        boost::mutex & mutTaskPending = thisPtr->mutTaskPending;\n        boost::condition_variable & condTaskPending = thisPtr->condTaskPending;\n\n        std::list<ITask*> & taskRunning = thisPtr->taskRunning;\n        boost::mutex & mutTaskRunning = thisPtr->mutTaskRunning;\n        boost::condition_variable & condTaskRunning = thisPtr->condTaskRunning;\n\n        volatile bool & forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = 0;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                uniquelk lkPending(mutTaskPending);\n\n                while (taskPending.empty() && false == forceThreadShutdown) {\n                    condTaskPending.wait(lkPending);\n                }\n\n                if (forceThreadShutdown) {\n                    // lkPending.unlock(); // remember this statement\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                // PUSH IT TO THE RUNNING QUEUE\n                {\n                    uniquelk lkRunning(mutTaskRunning);\n                    taskRunning.push_back(task);\n                }\n            }\n\n            // DO THE TASK\n            task->run();\n\n     
       // REMOVE IT FROM THE RUNNING QUEUE\n            {\n                uniquelk lkRunning(mutTaskRunning);\n                taskRunning.remove(task);\n                condTaskRunning.notify_one();\n            }\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V2B_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/mylib-blockingqueue.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-blockingqueue.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The blocking queue implementation in C++98 Boost threading\n*\n******************************************************/\n\n\n\n#ifndef _MYLIB_BLOCKING_QUEUE_HPP_\n#define _MYLIB_BLOCKING_QUEUE_HPP_\n\n\n\n#include <limits>\n#include <queue>\n#include <boost/thread/mutex.hpp>\n#include <boost/thread/condition_variable.hpp>\n#include <boost/thread/lock_types.hpp>\n#include <boost/thread/thread.hpp>\n\n\n\nnamespace mylib\n{\n\n\n\ntemplate <typename T>\nclass BlockingQueue {\n\nprivate:\n    typedef boost::unique_lock<boost::mutex> uniquelk;\n\n\nprivate:\n    boost::condition_variable condEmpty;\n    boost::condition_variable condFull;\n    boost::mutex mut;\n\n    size_t capacity;\n    std::queue<T> q;\n\n\npublic:\n    BlockingQueue() : capacity(std::numeric_limits<size_t>::max()) {\n    }\n\n\n    BlockingQueue(size_t capacity) : capacity(capacity) {\n    }\n\n\nprivate:\n    BlockingQueue(const BlockingQueue& other) { }\n    void operator=(const BlockingQueue& other) { }\n\n#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)\n    BlockingQueue(const BlockingQueue&& other) { }\n    void operator=(const BlockingQueue&& other) { }\n#endif\n\n\npublic:\n    bool empty() const {\n        return q.empty();\n    }\n\n\n    size_t size() const {\n        return q.size();\n    }\n\n\n    // sync enqueue\n    void put(const T& value) {\n        uniquelk lk(mut);\n\n        while (q.size() >= capacity) {\n            condFull.wait(lk);\n        }\n\n        q.push(value);\n        condEmpty.notify_one();\n    }\n\n\n    // sync dequeue\n    T take() {\n        uniquelk lk(mut);\n\n        while (q.empty()) {\n            condEmpty.wait(lk);\n        }\n\n        T result = q.front();\n    
    q.pop();\n        condFull.notify_one();\n\n        return result;\n    }\n\n\n    // async enqueue\n    void add(const T& value) {\n        // Note: For asynchronous operations, we should use a long-lived background thread\n        // instead of using a temporary thread\n        boost::thread(&BlockingQueue<T>::put, this, value).detach();\n    }\n\n\n    // returns false if queue is empty, otherwise returns true and assigns the result\n    // (non-const, since it must lock the mutex)\n    bool peek(T& result) {\n        uniquelk lk(mut); // note: \"uniquelk(mut);\" declares a local variable named \"mut\" and locks nothing\n\n        if (q.empty()) {\n            return false;\n        }\n\n        result = q.front();\n        return true;\n    }\n\n\n    void clear() {\n        uniquelk lk(mut);\n\n        while (false == q.empty()) {\n            q.pop();\n        }\n    }\n\n}; // BlockingQueue\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_BLOCKING_QUEUE_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/mylib-random.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-random.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The random utility in C++98 Boost\n*\n******************************************************/\n\n\n\n#ifndef _MYLIB_RANDOM_HPP_\n#define _MYLIB_RANDOM_HPP_\n\n\n\n#include <limits>\n#include <boost/random/random_device.hpp>\n#include <boost/random/mersenne_twister.hpp>\n#include <boost/random/uniform_int_distribution.hpp>\n\n\n\nnamespace mylib {\n\n\n\nclass RandInt {\n\nprivate:\n    boost::random::random_device rd;\n    boost::random::mt19937 mt;\n    boost::random::uniform_int_distribution<int> dist;\n\n\npublic:\n    RandInt() {\n        init(0, std::numeric_limits<int>::max());\n    }\n\n\n    RandInt(int minValue, int maxValueInclusive) {\n        init(minValue, maxValueInclusive);\n    }\n\n\n    void init(int minValue, int maxValueInclusive) {\n        dist = boost::random::uniform_int_distribution<int>(minValue, maxValueInclusive);\n        mt.seed(rd());\n    }\n\n\n    int next() {\n        return dist(mt);\n    }\n\n\nprivate:\n    RandInt(const RandInt& other) { }\n    void operator=(const RandInt& other) { }\n\n\n// STATIC\nprivate:\n    static RandInt publicRandInt;\n\npublic:\n    static int get(int maxExclusive) {\n        return publicRandInt.next() % maxExclusive;\n    }\n\n}; // RandInt\n\n\n\nRandInt RandInt::publicRandInt;\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_RANDOM_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/mylib-semaphore.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-semaphore.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The semaphore implementation in C++98 Boost threading\n*               This is just a simulation based on the condition variable\n*\n******************************************************/\n\n\n\n#ifndef _MYLIB_SEMAPHORE_HPP_\n#define _MYLIB_SEMAPHORE_HPP_\n\n\n\n#include <queue>\n#include <limits>\n#include <boost/thread/mutex.hpp>\n#include <boost/thread/condition_variable.hpp>\n#include <boost/thread/lock_types.hpp>\n\n\n\nnamespace mylib\n{\n\n\n\nclass Semaphore {\n\nprivate:\n    typedef boost::unique_lock<boost::mutex> uniquelk;\n\n\nprivate:\n    volatile int value;\n    boost::condition_variable condFreeState;\n    boost::mutex mut;\n\n    static int MIN_VALUE;\n    static int MAX_VALUE;\n\n\npublic:\n    Semaphore(int initialValue) {\n        this->value = initialValue;\n    }\n\n\nprivate:\n    Semaphore(const Semaphore& other) { }\n    void operator=(const Semaphore& other) { }\n\n#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)\n    Semaphore(const Semaphore&& other) { }\n    void operator=(const Semaphore&& other) { }\n#endif\n\n\npublic:\n    void acquire() {\n        uniquelk lk(mut);\n\n        while (value <= 0) {\n            condFreeState.wait(lk);\n        }\n\n        if (value > MIN_VALUE) {\n            --value;\n        }\n    }\n\n\n    void release() {\n        uniquelk lk(mut);\n\n        if (value < MAX_VALUE) {\n            ++value;\n        }\n\n        if (value >= 0) {\n            condFreeState.notify_one();\n        }\n    }\n\n\n    int getValue() const {\n        return value; // does not block\n    }\n\n}; // Semaphore\n\n\n\nint Semaphore::MIN_VALUE = std::numeric_limits<int>::min();\nint Semaphore::MAX_VALUE = 
std::numeric_limits<int>::max();\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_SEMAPHORE_HPP_\n"
  },
  {
    "path": "cpp/cpp-boost/mylib-time.hpp",
"content": "/******************************************************\n*\n* File name:    mylib-time.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The time utility in C++98 Boost\n*\n******************************************************/\n\n\n\n#ifndef _MYLIB_TIME_HPP_\n#define _MYLIB_TIME_HPP_\n\n\n\n#include <ctime>\n#include <boost/chrono.hpp>\n\n\n\nnamespace mylib\n{\n\n\n\nnamespace chro = boost::chrono;\ntypedef chro::system_clock sysclock;\n\n\n\nclass HiResClock {\n\nprivate:\n    typedef chro::high_resolution_clock stdhrc;\n\n\npublic:\n    static inline stdhrc::time_point now()\n    {\n        return stdhrc::now();\n    }\n\n\n    template< typename duType >\n    static inline\n    duType\n    getTimeSpan(\n        const stdhrc::time_point& tp1,\n        const stdhrc::time_point& tp2)\n    {\n        duType res = chro::duration_cast<duType>(tp2 - tp1);\n        return res;\n    }\n\n\n    template< typename duType >\n    static inline\n    duType\n    getTimeSpan(const stdhrc::time_point& tpBefore)\n    {\n        stdhrc::time_point tpCurrent = HiResClock::now();\n        duType res = HiResClock::getTimeSpan<duType>(tpBefore, tpCurrent);\n        return res;\n    }\n\n}; // HiResClock\n\n\n\n// Note: std::ctime returns a pointer to a static buffer, so this is not thread-safe\nchar* getTimePointStr(const sysclock::time_point& tp) {\n    std::time_t timeStamp = sysclock::to_time_t(tp);\n    return std::ctime(&timeStamp);\n}\n\n\n\ntemplate<class clock>\ntypename clock::time_point getTimePoint(\n    int year, int month, int day,\n    int hour, int minute, int second)\n{\n    std::tm t = std::tm();\n    t.tm_year = year - 1900;\n    t.tm_mon = month - 1;\n    t.tm_mday = day;\n    t.tm_hour = hour;\n    t.tm_min = minute;\n    t.tm_sec = second;\n    t.tm_isdst = -1; // let mktime determine whether DST is in effect\n    return clock::from_time_t(std::mktime(&t));\n}\n\n\n\n// tp += numSeconds * 2;\n// tp -= (x % numSeconds)\ntemplate<class clock, typename duType>\nchro::time_point<clock>\ngetTimePointFutureFloor(const chro::time_point<clock>& tp, int numSeconds) {\n    chro::seconds duSeconds(numSeconds);\n\n    chro::duration<duType> durationFromTp = chro::time_point_cast<chro::seconds>(tp).time_since_epoch();\n\n    chro::duration<duType> durationFuture = durationFromTp + (duSeconds * 2);\n    durationFuture = durationFuture - (durationFuture % duSeconds);\n\n    chro::time_point<clock> tpFuture(durationFuture);\n    return tpFuture;\n}\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_TIME_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/demo00.cpp",
    "content": "/*\nINTRODUCTION TO MULTITHREADING\nYou should try running this app several times and see results.\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void*) {\n    for (int i = 0; i < 300; ++i)\n        cout << \"B\";\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int ret = 0;\n\n    ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n\n    for (int i = 0; i < 300; ++i)\n        cout << \"A\";\n\n    ret = pthread_join(tid, nullptr);\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo01-hello.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void*) {\n    cout << \"Hello from example thread\" << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int ret = 0;\n\n    ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n\n    /*\n    if (ret) {\n        cerr << \"Error: Unable to create thread \" << ret << endl;\n        return 1;\n    }\n    */\n\n    ret = pthread_join(tid, nullptr);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo02-join.cpp",
    "content": "/*\nTHREAD JOINS\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doHeavyTask(void*) {\n    // Do a heavy task, which takes a little time\n    for (int i = 0; i < 2000000000; ++i);\n\n    cout << \"Done!\" << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int ret = 0;\n\n    ret = pthread_create(&tid, nullptr, &doHeavyTask, nullptr);\n\n    ret = pthread_join(tid, nullptr);\n\n    cout << \"Good bye!\" << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo03a01-pass-arg.cpp",
"content": "/*\nPASSING ARGUMENTS\nVersion A01: The problem\n\nThe id in statement \"hello pthread with id...\" might be DUPLICATED!!!\nReason: The address of the loop variable i is passed,\n        so all threads share the same variable i.\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void* ptrId) {\n    int id = *(int*)ptrId;\n\n    cout << \"Hello pthread with id = \" << id << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t lstTid[2];\n    int ret = 0;\n\n    for (int i = 0; i < 2; ++i)\n        ret = pthread_create(&lstTid[i], nullptr, &doTask, &i);\n\n    for (int i = 0; i < 2; ++i)\n        ret = pthread_join(lstTid[i], nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo03a02-pass-arg.cpp",
    "content": "/*\nPASSING ARGUMENTS\nVersion A02: Solving the problem\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void* ptrId) {\n    int id = *(int*)ptrId;\n\n    cout << \"Hello pthread with id = \" << id << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t lstTid[2];\n    int lstArg[2];\n    int ret = 0;\n\n    for (int i = 0; i < 2; ++i) {\n        lstArg[i] = i + 1;\n        ret = pthread_create(&lstTid[i], nullptr, &doTask, &lstArg[i]);\n    }\n\n    for (int i = 0; i < 2; ++i)\n        ret = pthread_join(lstTid[i], nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo03b01-pass-arg.cpp",
    "content": "/*\nPASSING MULTIPLE ARGUMENTS\nSolution 01: Creating a custom struct\n*/\n\n\n#include <iostream>\n#include <string>\n#include <pthread.h>\nusing namespace std;\n\n\n\nstruct ThreadArg {\n    int x;\n    double y;\n    string z;\n};\n\n\n\nvoid* doTask(void* argVoid) {\n    auto arg = (ThreadArg*) argVoid;\n\n    cout << arg->x << endl;\n    cout << arg->y << endl;\n    cout << arg->z << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    ThreadArg arg;\n    int ret = 0;\n\n    arg.x = 10;\n    arg.y = -2.4;\n    arg.z = \"lorem ipsum\";\n\n    ret = pthread_create(&tid, nullptr, &doTask, &arg);\n    ret = pthread_join(tid, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo03b02-pass-arg.cpp",
    "content": "/*\nPASSING MULTIPLE ARGUMENTS\nSolution 02: Using std::tuple\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void* argVoid) {\n    auto arg = * (tuple<int,double,string> *) argVoid;\n\n    cout << std::get<0>(arg) << endl;\n    cout << std::get<1>(arg) << endl;\n    cout << std::get<2>(arg) << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    tuple<int,double,string> arg;\n    int ret = 0;\n\n    // arg = std::make_tuple( 10, -2.4, \"lorem ipsum\" );\n    arg = { 10, -2.4, \"lorem ipsum\" };\n\n    ret = pthread_create(&tid, nullptr, &doTask, &arg);\n    ret = pthread_join(tid, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo04-sleep.cpp",
    "content": "/*\nSLEEP\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void* arg) {\n    auto name = (const char*) arg;\n\n    cout << name << \" is sleeping\" << endl;\n    sleep(2);\n    cout << name << \" wakes up\" << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int ret = 0;\n\n    ret = pthread_create(&tid, nullptr, &doTask, (void*)\"foo\");\n\n    ret = pthread_join(tid, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo05-id.cpp",
    "content": "/*\nGETTING THREAD'S ID\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void*) {\n    sleep(2);\n    cout << pthread_self() << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidFoo, tidBar;\n    int ret = 0;\n\n    ret = pthread_create(&tidFoo, nullptr, &doTask, nullptr);\n    ret = pthread_create(&tidBar, nullptr, &doTask, nullptr);\n\n    cout << \"foo's id = \" << tidFoo << endl;\n    cout << \"bar's id = \" << tidBar << endl;\n\n    ret = pthread_join(tidFoo, nullptr);\n    ret = pthread_join(tidBar, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo06a-list-threads.cpp",
    "content": "/*\nLIST OF MULTIPLE THREADS\nVersion A: Using standard arrays\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void* arg) {\n    auto index = *(int*) arg;\n    cout << index;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 5;\n\n    pthread_t lstTid[NUM_THREADS];\n    int lstArg[NUM_THREADS];\n\n    int ret = 0;\n\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstArg[i] = i;\n        ret = pthread_create(&lstTid[i], nullptr, &doTask, &lstArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo06b-list-threads.cpp",
    "content": "/*\nLIST OF MULTIPLE THREADS\nVersion B: Using the std::vector\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void* arg) {\n    auto index = *(int*) arg;\n    cout << index;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 5;\n\n    vector<pthread_t> lstTid(NUM_THREADS);\n    vector<int> lstArg(NUM_THREADS);\n\n    int ret = 0;\n\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstArg[i] = i;\n        ret = pthread_create(&lstTid[i], nullptr, &doTask, &lstArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo07a-terminate.cpp",
    "content": "/*\nFORCING A THREAD TO TERMINATE (i.e. killing the thread)\nVersion A: Using the flag 'isRunning'\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvolatile bool isRunning;\n\n\n\nvoid* doTask(void*) {\n    while (isRunning) {\n        cout << \"Running...\" << endl;\n        sleep(2);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int ret = 0;\n\n    isRunning = true;\n    ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n\n    sleep(6);\n    isRunning = false;\n\n    ret = pthread_join(tid, nullptr);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo07b-terminate.cpp",
    "content": "/*\nFORCING A THREAD TO TERMINATE (i.e. killing the thread)\nVersion B: Using pthread_cancel\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void*) {\n    while (1) {\n        cout << \"Running...\" << endl;\n        sleep(1);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int ret = 0;\n\n    ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n\n    sleep(3);\n\n    ret = pthread_cancel(tid);\n\n    ret = pthread_join(tid, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo08a-return-value.cpp",
    "content": "/*\nGETTING RETURNED VALUES FROM THREADS\nVersion A: Values returned via pointers passed from arguments\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\nstruct ThreadArg {\n    int value;\n    int *res;\n};\n\n\n\nvoid* doubleValue(void* argVoid) {\n    auto arg = (ThreadArg*) argVoid;\n\n    *(arg->res) = arg->value * 2;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int result;\n    ThreadArg arg;\n    int ret = 0;\n\n    arg = { 80, &result };\n\n    ret = pthread_create(&tid, nullptr, &doubleValue, &arg);\n    ret = pthread_join(tid, nullptr);\n\n    cout << result << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo08b-return-value.cpp",
    "content": "/*\nGETTING RETURNED VALUES FROM THREADS\nVersion B: Values returned via pthread_exit\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doubleValue(void* arg) {\n    auto value = *(int*) arg;\n\n    int *result = new int;\n    *result = value * 2;\n\n    pthread_exit((void*)result);\n    return (void*)result;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int arg = 80;\n    int *result = nullptr;\n    int ret = 0;\n\n    ret = pthread_create(&tid, nullptr, &doubleValue, &arg);\n    ret = pthread_join(tid, (void**)&result);\n\n    cout << (*result) << endl;\n\n    delete result;\n    result = nullptr;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo09a-detach.cpp",
    "content": "/*\nTHREAD DETACHING\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* foo(void*) {\n    cout << \"foo is starting...\" << endl;\n\n    sleep(2);\n\n    cout << \"foo is exiting...\" << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidFoo;\n    int ret = 0;\n\n\n    ret = pthread_create(&tidFoo, nullptr, &foo, nullptr);\n    ret = pthread_detach(tidFoo);\n\n    if (ret) {\n        cout << \"Error: Cannot detach tidFoo\" << endl;\n    }\n\n\n    // ret = pthread_join(tidFoo, nullptr);\n    // if (ret) {\n    //     cout << \"Error: Cannot join tidFoo\" << endl;\n    // }\n\n\n    // If I comment this statement,\n    // tidFoo will be forced into terminating with main thread\n    sleep(3);\n\n\n    cout << \"Main thread is exiting\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo09b-detach.cpp",
"content": "/*\nTHREAD DETACHING\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* foo(void*) {\n    int ret = 0;\n\n    cout << \"foo is starting...\" << endl;\n\n    if ( (ret = pthread_detach(pthread_self())) != 0 ) {\n        cout << \"Error: Cannot detach\" << endl;\n    }\n\n    sleep(2);\n\n    cout << \"foo is exiting...\" << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidFoo;\n    int ret = 0;\n\n    ret = pthread_create(&tidFoo, nullptr, &foo, nullptr);\n\n    sleep(3);\n\n    cout << \"Main thread is exiting\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo10-yield.cpp",
"content": "/*\nTHREAD YIELDING\n*/\n\n\n#include <iostream>\n#include <sched.h>\n#include <pthread.h>\n#include <chrono>\n#include \"../cpp-std/mylib-time.hpp\"\nusing namespace std;\n\n\n\nusing chrmicro = std::chrono::microseconds;\nusing hrclock = mylib::HiResClock;\n\n\n\nvoid littleSleep(int us) {\n    auto tpStart = hrclock::now();\n    auto tpEnd = tpStart + chrmicro(us);\n\n    int ret = 0;\n\n    do {\n        // ret = pthread_yield();\n        ret = sched_yield();\n    }\n    while (hrclock::now() < tpEnd);\n}\n\n\n\nint main() {\n    auto tpStartMeasure = hrclock::now();\n\n    littleSleep(130);\n\n    auto timeElapsed = hrclock::getTimeSpan<chrmicro>(tpStartMeasure);\n\n    cout << \"Elapsed time: \" << timeElapsed.count() << \" microseconds\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo11a-exec-service.cpp",
    "content": "/*\nEXECUTOR SERVICES AND THREAD POOLS\n\nExecutor services in C++ POSIX threading are not supported by default.\nSo, I use mylib::ExecService for this demonstration.\n*/\n\n\n#include <iostream>\n#include \"mylib-execservice.hpp\"\nusing namespace std;\n\n\n\nvoid doTask() {\n    cout << \"Hello the Executor Service\" << endl;\n}\n\n\n\nclass MyFunctor {\npublic:\n    void operator()() {\n        cout << \"Hello Multithreading\" << endl;\n    }\n};\n\n\n\nint main() {\n    // INIT THE EXECUTOR SERVICE WITH 2 THREADS\n    auto execService = mylib::ExecService(2);\n\n\n    // SUBMIT\n    execService.submit([] { cout << \"Hello World\" << endl; });\n\n    execService.submit(&doTask);\n\n    execService.submit(MyFunctor());\n\n\n    // WAIT FOR THE COMPLETION OF ALL TASKS AND SHUTDOWN EXECUTOR SERVICE\n    execService.waitTaskDone();\n    execService.shutdown();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo11b-exec-service.cpp",
    "content": "/*\nEXECUTOR SERVICES AND THREAD POOLS\n\nExecutor services in C++ POSIX threading are not supported by default.\nSo, I use mylib::ExecService for this demonstration.\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include \"mylib-execservice.hpp\"\nusing namespace std;\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 2;\n    constexpr int NUM_TASKS = 5;\n\n    auto execService = mylib::ExecService(NUM_THREADS);\n\n    for (int i = 0; i < NUM_TASKS; ++i) {\n        execService.submit([=] {\n            char id = 'A' + i;\n            cout << \"Task \" << id << \" is starting\" << endl;\n            sleep(3);\n            cout << \"Task \" << id << \" is completed\" << endl;\n        });\n    }\n\n    cout << \"All tasks are submitted\" << endl;\n\n    execService.waitTaskDone();\n    cout << \"All tasks are completed\" << endl;\n\n    execService.shutdown();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo12a-race-condition.cpp",
    "content": "/*\nRACE CONDITIONS\n*/\n\n\n#include <iostream>\n#include <pthread.h>\n#include <unistd.h>\nusing namespace std;\n\n\n\nvoid* doTask(void* arg) {\n    int index = *(int*) arg;\n\n    sleep(1);\n\n    cout << index;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 4;\n\n    pthread_t lstTid[NUM_THREADS];\n    int lstArg[NUM_THREADS];\n\n    int ret = 0;\n\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstArg[i] = i;\n        ret = pthread_create(&lstTid[i], nullptr, &doTask, &lstArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo12b01-data-race-single.cpp",
    "content": "/*\nDATA RACES\nVersion 01: Without multithreading\n*/\n\n\n#include <iostream>\n#include <cstdlib>\nusing namespace std;\n\n\n\nint *a = nullptr;\nint N = 0;\n\n\n\nint getResult() {\n    a = (int*)calloc(sizeof(int), N + 1);\n\n    for (int i = 1; i <= N; ++i)\n        if (0 == i % 2 || 0 == i % 3)\n            a[i] = 1;\n\n    int result = 0;\n\n    for (int i = 1; i <= N; ++i)\n        if (a[i])\n            ++result;\n\n    free(a);\n    a = nullptr;\n\n    return result;\n}\n\n\n\nint main() {\n    N = 8;\n\n    int result = getResult();\n\n    cout << \"Numbers of integers that are divisible by 2 or 3 is: \" << result << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo12b02-data-race-multi.cpp",
    "content": "/*\nDATA RACES\nVersion 02: Multithreading\n*/\n\n\n#include <iostream>\n#include <cstdlib>\n#include <pthread.h>\nusing namespace std;\n\n\n\nint* a = nullptr;\nint N = 0;\n\n\n\nvoid* markDiv2(void*) {\n    for (int i = 2; i <= N; i += 2)\n        a[i] = 1;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* markDiv3(void*) {\n    for (int i = 3; i <= N; i += 3)\n        a[i] = 1;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    N = 8;\n\n    pthread_t tidDiv2, tidDiv3;\n    int ret = 0;\n\n    a = (int*)calloc(sizeof(int), N + 1);\n\n    ret = pthread_create(&tidDiv2, nullptr, &markDiv2, nullptr);\n    ret = pthread_create(&tidDiv3, nullptr, &markDiv3, nullptr);\n    ret = pthread_join(tidDiv2, nullptr);\n    ret = pthread_join(tidDiv3, nullptr);\n\n    int result = 0;\n\n    for (int i = 1; i <= N; ++i)\n        if (a[i])\n            ++result;\n\n    free(a);\n    a = nullptr;\n\n    cout << \"Numbers of integers that are divisible by 2 or 3 is: \" << result << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo12bex-data-race-fork.cpp",
    "content": "/*\nDATA RACES\n*/\n\n\n#include <iostream>\n#include <fstream>\n#include <unistd.h>\nusing namespace std;\n\n\n\nint main() {\n    int pid = 0;\n\n    pid = fork();\n\n    if (-1 == pid) {\n        cerr << \"Cannot fork\" << endl;\n        return 1;\n    }\n\n\n    ofstream ofs;\n    ofs.open(\"tmp-output.txt\");\n\n    if (ofs.fail()) {\n        return 1;\n    }\n\n    cout << \"Writing to the file...\" << endl;\n\n    ofs << pid << endl;\n\n    ofs.close();\n    return 0;\n}\n\n\n/*\nThe content of the file is UNKNOWN.\nIt may be:\n    0\n\nor\n    34\n\nor\n    0\n    34\n\nor\n    34\n    0\n\nAssume that 34 and 0 are process ids.\n*/\n"
  },
  {
    "path": "cpp/cpp-pthread/demo12c01-race-cond-data-race.cpp",
    "content": "/*\nRACE CONDITIONS AND DATA RACES\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nint counter = 0;\n\n\n\nvoid* increaseCounter(void*) {\n    sleep(1);\n\n    for (int i = 0; i < 1000; ++i) {\n        counter += 1;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 16;\n    pthread_t lstTid[NUM_THREADS];\n    int ret = 0;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_create(&lstTid[i], nullptr, &increaseCounter, nullptr);\n    }\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_join(lstTid[i], nullptr);\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // We are NOT sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo12c02-race-cond-data-race.cpp",
    "content": "/*\nRACE CONDITIONS AND DATA RACES\n*/\n\n\n#include <iostream>\n#include <pthread.h>\n#include <unistd.h>\nusing namespace std;\n\n\n\nint counter = 0;\n\n\n\nvoid* doTaskA(void*) {\n    sleep(1);\n\n    while (counter < 10)\n        ++counter;\n\n    cout << \"A won !!!\" << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* doTaskB(void*) {\n    sleep(1);\n\n    while (counter > -10)\n        --counter;\n\n    cout << \"B won !!!\" << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidA, tidB;\n    int ret = 0;\n\n    ret = pthread_create(&tidA, nullptr, &doTaskA, nullptr);\n    ret = pthread_create(&tidB, nullptr, &doTaskB, nullptr);\n\n    ret = pthread_join(tidA, nullptr);\n    ret = pthread_join(tidB, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo13a-mutex.cpp",
    "content": "/*\nMUTEXES\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\nint counter = 0;\n\n\n\nvoid* doTask(void*) {\n    sleep(1);\n\n    pthread_mutex_lock(&mut);\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n\n    pthread_mutex_unlock(&mut);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 16;\n    pthread_t lstTid[NUM_THREADS];\n    int ret = 0;\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    ret = pthread_mutex_destroy(&mut);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo13b-mutex-trylock.cpp",
    "content": "/*\nMUTEXES\nLocking with a nonblocking mutex\n\nUse pthread_mutex_trylock to attempt to lock the mutex pointed to by mutex.\n*/\n\n\n#include <iostream>\n#include <pthread.h>\n#include <unistd.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\nint counter = 0;\n\n\n\nvoid* doTask(void*) {\n    int ret = 0;\n    sleep(1);\n\n    ret = pthread_mutex_trylock(&mut);\n\n    if (ret) {\n        /*\n        switch (ret) {\n            case EBUSY:\n                The mutex could not be acquired because the mutex pointed to by mutex\n                was already locked.\n\n            case EAGAIN:\n                The mutex could not be acquired because the maximum number\n                of recursive locks for mutex has been exceeded.\n\n            case EOWNERDEAD:\n                The last owner of this mutex died while holding the mutex.\n                This mutex is now owned by the caller.\n                The caller must attempt to make the state protected by the mutex consistent.\n\n            case ENOTRECOVERABLE:\n                The mutex you are trying to acquire is protecting state left irrecoverable\n                by the mutex's previous owner that died while holding the lock.\n                The mutex has not been acquired.\n                This condition can occur when the lock was previously acquired\n                with EOWNERDEAD and the owner was unable to cleanup the state and\n                had unlocked the mutex without making the mutex state consistent.\n\n            case ENOMEM:\n                The limit on the number of simultaneously held mutexes has been exceeded.\n        }\n        */\n\n        pthread_exit(nullptr);\n        return nullptr;\n    }\n\n    for (int i = 0; i < 10000; ++i)\n        ++counter;\n\n    pthread_mutex_unlock(&mut);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n    pthread_t lstTid[NUM_THREADS];\n  
  int ret = 0;\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // counter can be 10000, 20000 or 30000\n\n    ret = pthread_mutex_destroy(&mut);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo14-synchronized-block.cpp",
    "content": "/*\nSYNCHRONIZED BLOCKS\n\nSynchronized blocks in C++ POSIX threading are not supported by default.\nTo demonstate synchronized blocks, I implement the class LockGuard.\n\nNow, let's see the code:\n    {\n        LockGuard lk(&mutex);\n        // Do something in the critical section\n    }\n\nWhen go to the end of the code block, lk object shall execute it's destructor and release mutex.\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nclass LockGuard {\n\nprivate:\n    pthread_mutex_t* mut = nullptr;\n\nprivate:\n    LockGuard(const LockGuard&) = delete;\n    LockGuard(const LockGuard&&) = delete;\n    void operator=(const LockGuard&) = delete;\n    void operator=(const LockGuard&&) = delete;\n\npublic:\n    LockGuard(pthread_mutex_t* mut) {\n        this->mut = mut; // Assume that mut != nullptr\n        pthread_mutex_lock(this->mut);\n    }\n\n    ~LockGuard() {\n        pthread_mutex_unlock(this->mut);\n    }\n\n};\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\nint counter = 0;\n\n\n\nvoid* doTask(void*) {\n    sleep(1);\n\n    // This is the \"synchronized block\"\n    {\n        LockGuard lk(&mut);\n\n        for (int i = 0; i < 1000; ++i)\n            ++counter;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 16;\n    pthread_t lstTid[NUM_THREADS];\n    int ret = 0;\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    ret = pthread_mutex_destroy(&mut);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo15a-deadlock.cpp",
    "content": "/*\nDEADLOCK\nVersion A\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n\n\nvoid* doTask(void* arg) {\n    auto name = (const char*) arg;\n\n    pthread_mutex_lock(&mut);\n\n    cout << name << \" acquired resource\" << endl;\n\n    // pthread_mutex_unlock(&mut); // Forget this statement ==> deadlock\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidFoo, tidBar;\n    int ret = 0;\n\n    ret = pthread_create(&tidFoo, nullptr, &doTask, (void*)\"foo\");\n    ret = pthread_create(&tidBar, nullptr, &doTask, (void*)\"bar\");\n\n    ret = pthread_join(tidFoo, nullptr);\n    ret = pthread_join(tidBar, nullptr);\n\n    cout << \"You will never see this statement due to deadlock!\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo15b-deadlock.cpp",
    "content": "/*\nDEADLOCK\nVersion B\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mutResourceA = PTHREAD_MUTEX_INITIALIZER;\npthread_mutex_t mutResourceB = PTHREAD_MUTEX_INITIALIZER;\n\n\n\nvoid* foo(void*) {\n    pthread_mutex_lock(&mutResourceA);\n    cout << \"foo acquired resource A\" << endl;\n\n    sleep(1);\n\n    pthread_mutex_lock(&mutResourceB);\n    cout << \"foo acquired resource B\" << endl;\n    pthread_mutex_unlock(&mutResourceB);\n\n    pthread_mutex_unlock(&mutResourceA);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* bar(void*) {\n    pthread_mutex_lock(&mutResourceB);\n    cout << \"bar acquired resource B\" << endl;\n\n    sleep(1);\n\n    pthread_mutex_lock(&mutResourceA);\n    cout << \"bar acquired resource A\" << endl;\n    pthread_mutex_unlock(&mutResourceA);\n\n    pthread_mutex_unlock(&mutResourceB);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidFoo, tidBar;\n    int ret = 0;\n\n    ret = pthread_create(&tidFoo, nullptr, &foo, nullptr);\n    ret = pthread_create(&tidBar, nullptr, &bar, nullptr);\n\n    ret = pthread_join(tidFoo, nullptr);\n    ret = pthread_join(tidBar, nullptr);\n\n    ret = pthread_mutex_destroy(&mutResourceA);\n    ret = pthread_mutex_destroy(&mutResourceB);\n\n    cout << \"You will never see this statement due to deadlock!\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo16-monitor.cpp",
    "content": "/*\nMONITORS\nImplementation of a monitor for managing a counter\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nclass Monitor {\nprivate:\n    pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n    int* pCounter = nullptr;\n\n\npublic:\n    // Should disable copy/move constructors, copy/move assignment operators\n\n\n    void init(int* pCounter) {\n        destroy();\n        mut = PTHREAD_MUTEX_INITIALIZER;\n        this->pCounter = pCounter;\n    }\n\n\n    void increaseCounter() {\n        int ret = 0;\n        ret = pthread_mutex_lock(&mut);\n        (*pCounter) += 1;\n        ret = pthread_mutex_unlock(&mut);\n    }\n\n\n    void destroy() {\n        pthread_mutex_destroy(&mut);\n    }\n};\n\n\n\nvoid* doTask(void* arg) {\n    auto monitor = (Monitor*) arg;\n\n    sleep(1);\n\n    for (int i = 0; i < 1000; ++i)\n        monitor->increaseCounter();\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    int counter = 0;\n    Monitor monitor;\n\n    constexpr int NUM_THREADS = 16;\n    pthread_t lstTid[NUM_THREADS];\n\n    int ret = 0;\n\n    monitor.init(&counter);\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_create(&tid, nullptr, &doTask, &monitor);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    monitor.destroy();\n\n    cout << \"counter = \" << counter << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo17a-reentrant-lock.cpp",
    "content": "/*\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion A: Introduction to reentrant locks\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n\n\nvoid* doTask(void*) {\n    pthread_mutex_lock(&mut);\n    cout << \"First time acquiring the resource\" << endl;\n\n    pthread_mutex_lock(&mut);\n    cout << \"Second time acquiring the resource\" << endl;\n\n    pthread_mutex_unlock(&mut);\n    pthread_mutex_unlock(&mut);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int ret = 0;\n\n    ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n\n    /*\n    The thread tid shall meet deadlock.\n    So, you will never get output \"Second time acquiring the resource\".\n    */\n\n    ret = pthread_join(tid, nullptr);\n\n    ret = pthread_mutex_destroy(&mut);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo17b-reentrant-lock.cpp",
    "content": "/*\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion B: Solving the problem from version A\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n\n\nvoid* doTask(void*) {\n    pthread_mutex_lock(&mut);\n    cout << \"First time acquiring the resource\" << endl;\n\n    pthread_mutex_lock(&mut);\n    cout << \"Second time acquiring the resource\" << endl;\n\n    pthread_mutex_unlock(&mut);\n    pthread_mutex_unlock(&mut);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    pthread_mutexattr_t attr;\n    int ret = 0;\n\n    ret = pthread_mutexattr_init(&attr);\n    ret = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);\n    ret = pthread_mutex_init(&mut, &attr);\n\n    ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n    ret = pthread_join(tid, nullptr);\n\n    ret = pthread_mutexattr_destroy(&attr);\n    ret = pthread_mutex_destroy(&mut);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo17c-reentrant-lock.cpp",
    "content": "/*\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion C: A multithreaded app example\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n\n\nvoid* doTask(void* arg) {\n    char name = *(char*) arg;\n    sleep(1);\n\n    pthread_mutex_lock(&mut);\n    cout << \"First time \" << name << \" acquiring the resource\" << endl;\n\n    pthread_mutex_lock(&mut);\n    cout << \"Second time \" << name << \" acquiring the resource\" << endl;\n\n    pthread_mutex_unlock(&mut);\n    pthread_mutex_unlock(&mut);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n\n    pthread_t lstTid[NUM_THREADS];\n    char lstArg[NUM_THREADS];\n\n    pthread_mutexattr_t attr;\n    int ret = 0;\n\n    ret = pthread_mutexattr_init(&attr);\n    ret = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);\n    ret = pthread_mutex_init(&mut, &attr);\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstArg[i] = char(i + 'A');\n        ret = pthread_create(&lstTid[i], nullptr, &doTask, &lstArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    pthread_mutexattr_destroy(&attr);\n    pthread_mutex_destroy(&mut);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo18a01-barrier.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion A: Cyclic barriers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_barrier_t syncPoint;\n\n\n\nvoid* processRequest(void* argVoid) {\n    auto arg = *(tuple<string,int>*) argVoid;\n    string userName = std::get<0>(arg);\n    int waitTime = std::get<1>(arg);\n\n    sleep(waitTime);\n\n    cout << \"Get request from \" << userName << endl;\n    pthread_barrier_wait(&syncPoint);\n\n    cout << \"Process request for \" << userName << endl;\n    pthread_barrier_wait(&syncPoint);\n\n    cout << \"Done \" << userName << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n    pthread_t lstTid[NUM_THREADS];\n    int ret = 0;\n\n    // tuple<userName, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"lorem\", 1 },\n        { \"ipsum\", 2 },\n        { \"dolor\", 3 },\n    };\n\n    ret = pthread_barrier_init(&syncPoint, nullptr, 3); // participant count = 3\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_create(&lstTid[i], nullptr, &processRequest, &lstArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    ret = pthread_barrier_destroy(&syncPoint);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo18a02-barrier.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion A: Cyclic barriers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_barrier_t syncPoint;\n\n\n\nvoid* processRequest(void* argVoid) {\n    auto arg = *(tuple<string,int>*) argVoid;\n    string userName = std::get<0>(arg);\n    int waitTime = std::get<1>(arg);\n\n    sleep(waitTime);\n\n    cout << \"Get request from \" << userName << endl;\n    pthread_barrier_wait(&syncPoint);\n\n    cout << \"Process request for \" << userName << endl;\n    pthread_barrier_wait(&syncPoint);\n\n    cout << \"Done \" << userName << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 4;\n    pthread_t lstTid[NUM_THREADS];\n    int ret = 0;\n\n    // tuple<userName, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"lorem\", 1 },\n        { \"ipsum\", 3 },\n        { \"dolor\", 3 },\n        { \"amet\", 10 }\n    };\n\n    ret = pthread_barrier_init(&syncPoint, nullptr, 2); // participant count = 2\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_create(&lstTid[i], nullptr, &processRequest, &lstArg[i]);\n    }\n\n    // Thread with userName = \"amet\" shall be FREEZED\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    ret = pthread_barrier_destroy(&syncPoint);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo18a03-barrier.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion A: Cyclic barriers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_barrier_t syncPointA;\npthread_barrier_t syncPointB;\n\n\n\nvoid* processRequest(void* argVoid) {\n    auto arg = *(tuple<string,int>*) argVoid;\n    string userName = std::get<0>(arg);\n    int waitTime = std::get<1>(arg);\n\n    sleep(waitTime);\n\n    cout << \"Get request from \" << userName << endl;\n    pthread_barrier_wait(&syncPointA);\n\n    cout << \"Process request for \" << userName << endl;\n    pthread_barrier_wait(&syncPointB);\n\n    cout << \"Done \" << userName << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 4;\n    pthread_t lstTid[NUM_THREADS];\n    int ret = 0;\n\n    // tuple<userName, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"lorem\", 1 },\n        { \"ipsum\", 3 },\n        { \"dolor\", 3 },\n        { \"amet\", 10 }\n    };\n\n    ret = pthread_barrier_init(&syncPointA, nullptr, 2);\n    ret = pthread_barrier_init(&syncPointB, nullptr, 2);\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_create(&lstTid[i], nullptr, &processRequest, &lstArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    ret = pthread_barrier_destroy(&syncPointA);\n    ret = pthread_barrier_destroy(&syncPointB);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo18b01-latch.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion B: Count-down latches\n\nCount-down latches in C++ POSIX threading are not supported by default.\nSo, I use mylib::CountDownLatch for this demonstration.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-latch.hpp\"\nusing namespace std;\n\n\n\nmylib::CountDownLatch syncPoint(3);\n\n\n\nvoid* processRequest(void* argVoid) {\n    auto arg = *(tuple<string,int>*) argVoid;\n    string userName = std::get<0>(arg);\n    int waitTime = std::get<1>(arg);\n\n    sleep(waitTime);\n\n    cout << \"Get request from \" << userName << endl;\n\n    syncPoint.countDown();\n    syncPoint.wait();\n\n    cout << \"Done \" << userName << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n    pthread_t lstTid[NUM_THREADS];\n    int ret = 0;\n\n    // tuple<userName, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"lorem\", 1 },\n        { \"ipsum\", 2 },\n        { \"dolor\", 3 }\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_create(&lstTid[i], nullptr, &processRequest, &lstArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo18b02-latch.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion B: Count-down latches\n\nMain thread waits for 3 child threads to get enough data to progress.\n\nCount-down latches in C++ POSIX threading are not supported by default.\nSo, I use mylib::CountDownLatch for this demonstration.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-latch.hpp\"\nusing namespace std;\n\n\n\nconstexpr int NUM_THREADS = 3;\nmylib::CountDownLatch syncPoint(NUM_THREADS);\n\n\n\nvoid* doTask(void* argVoid) {\n    auto arg = *(tuple<string,int>*) argVoid;\n    string message = std::get<0>(arg);\n    int waitTime = std::get<1>(arg);\n\n    sleep(waitTime);\n\n    cout << message << endl;\n    syncPoint.countDown();\n\n    sleep(8);\n    cout << \"Cleanup\" << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t lstTid[NUM_THREADS];\n    int ret = 0;\n\n    // tuple<message, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"Send request to egg.net to get data\", 6 },\n        { \"Send request to foo.org to get data\", 2 },\n        { \"Send request to bar.com to get data\", 4 }\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_create(&lstTid[i], nullptr, &doTask, &lstArg[i]);\n    }\n\n    syncPoint.wait();\n    cout << \"\\nNow we have enough data to progress to next step\\n\" << endl;\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo19-read-write-lock.cpp",
    "content": "/*\nREAD-WRITE LOCKS\n\nLock for reading\n    A thread can hold multiple concurrent read locks on the rwlock object\n    (that is, successfully call the pthread_rwlock_rdlock subroutine n times).\n    If so, the thread must perform matching unlocks\n    (that is, it must call the pthread_rwlock_unlock subroutine n times).\n\n    There is a function that supports non-blocking:\n        pthread_rwlock_tryrdlock\n\nLock for writing\n    The pthread_rwlock_wrlock subroutine applies a write lock to the read/write lock\n    referenced by the rwlock object.\n    The calling thread acquires the write lock if no other thread (reader or writer)\n    holds the read/write lock on the rwlock object.\n    Otherwise, the thread does not return from the pthread_rwlock_wrlock call\n    until it can acquire the lock.\n\n    There is a function that supports non-blocking:\n        pthread_rwlock_trywrlock\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\n#include \"../cpp-std/mylib-random.hpp\"\nusing namespace std;\n\n\n\nvolatile int resource = 0;\npthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;\n\n\n\nvoid* readFunc(void* arg) {\n    auto waitTime = *(int*) arg;\n    sleep(waitTime);\n\n    pthread_rwlock_rdlock(&rwlock);\n\n    cout << \"read: \" << resource << endl;\n\n    pthread_rwlock_unlock(&rwlock);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* writeFunc(void* arg) {\n    auto waitTime = *(int*) arg;\n    sleep(waitTime);\n\n    pthread_rwlock_wrlock(&rwlock);\n\n    resource = mylib::RandInt::get(100);\n    cout << \"write: \" << resource << endl;\n\n    pthread_rwlock_unlock(&rwlock);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS_READ = 10;\n    constexpr int NUM_THREADS_WRITE = 4;\n    constexpr int NUM_ARGS = 3;\n\n    pthread_t lstTidRead[NUM_THREADS_READ];\n    pthread_t lstTidWrite[NUM_THREADS_WRITE];\n    int lstArg[NUM_ARGS];\n    int ret = 
0;\n\n\n    // INITIALIZE\n    for (int i = 0; i < NUM_ARGS; ++i) {\n        lstArg[i] = i;\n    }\n\n\n    // CREATE THREADS\n    for (auto&& tid : lstTidRead) {\n        int argIndex = mylib::RandInt::get(NUM_ARGS);\n        ret = pthread_create(&tid, nullptr, &readFunc, &lstArg[argIndex]);\n    }\n\n    for (auto&& tid : lstTidWrite) {\n        int argIndex = mylib::RandInt::get(NUM_ARGS);\n        ret = pthread_create(&tid, nullptr, &writeFunc, &lstArg[argIndex]);\n    }\n\n\n    // JOIN THREADS\n    for (auto&& tid : lstTidRead) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    for (auto&& tid : lstTidWrite) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n\n    pthread_rwlock_destroy(&rwlock);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo20a01-semaphore.cpp",
    "content": "/*\nSEMAPHORES\nVersion A: Paper sheets and packages\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\n#include <semaphore.h>\nusing namespace std;\n\n\n\nsem_t semPackage;\n\n\n\nvoid* makeOneSheet(void*) {\n    for (int i = 0; i < 4; ++i) {\n        cout << \"Make 1 sheet\" << endl;\n        sleep(1);\n        sem_post(&semPackage);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* combineOnePackage(void*) {\n    for (int i = 0; i < 4; ++i) {\n        sem_wait(&semPackage);\n        sem_wait(&semPackage);\n        cout << \"Combine 2 sheets into 1 package\" << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidMakeSheetA, tidMakeSheetB, tidCombinePackage;\n    int ret = 0;\n\n    ret = sem_init(&semPackage, 0, 0);\n\n    ret = pthread_create(&tidMakeSheetA, nullptr, &makeOneSheet, nullptr);\n    ret = pthread_create(&tidMakeSheetB, nullptr, &makeOneSheet, nullptr);\n    ret = pthread_create(&tidCombinePackage, nullptr, &combineOnePackage, nullptr);\n\n    ret = pthread_join(tidMakeSheetA, nullptr);\n    ret = pthread_join(tidMakeSheetB, nullptr);\n    ret = pthread_join(tidCombinePackage, nullptr);\n\n    ret = sem_destroy(&semPackage);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo20a02-semaphore.cpp",
    "content": "/*\nSEMAPHORES\nVersion A: Paper sheets and packages\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\n#include <semaphore.h>\nusing namespace std;\n\n\n\nsem_t semPackage;\nsem_t semSheet;\n\n\n\nvoid* makeOneSheet(void*) {\n    for (int i = 0; i < 4; ++i) {\n        sem_wait(&semSheet);\n        cout << \"Make 1 sheet\" << endl;\n        sem_post(&semPackage);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* combineOnePackage(void*) {\n    for (int i = 0; i < 4; ++i) {\n        sem_wait(&semPackage);\n        sem_wait(&semPackage);\n\n        cout << \"Combine 2 sheets into 1 package\" << endl;\n        sleep(2);\n\n        sem_post(&semSheet);\n        sem_post(&semSheet);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidMakeSheetA, tidMakeSheetB, tidCombinePackage;\n    int ret = 0;\n\n    ret = sem_init(&semPackage, 0, 0);\n    ret = sem_init(&semSheet, 0, 2);\n\n    ret = pthread_create(&tidMakeSheetA, nullptr, &makeOneSheet, nullptr);\n    ret = pthread_create(&tidMakeSheetB, nullptr, &makeOneSheet, nullptr);\n    ret = pthread_create(&tidCombinePackage, nullptr, &combineOnePackage, nullptr);\n\n    ret = pthread_join(tidMakeSheetA, nullptr);\n    ret = pthread_join(tidMakeSheetB, nullptr);\n    ret = pthread_join(tidCombinePackage, nullptr);\n\n    ret = sem_destroy(&semPackage);\n    ret = sem_destroy(&semSheet);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo20a03-semaphore-deadlock.cpp",
    "content": "/*\nSEMAPHORES\nVersion A: Paper sheets and packages\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\n#include <semaphore.h>\nusing namespace std;\n\n\n\nsem_t semPackage;\nsem_t semSheet;\n\n\n\nvoid* makeOneSheet(void*) {\n    for (int i = 0; i < 4; ++i) {\n        sem_wait(&semSheet);\n        cout << \"Make 1 sheet\" << endl;\n        sem_post(&semPackage);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* combineOnePackage(void*) {\n    for (int i = 0; i < 4; ++i) {\n        sem_wait(&semPackage);\n        sem_wait(&semPackage);\n\n        cout << \"Combine 2 sheets into 1 package\" << endl;\n        sleep(2);\n\n        sem_post(&semSheet);\n        // Missing one statement: sem_post(&semSheet) ==> deadlock\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidMakeSheetA, tidMakeSheetB, tidCombinePackage;\n    int ret = 0;\n\n    ret = sem_init(&semPackage, 0, 0);\n    ret = sem_init(&semSheet, 0, 2);\n\n    ret = pthread_create(&tidMakeSheetA, nullptr, &makeOneSheet, nullptr);\n    ret = pthread_create(&tidMakeSheetB, nullptr, &makeOneSheet, nullptr);\n    ret = pthread_create(&tidCombinePackage, nullptr, &combineOnePackage, nullptr);\n\n    ret = pthread_join(tidMakeSheetA, nullptr);\n    ret = pthread_join(tidMakeSheetB, nullptr);\n    ret = pthread_join(tidCombinePackage, nullptr);\n\n    ret = sem_destroy(&semPackage);\n    ret = sem_destroy(&semSheet);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo20b-semaphore.cpp",
    "content": "/*\nSEMAPHORES\nVersion B: Tires and chassis\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\n#include <semaphore.h>\nusing namespace std;\n\n\n\nsem_t semTire;\nsem_t semChassis;\n\n\n\nvoid* makeTire(void*) {\n    for (int i = 0; i < 8; ++i) {\n        sem_wait(&semTire);\n\n        cout << \"Make 1 tire\" << endl;\n        sleep(1);\n\n        sem_post(&semChassis);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* makeChassis(void*) {\n    for (int i = 0; i < 4; ++i) {\n        sem_wait(&semChassis);\n        sem_wait(&semChassis);\n        sem_wait(&semChassis);\n        sem_wait(&semChassis);\n\n        cout << \"Make 1 chassis\" << endl;\n        sleep(3);\n\n        sem_post(&semTire);\n        sem_post(&semTire);\n        sem_post(&semTire);\n        sem_post(&semTire);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidTireA, tidTireB, tidChassis;\n    int ret = 0;\n\n    ret = sem_init(&semTire, 0, 4);\n    ret = sem_init(&semChassis, 0, 0);\n\n    ret = pthread_create(&tidTireA, nullptr, &makeTire, nullptr);\n    ret = pthread_create(&tidTireB, nullptr, &makeTire, nullptr);\n    ret = pthread_create(&tidChassis, nullptr, &makeChassis, nullptr);\n\n    ret = pthread_join(tidTireA, nullptr);\n    ret = pthread_join(tidTireB, nullptr);\n    ret = pthread_join(tidChassis, nullptr);\n\n    ret = sem_destroy(&semTire);\n    ret = sem_destroy(&semChassis);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo21a01-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\npthread_cond_t conditionVar = PTHREAD_COND_INITIALIZER;\n\n\n\nvoid* foo(void*) {\n    cout << \"foo is waiting...\" << endl;\n\n    pthread_mutex_lock(&mut);\n    pthread_cond_wait(&conditionVar, &mut);\n\n    cout << \"foo resumed\" << endl;\n\n    pthread_mutex_unlock(&mut);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* bar(void*) {\n    sleep(3);\n    pthread_cond_signal(&conditionVar);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidFoo, tidBar;\n    int ret = 0;\n\n    ret = pthread_create(&tidFoo, nullptr, &foo, nullptr);\n    ret = pthread_create(&tidBar, nullptr, &bar, nullptr);\n\n    ret = pthread_join(tidFoo, nullptr);\n    ret = pthread_join(tidBar, nullptr);\n\n    ret = pthread_cond_destroy(&conditionVar);\n    ret = pthread_mutex_destroy(&mut);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo21a02-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\npthread_cond_t conditionVar = PTHREAD_COND_INITIALIZER;\n\nconstexpr int NUM_TH_FOO = 3;\n\n\n\nvoid* foo(void*) {\n    cout << \"foo is waiting...\" << endl;\n\n    pthread_mutex_lock(&mut);\n    pthread_cond_wait(&conditionVar, &mut);\n\n    cout << \"foo resumed\" << endl;\n\n    pthread_mutex_unlock(&mut);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* bar(void*) {\n    for (int i = 0; i < NUM_TH_FOO; ++i) {\n        sleep(2);\n        pthread_cond_signal(&conditionVar);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t lstTidFoo[NUM_TH_FOO];\n    pthread_t tidBar;\n\n    int ret = 0;\n\n    for (auto&& tidFoo : lstTidFoo) {\n        ret = pthread_create(&tidFoo, nullptr, &foo, nullptr);\n    }\n\n    ret = pthread_create(&tidBar, nullptr, &bar, nullptr);\n\n    for (auto&& tidFoo : lstTidFoo) {\n        ret = pthread_join(tidFoo, nullptr);\n    }\n\n    ret = pthread_join(tidBar, nullptr);\n\n    ret = pthread_cond_destroy(&conditionVar);\n    ret = pthread_mutex_destroy(&mut);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo21a03-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\npthread_cond_t conditionVar = PTHREAD_COND_INITIALIZER;\n\nconstexpr int NUM_TH_FOO = 3;\n\n\n\nvoid* foo(void*) {\n    cout << \"foo is waiting...\" << endl;\n\n    pthread_mutex_lock(&mut);\n    pthread_cond_wait(&conditionVar, &mut);\n\n    cout << \"foo resumed\" << endl;\n\n    pthread_mutex_unlock(&mut);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* bar(void*) {\n    sleep(3);\n    // Notify all waiting threads\n    pthread_cond_broadcast(&conditionVar);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t lstTidFoo[NUM_TH_FOO];\n    pthread_t tidBar;\n\n    int ret = 0;\n\n    for (auto&& tidFoo : lstTidFoo) {\n        ret = pthread_create(&tidFoo, nullptr, &foo, nullptr);\n    }\n\n    ret = pthread_create(&tidBar, nullptr, &bar, nullptr);\n\n    for (auto&& tidFoo : lstTidFoo) {\n        ret = pthread_join(tidFoo, nullptr);\n    }\n\n    ret = pthread_join(tidBar, nullptr);\n\n    ret = pthread_cond_destroy(&conditionVar);\n    ret = pthread_mutex_destroy(&mut);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo21b-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\npthread_cond_t conditionVar = PTHREAD_COND_INITIALIZER;\n\nint counter = 0;\n\nconstexpr int COUNT_HALT_01 = 3;\nconstexpr int COUNT_HALT_02 = 6;\nconstexpr int COUNT_DONE = 10;\n\n\n\n// Write numbers 1-3 and 8-10 as permitted by egg()\nvoid* foo(void*) {\n    for (;;) {\n        // Lock the mutex before waiting on the condition variable\n        pthread_mutex_lock(&mut);\n\n        // Wait while egg() operates on counter;\n        // pthread_cond_wait() releases the mutex while blocked and\n        // re-acquires it before returning once egg() signals\n        pthread_cond_wait(&conditionVar, &mut);\n\n        ++counter;\n        cout << \"foo counter = \" << counter << endl;\n\n        if (counter >= COUNT_DONE) {\n            pthread_mutex_unlock(&mut);\n            pthread_exit(nullptr);\n            return nullptr;\n        }\n\n        pthread_mutex_unlock(&mut);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\n// Write numbers 4-7\nvoid* egg(void*) {\n    for (;;) {\n        pthread_mutex_lock(&mut);\n\n        if (counter < COUNT_HALT_01 || counter > COUNT_HALT_02) {\n            // Signal the waiting thread; it resumes once this mutex is released\n            // Note: foo() is now permitted to modify \"counter\"\n            pthread_cond_signal(&conditionVar);\n        }\n        else {\n            ++counter;\n            cout << \"egg counter = \" << counter << endl;\n        }\n\n        if (counter >= COUNT_DONE) {\n            pthread_mutex_unlock(&mut);\n            pthread_exit(nullptr);\n            return nullptr;\n        }\n\n        pthread_mutex_unlock(&mut);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidFoo, tidEgg;\n    int ret = 0;\n\n    ret = pthread_create(&tidFoo, nullptr, &foo, nullptr);\n    ret = pthread_create(&tidEgg, nullptr, &egg, nullptr);\n\n    ret = pthread_join(tidFoo, 
nullptr);\n    ret = pthread_join(tidEgg, nullptr);\n\n    ret = pthread_cond_destroy(&conditionVar);\n    ret = pthread_mutex_destroy(&mut);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo22a-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUES\nVersion A: A slow producer and a fast consumer\n\nBlocking queues in C++ POSIX threading are not supported by default.\nSo, I use mylib::BlockingQueue for this demonstration.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid* producer(void* arg) {\n    auto blkQueue = (BlockingQueue<string>*) arg;\n\n    sleep(2);\n    blkQueue->put(\"Alice\");\n\n    sleep(2);\n    blkQueue->put(\"likes\");\n\n    sleep(2);\n    blkQueue->put(\"singing\");\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* arg) {\n    auto blkQueue = (BlockingQueue<string>*) arg;\n    string data;\n\n    for (int i = 0; i < 3; ++i) {\n        cout << \"\\nWaiting for data...\" << endl;\n        data = blkQueue->take();\n        cout << \"    \" << data << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidProducer, tidConsumer;\n    auto blkQueue = BlockingQueue<string>();\n\n    int ret = 0;\n\n    ret = pthread_create(&tidProducer, nullptr, &producer, &blkQueue);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, &blkQueue);\n\n    ret = pthread_join(tidProducer, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo22b-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUES\nVersion B: A fast producer and a slow consumer\n\nBlocking queues in C++ POSIX threading are not supported by default.\nSo, I use mylib::BlockingQueue for this demonstration.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid* producer(void* arg) {\n    auto blkQueue = (BlockingQueue<string>*) arg;\n\n    blkQueue->put(\"Alice\");\n    blkQueue->put(\"likes\");\n\n    /*\n    Due to reaching the maximum capacity = 2, when executing blkQueue->put(\"singing\"),\n    this thread blocks until the consumer takes an element from the queue.\n    */\n\n    blkQueue->put(\"singing\");\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* arg) {\n    auto blkQueue = (BlockingQueue<string>*) arg;\n    string data;\n\n    sleep(2);\n\n    for (int i = 0; i < 3; ++i) {\n        cout << \"\\nWaiting for data...\" << endl;\n        data = blkQueue->take();\n        cout << \"    \" << data << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidProducer, tidConsumer;\n    auto blkQueue = BlockingQueue<string>(2); // blocking queue with capacity = 2\n\n    int ret = 0;\n\n    ret = pthread_create(&tidProducer, nullptr, &producer, &blkQueue);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, &blkQueue);\n\n    ret = pthread_join(tidProducer, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo23a-thread-local.cpp",
    "content": "/*\nTHREAD-LOCAL STORAGE\nIntroduction\n\nThe code is specific for gcc.\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\n__thread int value = 123;\n\n\n\nvoid* doTask(void* arg) {\n    cout << value << endl;\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int ret = 0;\n\n    // Main thread sets value = 999\n    value = 999;\n    cout << value << endl;\n\n    // Child thread gets value\n    // Expected output: 123\n    ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n    ret = pthread_join(tid, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo23b-thread-local.cpp",
    "content": "/*\nTHREAD-LOCAL STORAGE\nAvoiding synchronization using thread-local storage\n\nThe code is specific for gcc.\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\n__thread int counter = 0;\n\n\n\nvoid* doTask(void* arg) {\n    auto t = *(int*) arg;\n\n    sleep(1);\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n\n    cout << \"Thread \" << t << \" gives counter = \" << counter << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n\n    pthread_t lstTid[NUM_THREADS];\n    int lstArg[NUM_THREADS];\n\n    int ret = 0;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstArg[i] = i;\n        ret = pthread_create(&lstTid[i], nullptr, &doTask, &lstArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo24-volatile.cpp",
    "content": "/*\nTHE VOLATILE KEYWORD\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvolatile bool isRunning;\n\n\n\nvoid* doTask(void*) {\n    while (isRunning) {\n        cout << \"Running...\" << endl;\n        sleep(2);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n    int ret = 0;\n\n    isRunning = true;\n    ret = pthread_create(&tid, nullptr, &doTask, nullptr);\n\n    sleep(6);\n    isRunning = false;\n\n    ret = pthread_join(tid, nullptr);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo25a-atomic.c",
    "content": "/*\nATOMIC ACCESS\n\nIn this demo, I use raw C language (not C++).\n*/\n\n\n#include <stdio.h>\n#include <pthread.h>\n#include <unistd.h>\n\n\n\nvolatile int counter;\n\n\n\nvoid* doTask(void* arg) {\n    sleep(1);\n    counter += 1;\n\n    pthread_exit(NULL);\n    return NULL;\n}\n\n\n\nint main() {\n    counter = 0;\n\n    pthread_t lstTid[1000];\n    int ret = 0;\n\n    for (int i = 0; i < 1000; ++i) {\n        ret = pthread_create(&lstTid[i], NULL, &doTask, NULL);\n    }\n\n    for (int i = 0; i < 1000; ++i) {\n        ret = pthread_join(lstTid[i], NULL);\n    }\n\n    // Unpredictable result\n    printf(\"counter = %d \\n\", counter);\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demo25b-atomic.c",
    "content": "/*\nATOMIC ACCESS\n\nIn this demo, I use raw C language (not C++).\n*/\n\n\n#include <stdio.h>\n#include <pthread.h>\n#include <stdatomic.h> // C11 atomic\n#include <unistd.h>\n\n\n\natomic_int counter;\n\n\n\nvoid* doTask(void* arg) {\n    sleep(1);\n    atomic_fetch_add(&counter, 1);\n\n    pthread_exit(NULL);\n    return NULL;\n}\n\n\n\nint main() {\n    atomic_store(&counter, 0); // Assign counter = 0\n\n    pthread_t lstTid[1000];\n    int ret = 0;\n\n    for (int i = 0; i < 1000; ++i) {\n        ret = pthread_create(&lstTid[i], NULL, &doTask, NULL);\n    }\n\n    for (int i = 0; i < 1000; ++i) {\n        ret = pthread_join(lstTid[i], NULL);\n    }\n\n    // counter = 1000\n    printf(\"counter = %d \\n\", counter);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demoex-attribute.cpp",
    "content": "#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid* doTask(void* ptrId) {\n    int id = *(int*)ptrId;\n\n    cout << \"Sleeping in thread \" << id << endl;\n    sleep(1);\n\n    cout << \"Thread with id \" << id << \" exiting...\" << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n\n    pthread_t tid[NUM_THREADS];\n    pthread_attr_t attr;\n    int arg[NUM_THREADS];\n\n    int ret = 0;\n\n    // Initialize and set thread joinable\n    pthread_attr_init(&attr);\n    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE); // PTHREAD_CREATE_JOINABLE or PTHREAD_CREATE_DETACHED\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        arg[i] = i;\n        ret = pthread_create(&tid[i], &attr, &doTask, &arg[i]);\n    }\n\n    void* status = nullptr;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_join(tid[i], &status);\n        cout << \"completed thread id \" << i << \" with status \" << status << endl;\n    }\n\n    ret = pthread_attr_destroy(&attr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demoex-oop.cpp",
    "content": "#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\nclass Task {\nprivate:\n    pthread_t tid;\npublic:\n    int index;\n\n\npublic:\n    Task(const Task& other) = delete;\n    Task(const Task&& other) = delete;\n    void operator=(const Task& other) = delete;\n    void operator=(const Task&& other) = delete;\n\n\n    Task(int index = -1): index(index) {\n\n    }\n\n\n    int start() {\n        int ret = pthread_create(&tid, nullptr, &work, (void*)this);\n        return ret;\n    }\n\n\n    int join() {\n        int ret = pthread_join(tid, nullptr);\n        return ret;\n    }\n\n\nprivate:\n    static void* work(void* arg) {\n        auto thisPtr = (Task*) arg;\n\n        cout << thisPtr->index << endl;\n\n        pthread_exit(nullptr);\n        return nullptr;\n    }\n};\n\n\n\nint main() {\n    constexpr int NUM_TASKS = 3;\n    Task task[NUM_TASKS];\n\n    for (int i = 0; i < NUM_TASKS; ++i) {\n        task[i].index = i;\n    }\n\n    for (int i = 0; i < NUM_TASKS; ++i) {\n        task[i].start();\n    }\n\n    for (int i = 0; i < NUM_TASKS; ++i) {\n        task[i].join();\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/demoex-signal.cpp",
    "content": "#include <iostream>\n#include <unistd.h>\n#include <signal.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nvoid signalHandler(int sig) {\n    cout << \"caught signal \" << sig << endl;\n    // signal(SIGSEGV, signalHandler);\n}\n\n\n\nvoid* func(void* arg) {\n    cout << \"foo\" << endl;\n    sleep(5);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tid;\n\n    signal(SIGSEGV, signalHandler); // Register signal handler before going multithread\n\n    pthread_create(&tid, nullptr, &func, nullptr);\n    sleep(1); // Leave time for initialization\n\n    pthread_kill(tid, SIGSEGV);\n\n    pthread_join(tid, NULL);\n\n    cout << \"bar\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer01a-max-div.cpp",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\n\n#include <iostream>\n#include \"../cpp-std/mylib-time.hpp\"\nusing namespace std;\n\n\n\nint main() {\n    constexpr int RANGE_START = 1;\n    constexpr int RANGE_END = 100000;\n\n    int resValue = 0;\n    int resNumDiv = 0;  // number of divisors of result\n\n    auto tpStart = mylib::HiResClock::now();\n\n\n    for (int i = RANGE_START; i <= RANGE_END; ++i) {\n        int numDiv = 0;\n\n        for (int j = i / 2; j > 0; --j)\n            if (i % j == 0)\n                ++numDiv;\n\n        if (resNumDiv < numDiv) {\n            resNumDiv = numDiv;\n            resValue = i;\n        }\n    }\n\n\n    auto timeElapsed = mylib::HiResClock::getTimeSpan(tpStart);\n\n    cout << \"The integer with the largest number of divisors is \" << resValue << endl;\n    cout << \"The largest number of divisors is \" << resNumDiv << endl;\n    cout << \"Time elapsed = \" << timeElapsed.count() << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer01b-max-div.cpp",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <algorithm>\n#include <pthread.h>\n#include \"../cpp-std/mylib-time.hpp\"\nusing namespace std;\n\n\n\nstruct WorkerResult {\n    int value;\n    int numDiv;\n\n    WorkerResult(int value = 0, int numDiv = 0): value(value), numDiv(numDiv)\n    {\n    }\n};\n\n\n\nstruct WorkerArg {\n    int iStart;\n    int iEnd;\n    WorkerResult *res;\n\n    WorkerArg(int iStart = 0, int iEnd = 0, WorkerResult* res = nullptr):\n        iStart(iStart), iEnd(iEnd), res(res)\n    {\n    }\n};\n\n\n\nvoid* workerFunc(void* argVoid) {\n    auto arg = (WorkerArg*) argVoid;\n    auto res = arg->res;\n\n    int resValue = 0;\n    int resNumDiv = 0;\n\n    for (int i = arg->iStart; i <= arg->iEnd; ++i) {\n        int numDiv = 0;\n\n        for (int j = i / 2; j > 0; --j)\n            if (i % j == 0)\n                ++numDiv;\n\n        if (resNumDiv < numDiv) {\n            resNumDiv = numDiv;\n            resValue = i;\n        }\n    }\n\n    (*res) = WorkerResult(resValue, resNumDiv);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid prepare(\n    int rangeStart, int rangeEnd,\n    int numThreads,\n    vector<pthread_t>& lstTid,\n    vector<WorkerArg>& lstWorkerArg,\n    vector<WorkerResult>& lstWorkerRes\n) {\n    lstTid.resize(numThreads);\n    lstWorkerArg.resize(numThreads);\n    lstWorkerRes.resize(numThreads);\n\n    int rangeA, rangeB, rangeBlock;\n\n    rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\n    rangeA = rangeStart;\n\n    for (int i = 0; i < numThreads; ++i, rangeA += rangeBlock) {\n        rangeB = rangeA + rangeBlock - 1;\n\n        if (i == numThreads - 1)\n            rangeB = rangeEnd;\n\n        lstWorkerArg[i] = WorkerArg(rangeA, rangeB, &lstWorkerRes[i]);\n    }\n}\n\n\n\nint main() {\n    constexpr int RANGE_START = 1;\n    constexpr int RANGE_END = 100000;\n    constexpr int NUM_THREADS = 8;\n\n    vector<pthread_t> lstTid;\n    
vector<WorkerArg> lstWorkerArg;\n    vector<WorkerResult> lstWorkerRes;\n\n    int ret = 0;\n\n    prepare(RANGE_START, RANGE_END, NUM_THREADS, lstTid, lstWorkerArg, lstWorkerRes);\n\n    auto tpStart = mylib::HiResClock::now();\n\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_create(&lstTid[i], nullptr, &workerFunc, &lstWorkerArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n    // for (auto&& res: lstWorkerRes) {\n    //     cout << res.value << \"  \" << res.numDiv << endl;\n    // }\n\n    auto finalRes = *max_element(lstWorkerRes.begin(), lstWorkerRes.end(),\n        [](const WorkerResult &lhs, const WorkerResult &rhs) -> bool {\n            return lhs.numDiv < rhs.numDiv;\n        }\n    );\n\n\n    auto timeElapsed = mylib::HiResClock::getTimeSpan(tpStart);\n\n    cout << \"The integer with the largest number of divisors is \" << finalRes.value << endl;\n    cout << \"The largest number of divisors is \" << finalRes.numDiv << endl;\n    cout << \"Time elapsed = \" << timeElapsed.count() << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer01c-max-div.cpp",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <algorithm>\n#include <pthread.h>\n#include \"../cpp-std/mylib-time.hpp\"\nusing namespace std;\n\n\n\nclass FinalResult {\npublic:\n    int value = 0;\n    int numDiv = 0;\n\nprivate:\n    pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n\npublic:\n    void update(int value, int numDiv) {\n        pthread_mutex_lock(&mut);\n\n        if (this->numDiv < numDiv) {\n            this->numDiv = numDiv;\n            this->value = value;\n        }\n\n        pthread_mutex_unlock(&mut);\n    }\n\n    void init() {\n        destroy();\n        mut = PTHREAD_MUTEX_INITIALIZER;\n    }\n\n    void destroy() {\n        pthread_mutex_destroy(&mut);\n    }\n};\n\n\n\nstruct WorkerArg {\n    int iStart;\n    int iEnd;\n    FinalResult *res;\n\n    WorkerArg(int iStart = 0, int iEnd = 0, FinalResult* res = nullptr):\n        iStart(iStart), iEnd(iEnd), res(res)\n    {\n    }\n};\n\n\n\nvoid* workerFunc(void* argVoid) {\n    auto arg = (WorkerArg*) argVoid;\n    auto res = arg->res;\n\n    int resValue = 0;\n    int resNumDiv = 0;\n\n    for (int i = arg->iStart; i <= arg->iEnd; ++i) {\n        int numDiv = 0;\n\n        for (int j = i / 2; j > 0; --j)\n            if (i % j == 0)\n                ++numDiv;\n\n        if (resNumDiv < numDiv) {\n            resNumDiv = numDiv;\n            resValue = i;\n        }\n    }\n\n    res->update(resValue, resNumDiv);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid prepare(\n    int rangeStart, int rangeEnd,\n    int numThreads,\n    vector<pthread_t>& lstTid,\n    vector<WorkerArg>& lstWorkerArg,\n    FinalResult* finalRes\n) {\n    lstTid.resize(numThreads);\n    lstWorkerArg.resize(numThreads);\n\n    int rangeA, rangeB, rangeBlock;\n\n    rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\n    rangeA = rangeStart;\n\n    for (int i = 0; i < numThreads; ++i, rangeA += rangeBlock) {\n        rangeB = rangeA + 
rangeBlock - 1;\n\n        if (i == numThreads - 1)\n            rangeB = rangeEnd;\n\n        lstWorkerArg[i] = WorkerArg(rangeA, rangeB, finalRes);\n    }\n}\n\n\n\nint main() {\n    constexpr int RANGE_START = 1;\n    constexpr int RANGE_END = 100000;\n    constexpr int NUM_THREADS = 8;\n\n    vector<pthread_t> lstTid;\n    vector<WorkerArg> lstWorkerArg;\n\n    FinalResult finalRes;\n    int ret = 0;\n\n    finalRes.init();\n    prepare(RANGE_START, RANGE_END, NUM_THREADS, lstTid, lstWorkerArg, &finalRes);\n\n    auto tpStart = mylib::HiResClock::now();\n\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        ret = pthread_create(&lstTid[i], nullptr, &workerFunc, &lstWorkerArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n\n\n    finalRes.destroy();\n    auto timeElapsed = mylib::HiResClock::getTimeSpan(tpStart);\n\n    cout << \"The integer with the largest number of divisors is \" << finalRes.value << endl;\n    cout << \"The largest number of divisors is \" << finalRes.numDiv << endl;\n    cout << \"Time elapsed = \" << timeElapsed.count() << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer02a01-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A01: 1 slow producer, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid* producer(void* arg) {\n    auto blkq = (BlockingQueue<int>*) arg;\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i);\n        sleep(1);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* arg) {\n    auto blkq = (BlockingQueue<int>*) arg;\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << data << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidProducer, tidConsumer;\n    BlockingQueue<int> blkq;\n\n    int ret = 0;\n\n    ret = pthread_create(&tidProducer, nullptr, &producer, &blkq);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, &blkq);\n\n    ret = pthread_join(tidProducer, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer02a02-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A02: 2 slow producers, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid* producer(void* arg) {\n    auto blkq = (BlockingQueue<int>*) arg;\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i);\n        sleep(1);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* arg) {\n    auto blkq = (BlockingQueue<int>*) arg;\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << data << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidProducerA, tidProducerB;\n    pthread_t tidConsumer;\n    BlockingQueue<int> blkq;\n\n    int ret = 0;\n\n    ret = pthread_create(&tidProducerA, nullptr, &producer, &blkq);\n    ret = pthread_create(&tidProducerB, nullptr, &producer, &blkq);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, &blkq);\n\n    ret = pthread_join(tidProducerA, nullptr);\n    ret = pthread_join(tidProducerB, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer02a03-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A03: 1 slow producer, 2 fast consumers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nstruct ConsumerArg {\n    string name;\n    BlockingQueue<int> *blkq;\n};\n\n\n\nvoid* producer(void* arg) {\n    auto blkq = (BlockingQueue<int>*) arg;\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i);\n        sleep(1);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* argVoid) {\n    auto arg = (ConsumerArg*) argVoid;\n    auto blkq = arg->blkq;\n\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << arg->name << \": \" << data << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    pthread_t tidProducer;\n    pthread_t tidConsumerFoo, tidConsumerBar;\n    BlockingQueue<int> blkq;\n\n    int ret = 0;\n\n    ConsumerArg argFoo = { \"foo\", &blkq };\n    ConsumerArg argBar = { \"bar\", &blkq };\n\n    ret = pthread_create(&tidProducer, nullptr, &producer, &blkq);\n    ret = pthread_create(&tidConsumerFoo, nullptr, &consumer, &argFoo);\n    ret = pthread_create(&tidConsumerBar, nullptr, &consumer, &argBar);\n\n    ret = pthread_join(tidProducer, nullptr);\n    ret = pthread_join(tidConsumerFoo, nullptr);\n    ret = pthread_join(tidConsumerBar, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer02a04-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A04: Multiple fast producers, multiple slow consumers\n*/\n\n\n#include <iostream>\n#include <thread>\n#include <chrono>\n#include <pthread.h>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nstruct ProducerArg {\n    BlockingQueue<int>* blkq;\n    int startValue;\n};\n\n\n\nvoid* producer(void* argVoid) {\n    auto arg = (ProducerArg*) argVoid;\n    int i = 1;\n\n    for (;; ++i) {\n        arg->blkq->put(i + arg->startValue);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* arg) {\n    auto blkq = (BlockingQueue<int>*) arg;\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << data << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    auto blkq = BlockingQueue<int>(5);\n\n    constexpr int NUM_PRODUCERS = 3;\n    constexpr int NUM_CONSUMERS = 2;\n\n    pthread_t lstTidProducer[NUM_PRODUCERS];\n    pthread_t lstTidConsumer[NUM_CONSUMERS];\n    ProducerArg lstArgPro[NUM_PRODUCERS];\n\n    int ret = 0;\n\n\n    // PREPARE ARGUMENTS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        lstArgPro[i] = { &blkq, i * 1000 };\n    }\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        ret = pthread_create(&lstTidProducer[i], nullptr, &producer, &lstArgPro[i]);\n    }\n\n    for (int i = 0; i < NUM_CONSUMERS; ++i) {\n        ret = pthread_create(&lstTidConsumer[i], nullptr, &consumer, &blkq);\n    }\n\n\n    // JOIN THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        ret = pthread_join(lstTidProducer[i], nullptr);\n    }\n\n    for (int i = 0; i < NUM_CONSUMERS; ++i) {\n        ret = pthread_join(lstTidConsumer[i], nullptr);\n    }\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer02b01-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B01: 1 slow producer, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <unistd.h>\n#include <pthread.h>\n#include <semaphore.h>\nusing namespace std;\n\n\n\nstruct GlobalSemaphore {\n    sem_t semFill;      // item produced\n    sem_t semEmpty;     // remaining space in queue\n\n    void init(int semFillValue, int semEmptyValue) {\n        sem_init(&semFill, 0, semFillValue);\n        sem_init(&semEmpty, 0, semEmptyValue);\n    }\n\n    void destroy() {\n        sem_destroy(&semFill);\n        sem_destroy(&semEmpty);\n    }\n\n    void waitFill() {\n        sem_wait(&semFill);\n    }\n\n    void waitEmpty() {\n        sem_wait(&semEmpty);\n    }\n\n    void postFill() {\n        sem_post(&semFill);\n    }\n\n    void postEmpty() {\n        sem_post(&semEmpty);\n    }\n};\n\n\n\nstruct GlobalArg {\n    queue<int>* q;\n    GlobalSemaphore* sem;\n};\n\n\n\nvoid* producer(void* argVoid) {\n    auto arg = (GlobalArg*) argVoid;\n    auto q = arg->q;\n    auto sem = arg->sem;\n\n    int i = 1;\n\n    for (;; ++i) {\n        sem->waitEmpty();\n\n        q->push(i);\n        sleep(1);\n\n        sem->postFill();\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* argVoid) {\n    auto arg = (GlobalArg*) argVoid;\n    auto q = arg->q;\n    auto sem = arg->sem;\n\n    int data = 0;\n\n    for (;;) {\n        sem->waitFill();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n\n        sem->postEmpty();\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    GlobalSemaphore sem;\n    queue<int> q;\n    pthread_t tidProducer, tidConsumer;\n\n    GlobalArg arg = { &q, &sem };\n    int ret = 0;\n\n    sem.init(0, 1);\n\n    ret = pthread_create(&tidProducer, nullptr, &producer, &arg);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, 
&arg);\n\n    ret = pthread_join(tidProducer, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    sem.destroy();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer02b02-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B02: 2 slow producers, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <pthread.h>\n#include <semaphore.h>\n#include <unistd.h>\nusing namespace std;\n\n\n\nstruct GlobalSemaphore {\n    sem_t semFill;      // item produced\n    sem_t semEmpty;     // remaining space in queue\n\n    void init(int semFillValue, int semEmptyValue) {\n        sem_init(&semFill, 0, semFillValue);\n        sem_init(&semEmpty, 0, semEmptyValue);\n    }\n\n    void destroy() {\n        sem_destroy(&semFill);\n        sem_destroy(&semEmpty);\n    }\n\n    void waitFill() {\n        sem_wait(&semFill);\n    }\n\n    void waitEmpty() {\n        sem_wait(&semEmpty);\n    }\n\n    void postFill() {\n        sem_post(&semFill);\n    }\n\n    void postEmpty() {\n        sem_post(&semEmpty);\n    }\n};\n\n\n\nstruct GlobalArg {\n    queue<int>* q;\n    GlobalSemaphore* sem;\n    int startValue;\n};\n\n\n\nvoid* producer(void* argVoid) {\n    auto arg = (GlobalArg*) argVoid;\n    auto q = arg->q;\n    auto sem = arg->sem;\n\n    int i = 1;\n\n    for (;; ++i) {\n        sem->waitEmpty();\n\n        q->push(i + arg->startValue);\n        sleep(1);\n\n        sem->postFill();\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* argVoid) {\n    auto arg = (GlobalArg*) argVoid;\n    auto q = arg->q;\n    auto sem = arg->sem;\n\n    int data = 0;\n\n    for (;;) {\n        sem->waitFill();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n\n        sem->postEmpty();\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    GlobalSemaphore sem;\n    queue<int> q;\n    pthread_t tidProducerA, tidProducerB, tidConsumer;\n\n    GlobalArg argCon, argProA, argProB;\n    int ret = 0;\n\n    sem.init(0, 1);\n\n    argCon = argProA = argProB = { &q, &sem, 0 };\n    
argProA.startValue = 0;\n    argProB.startValue = 1000;\n\n    ret = pthread_create(&tidProducerA, nullptr, &producer, &argProA);\n    ret = pthread_create(&tidProducerB, nullptr, &producer, &argProB);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, &argCon);\n\n    ret = pthread_join(tidProducerA, nullptr);\n    ret = pthread_join(tidProducerB, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    sem.destroy();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer02b03-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B03: 2 fast producers, 1 slow consumer\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <pthread.h>\n#include <semaphore.h>\n#include <unistd.h>\nusing namespace std;\n\n\n\nstruct GlobalSemaphore {\n    sem_t semFill;      // item produced\n    sem_t semEmpty;     // remaining space in queue\n\n    void init(int semFillValue, int semEmptyValue) {\n        sem_init(&semFill, 0, semFillValue);\n        sem_init(&semEmpty, 0, semEmptyValue);\n    }\n\n    void destroy() {\n        sem_destroy(&semFill);\n        sem_destroy(&semEmpty);\n    }\n\n    void waitFill() {\n        sem_wait(&semFill);\n    }\n\n    void waitEmpty() {\n        sem_wait(&semEmpty);\n    }\n\n    void postFill() {\n        sem_post(&semFill);\n    }\n\n    void postEmpty() {\n        sem_post(&semEmpty);\n    }\n};\n\n\n\nstruct GlobalArg {\n    queue<int>* q;\n    GlobalSemaphore* sem;\n    int startValue;\n};\n\n\n\nvoid* producer(void* argVoid) {\n    auto arg = (GlobalArg*) argVoid;\n    auto q = arg->q;\n    auto sem = arg->sem;\n\n    int i = 1;\n\n    for (;; ++i) {\n        sem->waitEmpty();\n        q->push(i + arg->startValue);\n        sem->postFill();\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* argVoid) {\n    auto arg = (GlobalArg*) argVoid;\n    auto q = arg->q;\n    auto sem = arg->sem;\n\n    int data = 0;\n\n    for (;;) {\n        sem->waitFill();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n        sleep(1);\n\n        sem->postEmpty();\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    GlobalSemaphore sem;\n    queue<int> q;\n    pthread_t tidProducerA, tidProducerB, tidConsumer;\n\n    GlobalArg argCon, argProA, argProB;\n    int ret = 0;\n\n    sem.init(0, 1);\n\n    argCon = argProA = argProB = { &q, &sem, 0 };\n    
argProA.startValue = 0;\n    argProB.startValue = 1000;\n\n    ret = pthread_create(&tidProducerA, nullptr, &producer, &argProA);\n    ret = pthread_create(&tidProducerB, nullptr, &producer, &argProB);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, &argCon);\n\n    ret = pthread_join(tidProducerA, nullptr);\n    ret = pthread_join(tidProducerB, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    sem.destroy();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer02b04-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B04: Multiple fast producers, multiple slow consumers\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <pthread.h>\n#include <semaphore.h>\n#include <unistd.h>\nusing namespace std;\n\n\n\nstruct GlobalSemaphore {\n    sem_t semFill;      // item produced\n    sem_t semEmpty;     // remaining space in queue\n\n    void init(int semFillValue, int semEmptyValue) {\n        sem_init(&semFill, 0, semFillValue);\n        sem_init(&semEmpty, 0, semEmptyValue);\n    }\n\n    void destroy() {\n        sem_destroy(&semFill);\n        sem_destroy(&semEmpty);\n    }\n\n    void waitFill() {\n        sem_wait(&semFill);\n    }\n\n    void waitEmpty() {\n        sem_wait(&semEmpty);\n    }\n\n    void postFill() {\n        sem_post(&semFill);\n    }\n\n    void postEmpty() {\n        sem_post(&semEmpty);\n    }\n};\n\n\n\nstruct GlobalArg {\n    queue<int>* q;\n    GlobalSemaphore* sem;\n    int startValue;\n};\n\n\n\nvoid* producer(void* argVoid) {\n    auto arg = (GlobalArg*) argVoid;\n    auto q = arg->q;\n    auto sem = arg->sem;\n\n    int i = 1;\n\n    for (;; ++i) {\n        sem->waitEmpty();\n        q->push(i + arg->startValue);\n        sem->postFill();\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* argVoid) {\n    auto arg = (GlobalArg*) argVoid;\n    auto q = arg->q;\n    auto sem = arg->sem;\n\n    int data = 0;\n\n    for (;;) {\n        sem->waitFill();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n        sleep(1);\n\n        sem->postEmpty();\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_PRODUCERS = 3;\n    constexpr int NUM_CONSUMERS = 2;\n\n    GlobalSemaphore sem;\n    queue<int> q;\n\n    pthread_t lstTidProducer[NUM_PRODUCERS];\n    pthread_t lstTidConsumer[NUM_CONSUMERS];\n\n    int ret = 0;\n\n\n  
  sem.init(0, 1);\n\n\n    // PREPARE ARGUMENTS\n    GlobalArg lstArgPro[NUM_PRODUCERS];\n    GlobalArg argCon;\n\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        lstArgPro[i].q = &q;\n        lstArgPro[i].sem = &sem;\n        lstArgPro[i].startValue = i * 1000;\n    }\n\n    argCon.q = &q;\n    argCon.sem = &sem;\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        ret = pthread_create(&lstTidProducer[i], nullptr, &producer, &lstArgPro[i]);\n    }\n\n    for (int i = 0; i < NUM_CONSUMERS; ++i) {\n        ret = pthread_create(&lstTidConsumer[i], nullptr, &consumer, &argCon);\n    }\n\n\n    // JOIN THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        ret = pthread_join(lstTidProducer[i], nullptr);\n    }\n\n    for (int i = 0; i < NUM_CONSUMERS; ++i) {\n        ret = pthread_join(lstTidConsumer[i], nullptr);\n    }\n\n\n    sem.destroy();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer02c-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE C: USING CONDITION VARIABLES & MONITORS\n    Multiple fast producers, multiple slow consumers\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <pthread.h>\n#include <unistd.h>\n\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass Monitor {\nprivate:\n    std::queue<T>* q = nullptr;\n    int maxQueueSize = 0;\n\n    pthread_cond_t condFull = PTHREAD_COND_INITIALIZER;\n    pthread_cond_t condEmpty = PTHREAD_COND_INITIALIZER;\n    pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n\npublic:\n    Monitor() = default;\n    Monitor(const Monitor &other) = delete;\n    Monitor(const Monitor &&other) = delete;\n    void operator=(const Monitor &other) = delete;\n    void operator=(const Monitor &&other) = delete;\n\n\n    void init(int maxQueueSize, std::queue<T>* q) {\n        destroy();\n\n        this->q = q;\n        this->maxQueueSize = maxQueueSize;\n        condFull = PTHREAD_COND_INITIALIZER;\n        condEmpty = PTHREAD_COND_INITIALIZER;\n        mut = PTHREAD_MUTEX_INITIALIZER;\n    }\n\n\n    void destroy() {\n        q = nullptr;\n        maxQueueSize = 0;\n        pthread_cond_destroy(&condFull);\n        pthread_cond_destroy(&condEmpty);\n        pthread_mutex_destroy(&mut);\n    }\n\n\n    void add(const T& item) {\n        pthread_mutex_lock(&mut);\n\n        while (q->size() == maxQueueSize) {\n            pthread_cond_wait(&condFull, &mut);\n        }\n\n        q->push(item);\n\n        if (q->size() == 1) {\n            pthread_cond_signal(&condEmpty);\n        }\n\n        pthread_mutex_unlock(&mut);\n    }\n\n\n    T remove() {\n        pthread_mutex_lock(&mut);\n\n        while (q->size() == 0) {\n            pthread_cond_wait(&condEmpty, &mut);\n        }\n\n        T item = q->front();\n        q->pop();\n\n        if (q->size() == maxQueueSize - 1) {\n            pthread_cond_signal(&condFull);\n        }\n\n        pthread_mutex_unlock(&mut);\n\n        return 
item;\n    }\n};\n\n\n\ntemplate <typename T>\nstruct ProducerArg {\n    Monitor<T>* monitor;\n    int startValue;\n};\n\n\n\ntemplate <typename T>\nvoid* producer(void* argVoid) {\n    auto arg = (ProducerArg<T>*) argVoid;\n    auto monitor = arg->monitor;\n    auto startValue = arg->startValue;\n\n    int i = 1;\n\n    for (;; ++i) {\n        monitor->add(i + startValue);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\ntemplate <typename T>\nvoid* consumer(void* argVoid) {\n    auto monitor = (Monitor<T>*) argVoid;\n    T data;\n\n    for (;;) {\n        data = monitor->remove();\n        cout << \"Consumer \" << data << endl;\n        sleep(1);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    Monitor<int> monitor;\n    queue<int> q;\n\n    constexpr int MAX_QUEUE_SIZE = 6;\n    constexpr int NUM_PRODUCERS = 3;\n    constexpr int NUM_CONSUMERS = 2;\n\n    pthread_t lstTidProducer[NUM_PRODUCERS];\n    pthread_t lstTidConsumer[NUM_CONSUMERS];\n    ProducerArg<int> argPro[NUM_PRODUCERS];\n\n    int ret = 0;\n\n\n    // PREPARE ARGUMENTS\n    monitor.init(MAX_QUEUE_SIZE, &q);\n\n    for (int i = 0; i < NUM_PRODUCERS; ++i)\n        argPro[i] = { &monitor, i * 1000 };\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        ret = pthread_create(&lstTidProducer[i], nullptr, &producer<int>, &argPro[i]);\n    }\n\n    for (int i = 0; i < NUM_CONSUMERS; ++i) {\n        ret = pthread_create(&lstTidConsumer[i], nullptr, &consumer<int>, &monitor);\n    }\n\n\n    // JOIN THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        ret = pthread_join(lstTidProducer[i], nullptr);\n    }\n\n    for (int i = 0; i < NUM_CONSUMERS; ++i) {\n        ret = pthread_join(lstTidConsumer[i], nullptr);\n    }\n\n\n    monitor.destroy();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer03a-readers-writers.cpp",
    "content": "/*\nTHE READERS-WRITERS PROBLEM\nSolution for the first readers-writers problem\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\n#include \"../cpp-std/mylib-random.hpp\"\nusing namespace std;\n\n\n\nstruct GlobalData {\n    volatile int resource = 0;\n    int readerCount = 0;\n\n    pthread_mutex_t mutResource = PTHREAD_MUTEX_INITIALIZER;\n    pthread_mutex_t mutReaderCount = PTHREAD_MUTEX_INITIALIZER;\n};\n\n\n\nstruct ThreadArg {\n    GlobalData* g;\n    int delayTime;\n};\n\n\n\nvoid* doTaskWriter(void* argVoid) {\n    auto arg = (ThreadArg*) argVoid;\n    GlobalData* g = arg->g;\n\n    sleep(arg->delayTime);\n\n    pthread_mutex_lock(&g->mutResource);\n\n    g->resource = mylib::RandInt::get(100);\n    cout << \"Write \" << g->resource << endl;\n\n    pthread_mutex_unlock(&g->mutResource);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* doTaskReader(void* argVoid) {\n    auto arg = (ThreadArg*) argVoid;\n    GlobalData* g = arg->g;\n\n    sleep(arg->delayTime);\n\n\n    // Increase reader count\n    pthread_mutex_lock(&g->mutReaderCount);\n\n    g->readerCount += 1;\n\n    if (1 == g->readerCount)\n        pthread_mutex_lock(&g->mutResource);\n\n    pthread_mutex_unlock(&g->mutReaderCount);\n\n\n    // Do the reading\n    cout << \"Read \" << g->resource << endl;\n\n\n    // Decrease reader count\n    pthread_mutex_lock(&g->mutReaderCount);\n\n    g->readerCount -= 1;\n\n    if (0 == g->readerCount)\n        pthread_mutex_unlock(&g->mutResource);\n\n    pthread_mutex_unlock(&g->mutReaderCount);\n\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid prepareArg(ThreadArg arg[], int numArg, GlobalData* g) {\n    for (int i = 0; i < numArg; ++i) {\n        arg[i].g = g;\n        arg[i].delayTime = mylib::RandInt::get(3);\n    }\n}\n\n\n\nint main() {\n    GlobalData globalData;\n\n    constexpr int NUM_READERS = 8;\n    constexpr int NUM_WRITERS = 6;\n\n    pthread_t 
lstTidReader[NUM_READERS];\n    pthread_t lstTidWriter[NUM_WRITERS];\n\n    int ret = 0;\n\n\n    // PREPARE ARGUMENTS\n    ThreadArg argReader[NUM_READERS];\n    ThreadArg argWriter[NUM_WRITERS];\n\n    prepareArg(argReader, NUM_READERS, &globalData);\n    prepareArg(argWriter, NUM_WRITERS, &globalData);\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_READERS; ++i) {\n        ret = pthread_create(&lstTidReader[i], nullptr, &doTaskReader, &argReader[i]);\n    }\n\n    for (int i = 0; i < NUM_WRITERS; ++i) {\n        ret = pthread_create(&lstTidWriter[i], nullptr, &doTaskWriter, &argWriter[i]);\n    }\n\n\n    // JOIN THREADS\n    for (int i = 0; i < NUM_READERS; ++i) {\n        ret = pthread_join(lstTidReader[i], nullptr);\n    }\n\n    for (int i = 0; i < NUM_WRITERS; ++i) {\n        ret = pthread_join(lstTidWriter[i], nullptr);\n    }\n\n\n    // CLEAN UP\n    ret = pthread_mutex_destroy(&globalData.mutResource);\n    ret = pthread_mutex_destroy(&globalData.mutReaderCount);\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer03b-readers-writers.cpp",
    "content": "/*\nTHE READERS-WRITERS PROBLEM\nSolution for the third readers-writers problem\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\n#include \"../cpp-std/mylib-random.hpp\"\nusing namespace std;\n\n\n\nstruct GlobalData {\n    volatile int resource = 0;\n    int readerCount = 0;\n\n    pthread_mutex_t mutResource = PTHREAD_MUTEX_INITIALIZER;\n    pthread_mutex_t mutReaderCount = PTHREAD_MUTEX_INITIALIZER;\n\n    pthread_mutex_t mutServiceQueue = PTHREAD_MUTEX_INITIALIZER;\n};\n\n\n\nstruct ThreadArg {\n    GlobalData* g;\n    int delayTime;\n};\n\n\n\nvoid* doTaskWriter(void* argVoid) {\n    auto arg = (ThreadArg*) argVoid;\n    GlobalData* g = arg->g;\n\n    sleep(arg->delayTime);\n\n    pthread_mutex_lock(&g->mutServiceQueue);\n\n    pthread_mutex_lock(&g->mutResource);\n\n    pthread_mutex_unlock(&g->mutServiceQueue);\n\n    g->resource = mylib::RandInt::get(100);\n    cout << \"Write \" << g->resource << endl;\n\n    pthread_mutex_unlock(&g->mutResource);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* doTaskReader(void* argVoid) {\n    auto arg = (ThreadArg*) argVoid;\n    GlobalData* g = arg->g;\n\n    sleep(arg->delayTime);\n\n\n    // Increase reader count\n    pthread_mutex_lock(&g->mutServiceQueue);\n    pthread_mutex_lock(&g->mutReaderCount);\n\n    g->readerCount += 1;\n\n    if (1 == g->readerCount)\n        pthread_mutex_lock(&g->mutResource);\n\n    pthread_mutex_unlock(&g->mutReaderCount);\n    pthread_mutex_unlock(&g->mutServiceQueue);\n\n\n    // Do the reading\n    cout << \"Read \" << g->resource << endl;\n\n\n    // Decrease reader count\n    pthread_mutex_lock(&g->mutReaderCount);\n\n    g->readerCount -= 1;\n\n    if (0 == g->readerCount)\n        pthread_mutex_unlock(&g->mutResource);\n\n    pthread_mutex_unlock(&g->mutReaderCount);\n\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid prepareArg(ThreadArg arg[], int numArg, GlobalData* g) {\n    for (int i = 0; i < 
numArg; ++i) {\n        arg[i].g = g;\n        arg[i].delayTime = mylib::RandInt::get(3);\n    }\n}\n\n\n\nint main() {\n    GlobalData globalData;\n\n    constexpr int NUM_READERS = 8;\n    constexpr int NUM_WRITERS = 6;\n\n    pthread_t lstTidReader[NUM_READERS];\n    pthread_t lstTidWriter[NUM_WRITERS];\n\n    int ret = 0;\n\n\n    // PREPARE ARGUMENTS\n    ThreadArg argReader[NUM_READERS];\n    ThreadArg argWriter[NUM_WRITERS];\n\n    prepareArg(argReader, NUM_READERS, &globalData);\n    prepareArg(argWriter, NUM_WRITERS, &globalData);\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_READERS; ++i) {\n        ret = pthread_create(&lstTidReader[i], nullptr, &doTaskReader, &argReader[i]);\n    }\n\n    for (int i = 0; i < NUM_WRITERS; ++i) {\n        ret = pthread_create(&lstTidWriter[i], nullptr, &doTaskWriter, &argWriter[i]);\n    }\n\n\n    // JOIN THREADS\n    for (int i = 0; i < NUM_READERS; ++i) {\n        ret = pthread_join(lstTidReader[i], nullptr);\n    }\n\n    for (int i = 0; i < NUM_WRITERS; ++i) {\n        ret = pthread_join(lstTidWriter[i], nullptr);\n    }\n\n\n    // CLEAN UP\n    ret = pthread_mutex_destroy(&globalData.mutResource);\n    ret = pthread_mutex_destroy(&globalData.mutReaderCount);\n    ret = pthread_mutex_destroy(&globalData.mutServiceQueue);\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer04-dining-philosophers.cpp",
    "content": "/*\nTHE DINING PHILOSOPHERS PROBLEM\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nstruct TaskArg {\n    pthread_mutex_t* chopstick;\n    int numPhilo;\n    int idPhiLo;\n};\n\n\n\nvoid* doTaskPhilosopher(void *argVoid) {\n    auto arg = (TaskArg*) argVoid;\n\n    auto chopstick = arg->chopstick;\n    int n = arg->numPhilo;\n    int i = arg->idPhiLo;\n\n    sleep(1);\n\n    pthread_mutex_lock(&chopstick[i]);\n    pthread_mutex_lock(&chopstick[(i + 1) % n]);\n\n    cout << \"Philosopher #\" << i << \" is eating the rice\" << endl;\n\n    pthread_mutex_unlock(&chopstick[(i + 1) % n]);\n    pthread_mutex_unlock(&chopstick[i]);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int NUM_PHILOSOPHERS = 5;\n\n    pthread_t tid[NUM_PHILOSOPHERS];\n    pthread_mutex_t chopstick[NUM_PHILOSOPHERS];\n    TaskArg arg[NUM_PHILOSOPHERS];\n\n    int ret = 0;\n\n\n    // PREPARE ARGUMENTS\n    for (int i = 0; i < NUM_PHILOSOPHERS; ++i) {\n        chopstick[i] = PTHREAD_MUTEX_INITIALIZER;\n        arg[i].chopstick = chopstick;\n        arg[i].numPhilo = NUM_PHILOSOPHERS;\n        arg[i].idPhiLo = i;\n    }\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PHILOSOPHERS; ++i) {\n        ret = pthread_create(&tid[i], nullptr, &doTaskPhilosopher, &arg[i]);\n    }\n\n\n    // JOIN THREADS\n    for (int i = 0; i < NUM_PHILOSOPHERS; ++i) {\n        ret = pthread_join(tid[i], nullptr);\n    }\n\n\n    // CLEAN UP\n    for (int i = 0; i < NUM_PHILOSOPHERS; ++i) {\n        ret = pthread_mutex_destroy(&chopstick[i]);\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer05-util.hpp",
    "content": "#ifndef _EXER05_UTIL_HPP_\n#define _EXER05_UTIL_HPP_\n\n\n\nstruct WorkerScProdArg {\n    double const* u = nullptr;\n    double const* v = nullptr;\n    int sizeVector = 0;\n    double* res = nullptr;\n};\n\n\n\nvoid* workerScalarProduct(void* argVoid) {\n    auto arg = (WorkerScProdArg*) argVoid;\n    auto u = arg->u;\n    auto v = arg->v;\n    auto sizeVector = arg->sizeVector;\n    auto res = arg->res;\n\n    double sum = 0;\n\n    for (int i = sizeVector - 1; i >= 0; --i) {\n        sum += u[i] * v[i];\n    }\n\n    (*res) = sum;\n\n    return nullptr;\n}\n\n\n\n#endif // _EXER05_UTIL_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/exer05a-product-matrix-vector.cpp",
    "content": "/*\nMATRIX-VECTOR MULTIPLICATION\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <pthread.h>\n#include \"exer05-util.hpp\"\nusing namespace std;\n\n\n\nusing vectord = std::vector<double>;\nusing matrix = std::vector<vectord>;\n\n\n\nvoid getProduct(const matrix& mat, const vectord& vec, vectord& result) {\n    // Assume that size of mat and vec are both eligible\n    int sizeRowMat = mat.size();\n    int sizeColMat = mat[0].size();\n    int sizeVec = vec.size();\n    int ret = 0;\n\n    result.clear();\n    result.resize(sizeRowMat, 0);\n\n    vector<pthread_t> lstTid(sizeRowMat);\n    vector<WorkerScProdArg> lstArg(sizeRowMat);\n\n    for (int i = 0; i < sizeRowMat; ++i) {\n        auto&& u = mat[i].data();\n        auto&& v = vec.data();\n        lstArg[i] = { u, v, sizeVec, &result[i] };\n\n        ret = pthread_create(&lstTid[i], nullptr, &workerScalarProduct, &lstArg[i]);\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n}\n\n\n\nint main() {\n    matrix A = {\n        { 1, 2, 3 },\n        { 4, 5, 6 },\n        { 7, 8, 9 }\n    };\n\n    vectord b = {\n        3,\n        -1,\n        0\n    };\n\n    vectord result;\n    getProduct(A, b, result);\n\n    for (int i = 0; i < result.size(); ++i) {\n        cout << result[i] << endl;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer05b-product-matrix-matrix.cpp",
    "content": "/*\nMATRIX-MATRIX MULTIPLICATION (DOT PRODUCT)\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <pthread.h>\n#include \"exer05-util.hpp\"\nusing namespace std;\n\n\n\nusing vectord = std::vector<double>;\nusing matrix = std::vector<vectord>;\n\n\n\nvoid getTransposeMatrix(const matrix& input, matrix& output) {\n    int numRow = input.size();\n    int numCol = input[0].size();\n\n    output.clear();\n    output.assign(numCol, vectord(numRow, 0));\n\n    for (int i = 0; i < numRow; ++i)\n        for (int j = 0; j < numCol; ++j)\n            output[j][i] = input[i][j];\n}\n\n\n\nvoid displayMatrix(const matrix& mat) {\n    int numRow = mat.size();\n    int numCol = mat[0].size();\n\n    for (int i = 0; i < numRow; ++i) {\n        for (int j = 0; j < numCol; ++j)\n            cout << \"\\t\" << mat[i][j];\n\n        cout << endl;\n    }\n}\n\n\n\nvoid getProduct(const matrix& matA, const matrix& matB, matrix& result) {\n    // Assume that size of matA and matB are both eligible\n    int sizeRowA = matA.size();\n    int sizeColA = matA[0].size();\n    int sizeColB = matB[0].size();\n    int sizeTotal = sizeRowA * sizeColB;\n\n    result.clear();\n    result.assign(sizeRowA, vectord(sizeColB, 0));\n\n    matrix matBT;\n    getTransposeMatrix(matB, matBT);\n\n    vector<pthread_t> lstTid(sizeTotal);\n    vector<WorkerScProdArg> lstArg(sizeTotal);\n\n    int iSca = 0;\n    int ret = 0;\n\n    for (int i = 0; i < sizeRowA; ++i) {\n        for (int j = 0; j < sizeColB; ++j) {\n            auto&& u = matA[i].data();\n            auto&& v = matBT[j].data();\n            auto&& sizeVector = sizeColA;\n\n            lstArg[iSca] = { u, v, sizeVector, &result[i][j] };\n\n            ret = pthread_create(&lstTid[iSca], nullptr,\n                                 &workerScalarProduct, &lstArg[iSca]);\n\n            ++iSca;\n        }\n    }\n\n    for (auto&& tid : lstTid) {\n        ret = pthread_join(tid, nullptr);\n    }\n}\n\n\n\nint main() {\n    matrix 
A = {\n        { 1, 3, 5 },\n        { 2, 4, 6 },\n    };\n\n    matrix B = {\n        { 1, 0, 1, 0 },\n        { 0, 1, 0, 1 },\n        { 1, 0, 0, -2 }\n    };\n\n    matrix result;\n    getProduct(A, B, result);\n\n    displayMatrix(result);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer06a-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUE IMPLEMENTATION\nVersion A: Synchronous queues\n*/\n\n\n#include <iostream>\n#include <string>\n#include <unistd.h>\n#include <pthread.h>\n#include <semaphore.h>\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass SynchronousQueue {\n\nprivate:\n    sem_t semPut;\n    sem_t semTake;\n    T element;\n\n\npublic:\n    SynchronousQueue<T>() {\n        sem_init(&semPut, 0, 1);\n        sem_init(&semTake, 0, 0);\n    }\n\n\n    ~SynchronousQueue<T>() {\n        sem_destroy(&semPut);\n        sem_destroy(&semTake);\n    }\n\n\n    void put(const T& value) {\n        sem_wait(&semPut);\n        element = value;\n        sem_post(&semTake);\n    }\n\n\n    T take() {\n        sem_wait(&semTake);\n        T result = element;\n        sem_post(&semPut);\n        return result;\n    }\n\n};\n\n\n\nvoid* producer(void* arg) {\n    auto syncQueue = (SynchronousQueue<std::string>*) arg;\n\n    auto arr = { \"lorem\", \"ipsum\", \"foo\" };\n\n    for (auto&& data : arr) {\n        cout << \"Producer: \" << data << endl;\n        syncQueue->put(data);\n        cout << \"Producer: \" << data << \"\\t\\t\\t[done]\" << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* arg) {\n    auto syncQueue = (SynchronousQueue<std::string>*) arg;\n    std::string data;\n\n    sleep(5);\n\n    for (int i = 0; i < 3; ++i) {\n        data = syncQueue->take();\n        cout << \"\\tConsumer: \" << data << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    SynchronousQueue<std::string> syncQueue;\n\n    pthread_t tidProducer, tidConsumer;\n    int ret = 0;\n\n    ret = pthread_create(&tidProducer, nullptr, &producer, &syncQueue);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, &syncQueue);\n\n    ret = pthread_join(tidProducer, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer06b01-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUE IMPLEMENTATION\nVersion B01: General blocking queues\n             Underlying mechanism: Semaphores\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <string>\n#include <stdexcept>\n#include <unistd.h>\n#include <pthread.h>\n#include <semaphore.h>\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass BlockingQueue {\n\nprivate:\n    int capacity = 0;\n\n    sem_t semRemain;\n    sem_t semFill;\n    pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n    std::queue<T> q;\n\n\npublic:\n    BlockingQueue(int capacity) {\n        if (capacity <= 0)\n            throw std::invalid_argument(\"capacity must be a positive integer\");\n\n        this->capacity = capacity;\n\n        sem_init(&semRemain, 0, capacity);\n        sem_init(&semFill, 0, 0);\n    }\n\n\n    ~BlockingQueue() {\n        sem_destroy(&semRemain);\n        sem_destroy(&semFill);\n        pthread_mutex_destroy(&mut);\n    }\n\n\n    void put(const T& value) {\n        int ret = 0;\n\n        ret = sem_wait(&semRemain);\n\n        ret = pthread_mutex_lock(&mut);\n        q.push(value);\n        ret = pthread_mutex_unlock(&mut);\n\n        ret = sem_post(&semFill);\n    }\n\n\n    T take() {\n        T result;\n        int ret = 0;\n\n        ret = sem_wait(&semFill);\n\n        ret = pthread_mutex_lock(&mut);\n        result = q.front();\n        q.pop();\n        ret = pthread_mutex_unlock(&mut);\n\n        ret = sem_post(&semRemain);\n\n        return result;\n    }\n\n};\n\n\n\nvoid* producer(void* arg) {\n    auto blkQueue = (BlockingQueue<std::string>*) arg;\n\n    auto arr = { \"nice\", \"to\", \"meet\", \"you\" };\n\n    for (auto&& data : arr) {\n        cout << \"Producer: \" << data << endl;\n        blkQueue->put(data);\n        cout << \"Producer: \" << data << \"\\t\\t\\t[done]\" << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* arg) {\n    auto blkQueue = (BlockingQueue<std::string>*) arg;\n    
std::string data;\n\n    sleep(5);\n\n    for (int i = 0; i < 4; ++i) {\n        data = blkQueue->take();\n        cout << \"\\tConsumer: \" << data << endl;\n\n        if (0 == i)\n            sleep(5);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    BlockingQueue<std::string> blkQueue(2); // capacity = 2\n\n    pthread_t tidProducer, tidConsumer;\n    int ret = 0;\n\n    ret = pthread_create(&tidProducer, nullptr, &producer, &blkQueue);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, &blkQueue);\n\n    ret = pthread_join(tidProducer, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer06b02-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUE IMPLEMENTATION\nVersion B02: General blocking queues\n             Underlying mechanism: Condition variables\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <string>\n#include <stdexcept>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass BlockingQueue {\n\nprivate:\n    pthread_cond_t condEmpty = PTHREAD_COND_INITIALIZER;\n    pthread_cond_t condFull = PTHREAD_COND_INITIALIZER;\n    pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n    int capacity = 0;\n    std::queue<T> q;\n\n\npublic:\n    BlockingQueue(int capacity) {\n        if (capacity <= 0)\n            throw std::invalid_argument(\"capacity must be a positive integer\");\n\n        this->capacity = capacity;\n    }\n\n\n    ~BlockingQueue() {\n        pthread_cond_destroy(&condEmpty);\n        pthread_cond_destroy(&condFull);\n        pthread_mutex_destroy(&mut);\n    }\n\n\n    void put(const T& value) {\n        int ret = 0;\n\n        ret = pthread_mutex_lock(&mut);\n\n        while ((int)q.size() >= capacity) {\n            // Queue is full, must wait for 'take'\n            ret = pthread_cond_wait(&condFull, &mut);\n        }\n\n        q.push(value);\n\n        ret = pthread_mutex_unlock(&mut);\n        ret = pthread_cond_signal(&condEmpty);\n    }\n\n\n    T take() {\n        T result;\n        int ret = 0;\n\n        ret = pthread_mutex_lock(&mut);\n\n        while (q.empty()) {\n            // Queue is empty, must wait for 'put'\n            ret = pthread_cond_wait(&condEmpty, &mut);\n        }\n\n        result = q.front();\n        q.pop();\n\n        ret = pthread_mutex_unlock(&mut);\n        ret = pthread_cond_signal(&condFull);\n\n        return result;\n    }\n\n};\n\n\n\nvoid* producer(void* arg) {\n    auto blkQueue = (BlockingQueue<std::string>*) arg;\n\n    auto arr = { \"nice\", \"to\", \"meet\", \"you\" };\n\n    for (auto&& data : arr) {\n        cout << \"Producer: \" << data << endl;\n  
      blkQueue->put(data);\n        cout << \"Producer: \" << data << \"\\t\\t\\t[done]\" << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid* consumer(void* arg) {\n    auto blkQueue = (BlockingQueue<std::string>*) arg;\n    std::string data;\n\n    sleep(5);\n\n    for (int i = 0; i < 4; ++i) {\n        data = blkQueue->take();\n        cout << \"\\tConsumer: \" << data << endl;\n\n        if (0 == i)\n            sleep(5);\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    BlockingQueue<std::string> blkQueue(2); // capacity = 2\n\n    pthread_t tidProducer, tidConsumer;\n    int ret = 0;\n\n    ret = pthread_create(&tidProducer, nullptr, &producer, &blkQueue);\n    ret = pthread_create(&tidConsumer, nullptr, &consumer, &blkQueue);\n\n    ret = pthread_join(tidProducer, nullptr);\n    ret = pthread_join(tidConsumer, nullptr);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer07a-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion A: Solving the problem using a condition variable\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <unistd.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nstruct Counter {\n    int value;\n    pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t cond = PTHREAD_COND_INITIALIZER;\n    Counter(int value) : value(value) { }\n};\n\n\n\nstruct ProcessFilesArg {\n    const vector<string> * lstFileName;\n    Counter * counter;\n};\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleep(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid* processFiles(void* argVoid) {\n    auto arg = (ProcessFilesArg*) argVoid;\n    auto&& lstFileName = *arg->lstFileName;\n    auto&& counter = *arg->counter;\n\n    for (auto&& fileName : lstFileName) {\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleep(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        pthread_mutex_lock(&counter.mut);\n        {\n            --counter.value;\n            pthread_cond_signal(&counter.cond);\n        }\n        pthread_mutex_unlock(&counter.mut);\n\n        // Write log into disk\n        sleep(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid processRequest() {\n    const vector<string> lstFileName = { \"foo.html\", \"bar.json\" };\n    Counter counter(lstFileName.size());\n\n    pthread_t tid;\n    ProcessFilesArg arg = { &lstFileName, &counter };\n\n    // The server checks auth user while reading files, concurrently\n    pthread_create(&tid, nullptr, &processFiles, &arg);\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    pthread_mutex_lock(&counter.mut);\n    {\n        while (counter.value > 0) {\n            pthread_cond_wait(&counter.cond, &counter.mut);\n        }\n    }\n    pthread_mutex_unlock(&counter.mut);\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    pthread_join(tid, nullptr);\n    pthread_cond_destroy(&counter.cond);\n    pthread_mutex_destroy(&counter.mut);\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer07b-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion B: Solving the problem using a semaphore\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <unistd.h>\n#include <pthread.h>\n#include <semaphore.h>\nusing namespace std;\n\n\n\nstruct ProcessFilesArg {\n    const vector<string> * lstFileName;\n    sem_t * sem;\n};\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleep(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid* processFiles(void* argVoid) {\n    auto arg = (ProcessFilesArg*) argVoid;\n    auto&& lstFileName = *arg->lstFileName;\n    auto&& sem = *arg->sem;\n\n    for (auto&& fileName : lstFileName) {\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleep(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        sem_post(&sem);\n\n        // Write log into disk\n        sleep(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid processRequest() {\n    const vector<string> lstFileName = { \"foo.html\", \"bar.json\" };\n    sem_t sem;\n    sem_init(&sem, 0, 0);\n\n    pthread_t tid;\n    ProcessFilesArg arg = { &lstFileName, &sem };\n\n    // The server checks auth user while reading files, concurrently\n    pthread_create(&tid, nullptr, &processFiles, &arg);\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    for (size_t i = lstFileName.size(); i > 0; --i) {\n        sem_wait(&sem);\n    }\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    pthread_join(tid, nullptr);\n    sem_destroy(&sem);\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer07c-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion C: Solving the problem using a count-down latch\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-latch.hpp\"\nusing namespace std;\n\n\n\nstruct ProcessFilesArg {\n    const vector<string> * lstFileName;\n    mylib::CountDownLatch * rdLatch;\n};\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleep(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid* processFiles(void* argVoid) {\n    auto arg = (ProcessFilesArg*) argVoid;\n    auto&& lstFileName = *arg->lstFileName;\n    auto&& rdLatch = *arg->rdLatch;\n\n    for (auto&& fileName : lstFileName) {\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleep(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        rdLatch.countDown();\n\n        // Write log into disk\n        sleep(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid processRequest() {\n    const vector<string> lstFileName = { \"foo.html\", \"bar.json\" };\n    mylib::CountDownLatch readFileLatch(lstFileName.size());\n\n    pthread_t tid;\n    ProcessFilesArg arg = { &lstFileName, &readFileLatch };\n\n    // The server checks auth user while reading files, concurrently\n    pthread_create(&tid, nullptr, &processFiles, &arg);\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    readFileLatch.wait();\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    pthread_join(tid, nullptr);\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer07d-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion D: Solving the problem using a blocking queue\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\n\n\n\nstruct ProcessFilesArg {\n    const vector<string> * lstFileName;\n    mylib::BlockingQueue<string> * blkq;\n};\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleep(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid* processFiles(void* argVoid) {\n    auto arg = (ProcessFilesArg*) argVoid;\n    auto&& lstFileName = *arg->lstFileName;\n    auto&& blkq = *arg->blkq;\n\n    for (auto&& fileName : lstFileName) {\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleep(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        blkq.put(fileName); // You may put file data here\n\n        // Write log into disk\n        sleep(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nvoid processRequest() {\n    const vector<string> lstFileName = { \"foo.html\", \"bar.json\" };\n    mylib::BlockingQueue<string> blkq;\n\n    pthread_t tid;\n    ProcessFilesArg arg = { &lstFileName, &blkq };\n\n    // The server checks auth user while reading files, concurrently\n    pthread_create(&tid, nullptr, &processFiles, &arg);\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    for (size_t i = lstFileName.size(); i > 0; --i) {\n        blkq.take();\n    }\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    pthread_join(tid, nullptr);\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer08-exec-service-itask.hpp",
    "content": "#ifndef _MY_EXEC_SERVICE_ITASK_HPP_\n#define _MY_EXEC_SERVICE_ITASK_HPP_\n\n\n\n// interface ITask\nclass ITask {\npublic:\n    virtual ~ITask() = default;\n    virtual void run() = 0;\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_ITASK_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/exer08-exec-service-main.cpp",
    "content": "/*\nEXECUTOR SERVICE & THREAD POOL IMPLEMENTATION\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <unistd.h>\n#include \"exer08-exec-service-itask.hpp\"\n#include \"exer08-exec-service-v0a.hpp\"\n#include \"exer08-exec-service-v0b.hpp\"\n#include \"exer08-exec-service-v1a.hpp\"\n#include \"exer08-exec-service-v1b.hpp\"\n#include \"exer08-exec-service-v2a.hpp\"\n#include \"exer08-exec-service-v2b.hpp\"\n\n\n\nclass MyTask : public ITask {\npublic:\n    char id;\n\npublic:\n    void run() override {\n        std::cout << \"Task \" << id << \" is starting\" << std::endl;\n        sleep(3);\n        std::cout << \"Task \" << id << \" is completed\" << std::endl;\n    }\n};\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 2;\n    constexpr int NUM_TASKS = 5;\n\n\n    MyExecServiceV0A execService(NUM_THREADS);\n\n\n    std::vector<MyTask> lstTask(NUM_TASKS);\n\n    for (int i = 0; i < NUM_TASKS; ++i)\n        lstTask[i].id = 'A' + i;\n\n\n    for (auto&& task : lstTask)\n        execService.submit(&task);\n\n    std::cout << \"All tasks are submitted\" << std::endl;\n\n\n    execService.waitTaskDone();\n    std::cout << \"All tasks are completed\" << std::endl;\n\n\n    execService.shutdown();\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exer08-exec-service-v0a.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 0A: The easiest executor service\n- It uses a blocking queue as underlying mechanism.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V0A_HPP_\n#define _MY_EXEC_SERVICE_V0A_HPP_\n\n\n\n#include <iostream>\n#include <vector>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-blockingqueue.hpp\"\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV0A {\n\nprivate:\n    int numThreads = 0;\n    std::vector<pthread_t> lstTh;\n    mylib::BlockingQueue<ITask*> taskPending;\n\n\npublic:\n    MyExecServiceV0A(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV0A(const MyExecServiceV0A& other) = delete;\n    MyExecServiceV0A(const MyExecServiceV0A&& other) = delete;\n    void operator=(const MyExecServiceV0A& other) = delete;\n    void operator=(const MyExecServiceV0A&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n\n        for (auto&& th : lstTh) {\n            pthread_create(&th, nullptr, &threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        taskPending.add(task);\n    }\n\n\n    void waitTaskDone() {\n        // This ExecService is too simple,\n        // so there is no implementation for waitTaskDone()\n        sleep(11); // fake behaviour\n    }\n\n\n    void shutdown() {\n        // This ExecService is too simple,\n        // so there is no implementation for shutdown()\n        std::cout << \"No implementation for shutdown().\" << std::endl;\n        std::cout << \"You need to exit the app manually.\" << std::endl;\n    }\n\n\nprivate:\n    static void* threadWorkerFunc(void* argVoid) {\n        auto thisPtr = (MyExecServiceV0A*) argVoid;\n        auto&& taskPending = thisPtr->taskPending;\n\n        for (;;) {\n            // WAIT FOR AN AVAILABLE PENDING TASK\n            auto task = taskPending.take();\n\n            // DO THE TASK\n      
      task->run();\n        }\n\n        pthread_exit(nullptr);\n        return nullptr;\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V0A_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/exer08-exec-service-v0b.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 0B: The easiest executor service\n- It uses a blocking queue as underlying mechanism.\n- It supports waitTaskDone() and shutdown().\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V0B_HPP_\n#define _MY_EXEC_SERVICE_V0B_HPP_\n\n\n\n#include <vector>\n#include <atomic>\n#include <unistd.h>\n#include <pthread.h>\n#include \"mylib-blockingqueue.hpp\"\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV0B {\n\nprivate:\n    int numThreads = 0;\n    std::vector<pthread_t> lstTh;\n\n    mylib::BlockingQueue<ITask*> taskPending;\n    std::atomic_int32_t counterTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n    const class : ITask {\n        void run() override { }\n    }\n    emptyTask;\n\n\npublic:\n    MyExecServiceV0B(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV0B(const MyExecServiceV0B& other) = delete;\n    MyExecServiceV0B(const MyExecServiceV0B&& other) = delete;\n    void operator=(const MyExecServiceV0B& other) = delete;\n    void operator=(const MyExecServiceV0B&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            pthread_create(&th, nullptr, &threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        taskPending.add(task);\n    }\n\n\n    void waitTaskDone() {\n        // This ExecService is too simple,\n        // so there is no good implementation for waitTaskDone()\n        while (false == taskPending.empty() || counterTaskRunning > 0) {\n            sleep(1);\n            // pthread_yield();\n            // sched_yield();\n        }\n    }\n\n\n    void shutdown() {\n        forceThreadShutdown = true;\n        taskPending.clear();\n\n        // Invoke blocked threads by adding \"empty\" tasks\n        for (int i = 0; 
i < numThreads; ++i) {\n            taskPending.put( (ITask* const) &emptyTask );\n        }\n\n        for (auto&& th : lstTh) {\n            pthread_join(th, nullptr);\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n    }\n\n\nprivate:\n    static void* threadWorkerFunc(void* argVoid) {\n        auto thisPtr = (MyExecServiceV0B*) argVoid;\n\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        for (;;) {\n            // WAIT FOR AN AVAILABLE PENDING TASK\n            auto task = taskPending.take();\n\n            // If shutdown() was called, then exit the function\n            if (forceThreadShutdown) {\n                break;\n            }\n\n            // DO THE TASK\n            ++counterTaskRunning;\n            task->run();\n            --counterTaskRunning;\n        }\n\n        pthread_exit(nullptr);\n        return nullptr;\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V0B_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/exer08-exec-service-v1a.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 1A: Simple executor service\n- Method \"waitTaskDone\" sleeps in a loop (which can cause performance problems).\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V1A_HPP_\n#define _MY_EXEC_SERVICE_V1A_HPP_\n\n\n\n#include <vector>\n#include <queue>\n#include <atomic>\n#include <unistd.h>\n#include <pthread.h>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV1A {\n\nprivate:\n    int numThreads = 0;\n    std::vector<pthread_t> lstTh;\n\n    std::queue<ITask*> taskPending;\n    pthread_mutex_t mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t condTaskPending = PTHREAD_COND_INITIALIZER;\n\n    std::atomic_int32_t counterTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV1A(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV1A(const MyExecServiceV1A& other) = delete;\n    MyExecServiceV1A(const MyExecServiceV1A&& other) = delete;\n    void operator=(const MyExecServiceV1A& other) = delete;\n    void operator=(const MyExecServiceV1A&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n        condTaskPending = PTHREAD_COND_INITIALIZER;\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            pthread_create(&th, nullptr, &threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        pthread_mutex_lock(&mutTaskPending);\n        taskPending.push(task);\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_signal(&condTaskPending);\n    }\n\n\n    void waitTaskDone() {\n        bool done = false;\n\n        for (;;) {\n            pthread_mutex_lock(&mutTaskPending);\n\n            if (taskPending.empty() && 0 == counterTaskRunning) {\n             
   done = true;\n            }\n\n            pthread_mutex_unlock(&mutTaskPending);\n\n            if (done) {\n                break;\n            }\n\n            sleep(1);\n            // pthread_yield();\n            // sched_yield();\n        }\n    }\n\n\n    void shutdown() {\n        pthread_mutex_lock(&mutTaskPending);\n\n        forceThreadShutdown = true;\n        std::queue<ITask*>().swap(taskPending);\n\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_broadcast(&condTaskPending);\n\n        for (auto&& th : lstTh) {\n            pthread_join(th, nullptr);\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n\n        pthread_mutex_destroy(&mutTaskPending);\n        pthread_cond_destroy(&condTaskPending);\n    }\n\n\nprivate:\n    static void* threadWorkerFunc(void* argVoid) {\n        auto thisPtr = (MyExecServiceV1A*) argVoid;\n\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n\n        for (;;) {\n            // WAIT FOR AN AVAILABLE PENDING TASK\n            pthread_mutex_lock(&mutTaskPending);\n\n            while (taskPending.empty() and false == forceThreadShutdown) {\n                pthread_cond_wait(&condTaskPending, &mutTaskPending);\n            }\n\n            if (forceThreadShutdown) {\n                pthread_mutex_unlock(&mutTaskPending);\n                break;\n            }\n\n            // GET THE TASK FROM THE PENDING QUEUE\n            auto task = taskPending.front();\n            taskPending.pop();\n\n            ++counterTaskRunning;\n\n            pthread_mutex_unlock(&mutTaskPending);\n\n            // DO THE TASK\n            task->run();\n\n            --counterTaskRunning;\n        }\n\n        pthread_exit(nullptr);\n        return 
nullptr;\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V1A_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/exer08-exec-service-v1b.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 1B: Simple executor service\n- Method \"waitTaskDone\" consumes CPU (due to bad synchronization).\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V1B_HPP_\n#define _MY_EXEC_SERVICE_V1B_HPP_\n\n\n\n#include <vector>\n#include <queue>\n#include <pthread.h>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV1B {\n\nprivate:\n    int numThreads = 0;\n    std::vector<pthread_t> lstTh;\n\n    std::queue<ITask*> taskPending;\n    pthread_mutex_t mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t condTaskPending = PTHREAD_COND_INITIALIZER;\n\n    int counterTaskRunning;\n    pthread_mutex_t mutTaskRunning = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t condTaskRunning = PTHREAD_COND_INITIALIZER;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV1B(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV1B(const MyExecServiceV1B& other) = delete;\n    MyExecServiceV1B(const MyExecServiceV1B&& other) = delete;\n    void operator=(const MyExecServiceV1B& other) = delete;\n    void operator=(const MyExecServiceV1B&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n        condTaskPending = PTHREAD_COND_INITIALIZER;\n\n        mutTaskRunning = PTHREAD_MUTEX_INITIALIZER;\n        condTaskRunning = PTHREAD_COND_INITIALIZER;\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            pthread_create(&th, nullptr, &threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        pthread_mutex_lock(&mutTaskPending);\n        taskPending.push(task);\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_signal(&condTaskPending);\n    }\n\n\n    void waitTaskDone() {\n        bool done = 
false;\n\n        for (;;) {\n            pthread_mutex_lock(&mutTaskPending);\n\n            if (taskPending.empty()) {\n                pthread_mutex_lock(&mutTaskRunning);\n\n                while (counterTaskRunning > 0)\n                    pthread_cond_wait(&condTaskRunning, &mutTaskRunning);\n\n                // no pending task and no running task\n                done = true;\n\n                pthread_mutex_unlock(&mutTaskRunning);\n            }\n\n            pthread_mutex_unlock(&mutTaskPending);\n\n            if (done) {\n                break;\n            }\n        }\n    }\n\n\n    void shutdown() {\n        pthread_mutex_lock(&mutTaskPending);\n\n        forceThreadShutdown = true;\n        std::queue<ITask*>().swap(taskPending);\n\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_broadcast(&condTaskPending);\n\n        for (auto&& th : lstTh) {\n            pthread_join(th, nullptr);\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n\n        pthread_mutex_destroy(&mutTaskPending);\n        pthread_cond_destroy(&condTaskPending);\n        pthread_mutex_destroy(&mutTaskRunning);\n        pthread_cond_destroy(&condTaskRunning);\n    }\n\n\nprivate:\n    static void* threadWorkerFunc(void* argVoid) {\n        auto thisPtr = (MyExecServiceV1B*) argVoid;\n\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n        auto&& mutTaskRunning = thisPtr->mutTaskRunning;\n        auto&& condTaskRunning = thisPtr->condTaskRunning;\n\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n\n        for (;;) {\n            // WAIT FOR AN AVAILABLE PENDING TASK\n            pthread_mutex_lock(&mutTaskPending);\n\n            while (taskPending.empty() and false == forceThreadShutdown) {\n                
pthread_cond_wait(&condTaskPending, &mutTaskPending);\n            }\n\n            if (forceThreadShutdown) {\n                pthread_mutex_unlock(&mutTaskPending);\n                break;\n            }\n\n            // GET THE TASK FROM THE PENDING QUEUE\n            auto task = taskPending.front();\n            taskPending.pop();\n\n            ++counterTaskRunning;\n\n            pthread_mutex_unlock(&mutTaskPending);\n\n            // DO THE TASK\n            task->run();\n\n            pthread_mutex_lock(&mutTaskRunning);\n\n            --counterTaskRunning;\n            if (0 == counterTaskRunning) {\n                pthread_cond_signal(&condTaskRunning);\n            }\n\n            pthread_mutex_unlock(&mutTaskRunning);\n        }\n\n        pthread_exit(nullptr);\n        return nullptr;\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V1B_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/exer08-exec-service-v2a.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 2A: The executor service storing running tasks\n- Method \"waitTaskDone\" uses a semaphore to synchronize.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V2A_HPP_\n#define _MY_EXEC_SERVICE_V2A_HPP_\n\n\n\n#include <vector>\n#include <list>\n#include <queue>\n#include <pthread.h>\n#include <semaphore.h>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV2A {\n\nprivate:\n    int numThreads = 0;\n    std::vector<pthread_t> lstTh;\n\n    std::queue<ITask*> taskPending;\n    pthread_mutex_t mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t condTaskPending = PTHREAD_COND_INITIALIZER;\n\n    std::list<ITask*> taskRunning;\n    pthread_mutex_t mutTaskRunning = PTHREAD_MUTEX_INITIALIZER;\n    sem_t counterTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV2A(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV2A(const MyExecServiceV2A& other) = delete;\n    MyExecServiceV2A(const MyExecServiceV2A&& other) = delete;\n    void operator=(const MyExecServiceV2A& other) = delete;\n    void operator=(const MyExecServiceV2A&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n        condTaskPending = PTHREAD_COND_INITIALIZER;\n\n        mutTaskRunning = PTHREAD_MUTEX_INITIALIZER;\n        sem_init(&counterTaskRunning, 0, 0);\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            pthread_create(&th, nullptr, &threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        pthread_mutex_lock(&mutTaskPending);\n        taskPending.push(task);\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_signal(&condTaskPending);\n    }\n\n\n    void waitTaskDone() {\n        bool done = false;\n\n        for 
(;;) {\n            sem_wait(&counterTaskRunning);\n\n            pthread_mutex_lock(&mutTaskPending);\n            pthread_mutex_lock(&mutTaskRunning);\n\n            if (taskPending.empty() && taskRunning.empty()) {\n                done = true;\n            }\n\n            pthread_mutex_unlock(&mutTaskRunning);\n            pthread_mutex_unlock(&mutTaskPending);\n\n            if (done) {\n                break;\n            }\n        }\n    }\n\n\n    void shutdown() {\n        pthread_mutex_lock(&mutTaskPending);\n\n        forceThreadShutdown = true;\n        std::queue<ITask*>().swap(taskPending);\n\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_broadcast(&condTaskPending);\n\n        for (auto&& th : lstTh) {\n            pthread_join(th, nullptr);\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n\n        pthread_mutex_destroy(&mutTaskPending);\n        pthread_cond_destroy(&condTaskPending);\n        pthread_mutex_destroy(&mutTaskRunning);\n        sem_destroy(&counterTaskRunning);\n    }\n\n\nprivate:\n    static void* threadWorkerFunc(void* argVoid) {\n        auto thisPtr = (MyExecServiceV2A*) argVoid;\n\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& taskRunning = thisPtr->taskRunning;\n        auto&& mutTaskRunning = thisPtr->mutTaskRunning;\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = nullptr;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                pthread_mutex_lock(&mutTaskPending);\n\n                while (taskPending.empty() and false == forceThreadShutdown) {\n                    pthread_cond_wait(&condTaskPending, &mutTaskPending);\n                }\n\n                if (forceThreadShutdown) {\n 
                   pthread_mutex_unlock(&mutTaskPending);\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                // PUSH IT TO THE RUNNING QUEUE\n                pthread_mutex_lock(&mutTaskRunning);\n                taskRunning.push_back(task);\n                pthread_mutex_unlock(&mutTaskRunning);\n\n                pthread_mutex_unlock(&mutTaskPending);\n            }\n\n            // DO THE TASK\n            task->run();\n\n            // REMOVE IT FROM THE RUNNING QUEUE\n            pthread_mutex_lock(&mutTaskRunning);\n            taskRunning.remove(task);\n            pthread_mutex_unlock(&mutTaskRunning);\n\n            sem_post(&counterTaskRunning);\n        }\n\n        pthread_exit(nullptr);\n        return nullptr;\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V2A_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/exer08-exec-service-v2b.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 2B: The executor service storing running tasks\n- Method \"waitTaskDone\" uses a condition variable to synchronize.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V2B_HPP_\n#define _MY_EXEC_SERVICE_V2B_HPP_\n\n\n\n#include <vector>\n#include <list>\n#include <queue>\n#include <pthread.h>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV2B {\n\nprivate:\n    int numThreads = 0;\n    std::vector<pthread_t> lstTh;\n\n    std::queue<ITask*> taskPending;\n    pthread_mutex_t mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t condTaskPending = PTHREAD_COND_INITIALIZER;\n\n    std::list<ITask*> taskRunning;\n    pthread_mutex_t mutTaskRunning = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t condTaskRunning = PTHREAD_COND_INITIALIZER;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV2B(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV2B(const MyExecServiceV2B& other) = delete;\n    MyExecServiceV2B(const MyExecServiceV2B&& other) = delete;\n    void operator=(const MyExecServiceV2B& other) = delete;\n    void operator=(const MyExecServiceV2B&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n        condTaskPending = PTHREAD_COND_INITIALIZER;\n\n        mutTaskRunning = PTHREAD_MUTEX_INITIALIZER;\n        condTaskRunning = PTHREAD_COND_INITIALIZER;\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            pthread_create(&th, nullptr, &threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        pthread_mutex_lock(&mutTaskPending);\n        taskPending.push(task);\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_signal(&condTaskPending);\n    }\n\n\n    void waitTaskDone() {\n        bool done 
= false;\n\n        for (;;) {\n            pthread_mutex_lock(&mutTaskPending);\n\n            if (taskPending.empty()) {\n                pthread_mutex_lock(&mutTaskRunning);\n\n                while (false == taskRunning.empty())\n                    pthread_cond_wait(&condTaskRunning, &mutTaskRunning);\n\n                pthread_mutex_unlock(&mutTaskRunning);\n\n                done = true;\n            }\n\n            pthread_mutex_unlock(&mutTaskPending);\n\n            if (done) {\n                break;\n            }\n        }\n    }\n\n\n    void shutdown() {\n        pthread_mutex_lock(&mutTaskPending);\n\n        forceThreadShutdown = true;\n        std::queue<ITask*>().swap(taskPending);\n\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_broadcast(&condTaskPending);\n\n        for (auto&& th : lstTh) {\n            pthread_join(th, nullptr);\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n\n        pthread_mutex_destroy(&mutTaskPending);\n        pthread_cond_destroy(&condTaskPending);\n        pthread_mutex_destroy(&mutTaskRunning);\n        pthread_cond_destroy(&condTaskRunning);\n    }\n\n\nprivate:\n    static void* threadWorkerFunc(void* argVoid) {\n        auto thisPtr = (MyExecServiceV2B*) argVoid;\n\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& taskRunning = thisPtr->taskRunning;\n        auto&& mutTaskRunning = thisPtr->mutTaskRunning;\n        auto&& condTaskRunning = thisPtr->condTaskRunning;\n\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = nullptr;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                pthread_mutex_lock(&mutTaskPending);\n\n                while (taskPending.empty() and false == forceThreadShutdown) {\n                    
pthread_cond_wait(&condTaskPending, &mutTaskPending);\n                }\n\n                if (forceThreadShutdown) {\n                    pthread_mutex_unlock(&mutTaskPending);\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                // PUSH IT TO THE RUNNING QUEUE\n                pthread_mutex_lock(&mutTaskRunning);\n                taskRunning.push_back(task);\n                pthread_mutex_unlock(&mutTaskRunning);\n\n                pthread_mutex_unlock(&mutTaskPending);\n            }\n\n            // DO THE TASK\n            task->run();\n\n            // REMOVE IT FROM THE RUNNING QUEUE\n            pthread_mutex_lock(&mutTaskRunning);\n            taskRunning.remove(task);\n            pthread_mutex_unlock(&mutTaskRunning);\n\n            pthread_cond_signal(&condTaskRunning);\n        }\n\n        pthread_exit(nullptr);\n        return nullptr;\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V2B_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/exerex-countdown-timer-a.cpp",
    "content": "/*\nCOUNTDOWN TIMER\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <time.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nchar* buffer = nullptr;\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\npthread_cond_t cond = PTHREAD_COND_INITIALIZER;\n\n\n\nvoid* funcUserInput(void*) {\n    cin.getline(buffer, 1024);\n    pthread_cond_signal(&cond);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\n/*\nReturn true if no timeout. Otherwise, return false.\n*/\nbool waitForTime(const int waitTime) {\n    int ret = 0;\n\n    timespec ts;\n    clock_gettime(CLOCK_REALTIME, &ts);\n    ts.tv_sec += waitTime;\n\n    pthread_mutex_lock(&mut);\n\n    ret = pthread_cond_timedwait(&cond, &mut, &ts);\n\n    pthread_mutex_unlock(&mut);\n\n    if (0 != ret) {\n        // if (ETIMEDOUT == ret) return false;\n        return false;\n    }\n\n    return true;\n}\n\n\n\nint main() {\n    constexpr int SECONDS = 5;\n\n    char buff[1024] = { 0 };\n    buffer = buff;\n\n    pthread_t tid;\n    int ret = 0;\n\n\n    cout << \"You have \" << SECONDS << \" seconds to write anything you like in one line.\" << endl;\n    cout << \"Press enter to start.\" << endl;\n    cin.getline(buffer, 1024);\n    cout << \"START!!!\" << endl << endl;\n\n\n    ret = pthread_create(&tid, nullptr, &funcUserInput, nullptr);\n\n\n    if (waitForTime(SECONDS)) {\n        cout << \"\\nYou completed before the deadline.\" << endl;\n    }\n    else {\n        ret = pthread_cancel(tid);  // Kill thread\n        cout << \"\\n\\nTIMEOUT!!!\" << endl;\n    }\n\n\n    ret = pthread_join(tid, nullptr);\n\n\n    ret = pthread_mutex_destroy(&mut);\n    ret = pthread_cond_destroy(&cond);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/exerex-countdown-timer-b.cpp",
    "content": "/*\nCOUNTDOWN TIMER\n*/\n\n\n#include <iostream>\n#include <unistd.h>\n#include <time.h>\n#include <pthread.h>\nusing namespace std;\n\n\n\nchar* buffer = nullptr;\n\n\n\nvoid* userInputFunc(void*) {\n    cin.getline(buffer, 1024);\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    constexpr int SECONDS = 5;\n\n    char buff[1024] = { 0 };\n    buffer = buff;\n\n    pthread_t tid;\n    timespec ts;\n\n    int ret = 0;\n\n\n    cout << \"You have \" << SECONDS << \" seconds to write anything you like in one line.\" << endl;\n    cout << \"Press enter to start.\" << endl;\n    cin.getline(buffer, 1024);\n    cout << \"START!!!\" << endl << endl;\n\n\n    ret = pthread_create(&tid, nullptr, &userInputFunc, nullptr);\n\n\n    clock_gettime(CLOCK_REALTIME, &ts);\n    ts.tv_sec += SECONDS;\n\n\n    if (ret = pthread_timedjoin_np(tid, nullptr, &ts)) {\n        ret = pthread_cancel(tid);  // Kill thread\n        cout << \"\\n\\nTIMEOUT!!!\" << endl;\n    }\n    else {\n        cout << \"\\nYou completed before the deadline.\" << endl;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-pthread/mylib-blockingqueue.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-blockingqueue.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The blocking queue implementation in C++11 POSIX threading\n*\n******************************************************/\n\n\n\n#ifndef _MYLIB_BLOCKING_QUEUE_HPP_\n#define _MYLIB_BLOCKING_QUEUE_HPP_\n\n\n\n#include <limits>\n#include <queue>\n#include <pthread.h>\n\n\n\nnamespace mylib\n{\n\n\n\ntemplate <typename T>\nclass BlockingQueue {\n\nprivate:\n    pthread_cond_t condEmpty = PTHREAD_COND_INITIALIZER;\n    pthread_cond_t condFull = PTHREAD_COND_INITIALIZER;\n    pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n    size_t capacity;\n    std::queue<T> q;\n\n    struct PendingData {\n        BlockingQueue<T> * thisPtr;\n        const T data;\n\n        PendingData(BlockingQueue<T> * thisPtr, const T data)\n            : thisPtr(thisPtr), data(data) { }\n    };\n\n\npublic:\n    BlockingQueue() : capacity(std::numeric_limits<size_t>::max()) {\n    }\n\n\n    BlockingQueue(size_t capacity) : capacity(capacity) {\n    }\n\n\n    ~BlockingQueue() {\n        pthread_cond_destroy(&condEmpty);\n        pthread_cond_destroy(&condFull);\n        pthread_mutex_destroy(&mut);\n    }\n\n\n    BlockingQueue(const BlockingQueue& other) = delete;\n    BlockingQueue(const BlockingQueue&& other) = delete;\n    void operator=(const BlockingQueue& other) = delete;\n    void operator=(const BlockingQueue&& other) = delete;\n\n\n    bool empty() const {\n        return q.empty();\n    }\n\n\n    size_t size() const {\n        return q.size();\n    }\n\n\n    // sync enqueue\n    void put(const T& value) {\n        int tmp = 0;\n\n        tmp = pthread_mutex_lock(&mut);\n\n        while (q.size() >= capacity) {\n            tmp = pthread_cond_wait(&condFull, &mut);\n        }\n\n        q.push(value);\n\n        
tmp = pthread_mutex_unlock(&mut);\n        tmp = pthread_cond_signal(&condEmpty);\n    }\n\n\n    // sync dequeue\n    T take() {\n        T result;\n        int tmp = 0;\n\n        tmp = pthread_mutex_lock(&mut);\n\n        while (q.empty()) {\n            // Queue is empty, must wait for 'put'\n            tmp = pthread_cond_wait(&condEmpty, &mut);\n        }\n\n        result = q.front();\n        q.pop();\n\n        tmp = pthread_mutex_unlock(&mut);\n        tmp = pthread_cond_signal(&condFull);\n\n        return result;\n    }\n\n\n    // async enqueue\n    void add(const T& value) {\n        // Note: For asynchronous operations, we should use a long-live background thread\n        // instead of using a temporary thread\n        pthread_t tid;\n        int tmp;\n        auto arg = new PendingData(this, value);\n        tmp = pthread_create(&tid, nullptr, &BlockingQueue<T>::putPending, arg);\n        tmp = pthread_detach(tid);\n    }\n\n\n    // returns false if queue is empty, otherwise returns true and assigns the result\n    bool peek(T& result) const {\n        bool ret = false;\n\n        int tmp;\n        tmp = pthread_mutex_lock(&mut);\n\n        if (false == q.empty()) {\n            result = q.front();\n            ret = true;\n        }\n\n        tmp = pthread_mutex_unlock(&mut);\n        return ret;\n    }\n\n\n    void clear() {\n        int tmp;\n        tmp = pthread_mutex_lock(&mut);\n        std::queue<T>().swap(q);\n        tmp = pthread_mutex_unlock(&mut);\n    }\n\n\nprivate:\n    static void* putPending(void* argVoid) {\n        auto arg = (PendingData*) argVoid;\n        arg->thisPtr->put(arg->data);\n        delete arg;\n        arg = nullptr;\n        pthread_exit(nullptr);\n        return nullptr;\n    }\n\n}; // BlockingQueue\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_BLOCKING_QUEUE_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/mylib-execservice.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-execservice.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The executor service implementation in C++11 POSIX threading\n*\n******************************************************/\n\n\n\n/*\nCopy code from \"MyExecServiceV1B\"\n*/\n\n\n#ifndef _MYLIB_EXEC_SERVICE_HPP_\n#define _MYLIB_EXEC_SERVICE_HPP_\n\n\n\n#include <vector>\n#include <queue>\n#include <functional>\n#include <pthread.h>\n\n\n\nnamespace mylib {\n\n\n\nclass ExecService {\n\npublic:\n    using taskFunc = std::function<void()>;\n\n\nprivate:\n    int numThreads = 0;\n    std::vector<pthread_t> lstTh;\n\n    std::queue<taskFunc> taskPending;\n    pthread_mutex_t mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t condTaskPending = PTHREAD_COND_INITIALIZER;\n\n    int counterTaskRunning;\n    pthread_mutex_t mutTaskRunning = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t condTaskRunning = PTHREAD_COND_INITIALIZER;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    ExecService(int numThreads) {\n        init(numThreads);\n    }\n\n\n    ExecService(const ExecService& other) = delete;\n    ExecService(const ExecService&& other) = delete;\n    void operator=(const ExecService& other) = delete;\n    void operator=(const ExecService&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        mutTaskPending = PTHREAD_MUTEX_INITIALIZER;\n        condTaskPending = PTHREAD_COND_INITIALIZER;\n\n        mutTaskRunning = PTHREAD_MUTEX_INITIALIZER;\n        condTaskRunning = PTHREAD_COND_INITIALIZER;\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            pthread_create(&th, nullptr, &threadWorkerFunc, 
this);\n        }\n    }\n\n\npublic:\n    void submit(taskFunc task) {\n        pthread_mutex_lock(&mutTaskPending);\n        taskPending.push(task);\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_signal(&condTaskPending);\n    }\n\n\n    void waitTaskDone() {\n        bool done = false;\n\n        for (;;) {\n            pthread_mutex_lock(&mutTaskPending);\n\n            if (taskPending.empty()) {\n                pthread_mutex_lock(&mutTaskRunning);\n\n                while (counterTaskRunning > 0)\n                    pthread_cond_wait(&condTaskRunning, &mutTaskRunning);\n\n                // no pending task and no running task\n                done = true;\n\n                pthread_mutex_unlock(&mutTaskRunning);\n            }\n\n            pthread_mutex_unlock(&mutTaskPending);\n\n            if (done) {\n                break;\n            }\n        }\n    }\n\n\n    void shutdown() {\n        pthread_mutex_lock(&mutTaskPending);\n\n        forceThreadShutdown = true;\n        std::queue<taskFunc>().swap(taskPending);\n\n        pthread_mutex_unlock(&mutTaskPending);\n\n        pthread_cond_broadcast(&condTaskPending);\n\n        for (auto&& th : lstTh) {\n            pthread_join(th, nullptr);\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n\n        pthread_mutex_destroy(&mutTaskPending);\n        pthread_cond_destroy(&condTaskPending);\n        pthread_mutex_destroy(&mutTaskRunning);\n        pthread_cond_destroy(&condTaskRunning);\n    }\n\n\nprivate:\n    static void* threadWorkerFunc(void* argVoid) {\n        auto thisPtr = (ExecService*) argVoid;\n\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n        auto&& mutTaskRunning = thisPtr->mutTaskRunning;\n        auto&& condTaskRunning = thisPtr->condTaskRunning;\n\n        
auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n\n        for (;;) {\n            // WAIT FOR AN AVAILABLE PENDING TASK\n            pthread_mutex_lock(&mutTaskPending);\n\n            while (taskPending.empty() and false == forceThreadShutdown) {\n                pthread_cond_wait(&condTaskPending, &mutTaskPending);\n            }\n\n            if (forceThreadShutdown) {\n                pthread_mutex_unlock(&mutTaskPending);\n                break;\n            }\n\n            // GET THE TASK FROM THE PENDING QUEUE\n            auto task = taskPending.front();\n            taskPending.pop();\n\n            // Update the counter under \"mutTaskRunning\" too, because the decrement\n            // below and \"waitTaskDone\" access it under that mutex\n            pthread_mutex_lock(&mutTaskRunning);\n            ++counterTaskRunning;\n            pthread_mutex_unlock(&mutTaskRunning);\n\n            pthread_mutex_unlock(&mutTaskPending);\n\n            // DO THE TASK\n            task();\n\n            pthread_mutex_lock(&mutTaskRunning);\n\n            --counterTaskRunning;\n            if (0 == counterTaskRunning) {\n                pthread_cond_signal(&condTaskRunning);\n            }\n\n            pthread_mutex_unlock(&mutTaskRunning);\n        }\n\n        pthread_exit(nullptr);\n        return nullptr;\n    }\n\n}; // ExecService\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_EXEC_SERVICE_HPP_\n"
  },
  {
    "path": "cpp/cpp-pthread/mylib-latch.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-latch.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The count-down latch implementation in C++11 POSIX threading\n*\n******************************************************/\n\n\n\n#ifndef _MYLIB_COUNT_DOWN_LATCH_HPP_\n#define _MYLIB_COUNT_DOWN_LATCH_HPP_\n\n\n\n#include <pthread.h>\n\n\n\nnamespace mylib {\n\n\n\nclass CountDownLatch {\n\nprivate:\n    volatile int count;\n    pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n    pthread_cond_t cond = PTHREAD_COND_INITIALIZER;\n\n\npublic:\n    CountDownLatch(unsigned int count) {\n        this->count = count;\n    }\n\n\n    ~CountDownLatch() {\n        int ret;\n        ret = pthread_cond_destroy(&cond);\n        ret = pthread_mutex_destroy(&mut);\n    }\n\n\n    CountDownLatch(const CountDownLatch& other) = delete;\n    CountDownLatch(const CountDownLatch&& other) = delete;\n    void operator=(const CountDownLatch& other) = delete;\n    void operator=(const CountDownLatch&& other) = delete;\n\n\npublic:\n    int getCount() const {\n        return count;\n    }\n\n\n    void countDown() {\n        pthread_mutex_lock(&mut);\n\n        if (count <= 0) {\n            return;\n        }\n\n        --count;\n\n        if (count <= 0) {\n            pthread_cond_broadcast(&cond);\n        }\n\n        pthread_mutex_unlock(&mut);\n    }\n\n\n    void wait() {\n        pthread_mutex_lock(&mut);\n\n        while (count > 0) {\n            pthread_cond_wait(&cond, &mut);\n        }\n\n        pthread_mutex_unlock(&mut);\n    }\n\n}; // CountDownLatch\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_COUNT_DOWN_LATCH_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/demo00.cpp",
    "content": "/*\nINTRODUCTION TO MULTITHREADING\nYou should try running this app several times and see results.\n*/\n\n\n#include <iostream>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doTask() {\n    for (int i = 0; i < 300; ++i)\n        cout << \"B\";\n}\n\n\n\nint main() {\n    std::thread th(&doTask);\n\n    for (int i = 0; i < 300; ++i)\n        cout << \"A\";\n\n    th.join();\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo01a01-hello.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion A01: Using functions\n*/\n\n\n#include <iostream>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doTask() {\n    cout << \"Hello from example thread\" << endl;\n}\n\n\n\nint main() {\n    std::thread th(&doTask);\n\n    cout << \"Hello from main thread\" << endl;\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo01a02-hello.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion A02: Using functions allowing passing 2 arguments\n*/\n\n\n#include <iostream>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doTask(char const* message, int number) {\n    cout << message << \" \" << number << endl;\n}\n\n\n\nint main() {\n    std::thread th(&doTask, \"Good day\", 19);\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo01b-hello-class01.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion B: Using class methods\n*/\n\n\n#include <iostream>\n#include <string>\n#include <thread>\nusing namespace std;\n\n\n\nclass Example {\npublic:\n    void doTask(string message) {\n        cout << message << endl;\n    }\n};\n\n\n\nint main() {\n    Example example;\n\n    std::thread th(&Example::doTask, &example, \"Good day\");\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo01b-hello-class02.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion B: Using class methods\n*/\n\n\n#include <iostream>\n#include <string>\n#include <thread>\nusing namespace std;\n\n\n\nclass Example {\npublic:\n    void run() {\n        std::thread th(&Example::doTask, this, \"Good day\");\n        th.join();\n    }\n\nprivate:\n    void doTask(string message) {\n        cout << message << endl;\n    }\n};\n\n\n\nint main() {\n    Example example;\n    example.run();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo01b-hello-class03.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion B: Using class methods\n*/\n\n\n#include <iostream>\n#include <string>\n#include <thread>\nusing namespace std;\n\n\n\nclass Example {\npublic:\n    void run() {\n        std::thread th(&Example::doTask, \"Good day\");\n        th.join();\n    }\n\nprivate:\n    static void doTask(string message) {\n        cout << message << endl;\n    }\n};\n\n\n\nint main() {\n    Example example;\n    example.run();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo01b-hello-functor.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion B: Using functors\n*/\n\n\n#include <iostream>\n#include <string>\n#include <thread>\nusing namespace std;\n\n\n\nclass Example {\npublic:\n    void operator()(string message) {\n        cout << message << endl;\n    }\n};\n\n\n\nint main() {\n    Example example;\n\n    std::thread th(example, \"Good day\");\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo01c-hello-lambda.cpp",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion C: Using lambdas\n*/\n\n\n#include <iostream>\n#include <string>\n#include <thread>\nusing namespace std;\n\n\n\nint main() {\n    auto doTask = [](string message) {\n        cout << message << endl;\n    };\n\n    std::thread th(doTask, \"Good day\");\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo02-join.cpp",
    "content": "/*\nTHREAD JOINS\n*/\n\n\n#include <iostream>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doHeavyTask() {\n    // Do a heavy task, which takes a little time\n    for (int i = 0; i < 2000000000; ++i);\n\n    cout << \"Done!\" << endl;\n}\n\n\n\nint main() {\n    std::thread th(&doHeavyTask);\n\n    th.join();\n\n    cout << \"Good bye!\" << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo03a-pass-arg.cpp",
    "content": "/*\nPASSING ARGUMENTS\nVersion A: Passing multiple arguments with various data types\n*/\n\n\n#include <iostream>\n#include <cstdio>\n#include <string>\n#include <thread>\nusing namespace std;\n\n\n\nstruct Point {\n    int x;\n    int y;\n\n    Point(int x, int y): x(x), y(y) { }\n};\n\n\n\nvoid doTask(int a, double b, string c, char const* d, Point e) {\n    char buffer[50] = { 0 };\n    std::sprintf(buffer, \"%d  %.1f  %s  %s  (%d %d)\", a, b, c.data(), d, e.x, e.y);\n    cout << buffer << endl;\n}\n\n\n\nint main() {\n    auto thFoo = std::thread(&doTask, 1, 2, \"red\", \"red\", Point(0, 0));\n    auto thBar = std::thread(&doTask, 3, 4, \"blue\", \"blue\", Point(9, 9));\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo03b-pass-arg.cpp",
    "content": "/*\nPASSING ARGUMENTS\nVersion B: Passing constant references\n*/\n\n\n#include <iostream>\n#include <string>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doTask(const string& msg) {\n    cout << msg << endl;\n}\n\n\n\nint main() {\n    auto thFoo = std::thread(&doTask, \"foo\");\n    auto thBar = std::thread(&doTask, \"bar\");\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo03c-pass-arg.cpp",
    "content": "/*\nPASSING ARGUMENTS\nVersion C: Passing normal references\n*/\n\n\n#include <iostream>\n#include <string>\n#include <thread>\nusing namespace std;\n\n\n\n/*\nThe arguments to the thread function are moved or copied by value.\nIf a reference argument needs to be passed to the thread function,\nit has to be wrapped (e.g. with std::ref or std::cref).\n\nPassing references to thread functions may cause memory violation (e.g. when object is destroyed).\nBy wrapping reference arguments with the class template std::reference_wrapper\n(using the function templates std::ref and std::cref), you explicitly express your intentions.\n*/\n\n\n\nvoid doTask(string& msg) {\n    cout << msg << endl;\n}\n\n\n\nint main() {\n    string a = \"lorem ipsum\";\n    string b = \"dolor amet\";\n\n    // auto thFoo = std::thread(doTask, a); // error\n\n    auto thFoo = std::thread(&doTask, std::ref(a));\n    auto thBar = std::thread(&doTask, std::ref(b));\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo04a-sleep.cpp",
    "content": "/*\nSLEEP\nVersion A: Sleep for a specific duration\n*/\n\n\n#include <iostream>\n#include <string>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doTask(string name) {\n    cout << name << \" is sleeping\" << endl;\n    std::this_thread::sleep_for(std::chrono::seconds(3));\n    cout << name << \" wakes up\" << endl;\n}\n\n\n\nint main() {\n    auto thFoo = std::thread(&doTask, \"foo\");\n\n    thFoo.join();\n\n    cout << \"Good bye\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo04b-sleep.cpp",
    "content": "/*\nSLEEP\nVersion B: Sleep until a specific time point\n*/\n\n\n#include <iostream>\n#include <string>\n#include <chrono>\n#include <thread>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\nusing sysclock = std::chrono::system_clock;\n\n\n\nvoid doTask(string name, sysclock::time_point tpWakeUp) {\n    std::this_thread::sleep_until(tpWakeUp);\n    cout << name << \" wakes up\" << endl;\n}\n\n\n\nint main() {\n    auto tpNow = sysclock::now();\n    auto tpWakeUpFoo = tpNow + std::chrono::seconds(7);\n    auto tpWakeUpBar = tpNow + std::chrono::seconds(3);\n\n    cout << \"foo will sleep until \" << mylib::getTimePointStr(tpWakeUpFoo) << endl;\n    cout << \"bar will sleep until \" << mylib::getTimePointStr(tpWakeUpBar) << endl;\n\n    auto thFoo = std::thread(&doTask, \"foo\", tpWakeUpFoo);\n    auto thBar = std::thread(&doTask, \"bar\", tpWakeUpBar);\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo05-id.cpp",
    "content": "/*\nGETTING THREAD'S ID\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doTask() {\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n    cout << std::this_thread::get_id() << endl;\n}\n\n\n\nint main() {\n    auto thFoo = std::thread(&doTask);\n    auto thBar = std::thread(&doTask);\n\n    cout << \"foo's id: \" << thFoo.get_id() << endl;\n    cout << \"bar's id: \" << thBar.get_id() << endl;\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo06a-list-threads.cpp",
    "content": "/*\nLIST OF MULTIPLE THREADS\nVersion A: Using standard arrays\n*/\n\n\n#include <iostream>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doTask(int index) {\n    cout << index;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 5;\n\n    std::thread lstTh[NUM_THREADS];\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh[i] = std::thread(&doTask, i);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo06b-list-threads.cpp",
    "content": "/*\nLIST OF MULTIPLE THREADS\nVersion B: Using the std::vector\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doTask(int index) {\n    cout << index;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 5;\n\n    vector<std::thread> lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.push_back(std::thread(&doTask, i));\n\n        // or...\n        // auto th = std::thread(&doTask, i);\n        // lstTh.push_back(std::move(th)); // Because std::thread does not have copy constructors\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo07-terminate.cpp",
    "content": "/*\nFORCING A THREAD TO TERMINATE (i.e. killing the thread)\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nvolatile bool isRunning;\n\n\n\nvoid doTask() {\n    while (isRunning) {\n        cout << \"Running...\" << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(2));\n    }\n}\n\n\n\nint main() {\n    isRunning = true;\n    auto th = std::thread(&doTask);\n\n    std::this_thread::sleep_for(std::chrono::seconds(6));\n    isRunning = false;\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo08a-return-value.cpp",
    "content": "/*\nGETTING RETURNED VALUES FROM THREADS\nVersion A: Using pointers or references (traditional way)\n*/\n\n\n#include <iostream>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doubleValue(int arg, int* res) {\n    (*res) = arg * 2;\n}\n\n\n\nvoid squareValue(int arg, int& res) {\n    res = arg * arg;\n}\n\n\n\nint main() {\n    int result[3];\n\n    auto thFoo = std::thread(&doubleValue, 5, &result[0]);\n    auto thBar = std::thread(&doubleValue, 80, &result[1]);\n    auto thEgg = std::thread(&squareValue, 7, std::ref(result[2]));\n\n    thFoo.join();\n    thBar.join();\n    thEgg.join();\n\n    cout << result[0] << endl;\n    cout << result[1] << endl;\n    cout << result[2] << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo08b-return-value.cpp",
    "content": "/*\nGETTING RETURNED VALUES FROM THREADS\nVersion B: Using std::future with the promise\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <future>\nusing namespace std;\n\n\n\nvoid doubleValue(int arg, std::promise<int> & prom) {\n    int result = arg * 2;\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n\n    prom.set_value(result);\n\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n    cout << \"This thread is exiting\" << endl;\n}\n\n\n\nint main() {\n    auto prom = std::promise<int>();\n    auto fut = prom.get_future(); // fut is std::future<int>\n\n    auto th = std::thread(&doubleValue, 5, std::ref(prom));\n\n    // Block until prom.set_value() executes\n    int result = fut.get();\n\n    cout << result << endl;\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo08c-return-value.cpp",
    "content": "/*\nGETTING RETURNED VALUES FROM THREADS\nVersion C: Using std::future with the packaged_task\n*/\n\n\n#include <iostream>\n#include <string>\n#include <utility>\n#include <chrono>\n#include <thread>\n#include <future>\nusing namespace std;\n\n\n\nstring doubleValue(int arg) {\n    int result = arg * 2;\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n\n    cout << \"This thread is exiting\" << endl;\n    return to_string(result);\n}\n\n\n\nint main() {\n    auto ptask = std::packaged_task<string(int)>(doubleValue);\n    auto fut = ptask.get_future(); // fut is std::future<string>\n\n    auto th = std::thread(std::move(ptask), 5);\n\n    // Block until ptask finishes\n    string result = fut.get();\n\n    cout << result << endl;\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo09-detach.cpp",
    "content": "/*\nTHREAD DETACHING\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nvoid foo() {\n    cout << \"foo is starting...\" << endl;\n\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n\n    cout << \"foo is exiting...\" << endl;\n}\n\n\n\nint main() {\n    auto thFoo = std::thread(&foo);\n    thFoo.detach();\n\n\n    // If I comment this statement,\n    // thFoo will be forced into terminating with main thread\n    std::this_thread::sleep_for(std::chrono::seconds(3));\n\n\n    cout << \"Main thread is exiting\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo10-yield.cpp",
    "content": "/*\nTHREAD YIELDING\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\nusing chrmicro = std::chrono::microseconds;\nusing hrclock = mylib::HiResClock;\n\n\n\nvoid littleSleep(int us) {\n    auto tpStart = hrclock::now();\n    auto tpEnd = tpStart + chrmicro(us);\n\n    do {\n        std::this_thread::yield();\n    }\n    while (hrclock::now() < tpEnd);\n}\n\n\n\nint main() {\n    auto tpStartMeasure = hrclock::now();\n\n    littleSleep(130);\n\n    auto timeElapsed = hrclock::getTimeSpan<chrmicro>(tpStartMeasure);\n\n    cout << \"Elapsed time: \" << timeElapsed.count() << \" microseonds\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo11a-exec-service.cpp",
    "content": "/*\nEXECUTOR SERVICES AND THREAD POOLS\n\nExecutor services in C++ std threading are not supported by default.\nSo, I use mylib::ExecService for this demonstration.\n*/\n\n\n#include <iostream>\n#include \"mylib-execservice.hpp\"\nusing namespace std;\n\n\n\nvoid doTask() {\n    cout << \"Hello the Executor Service\" << endl;\n}\n\n\n\nclass MyFunctor {\npublic:\n    void operator()() {\n        cout << \"Hello Multithreading\" << endl;\n    }\n};\n\n\n\nint main() {\n    // INIT THE EXECUTOR SERVICE WITH 2 THREADS\n    auto execService = mylib::ExecService(2);\n\n\n    // SUBMIT\n    execService.submit([] { cout << \"Hello World\" << endl; });\n\n    execService.submit(&doTask);\n\n    execService.submit(MyFunctor());\n\n\n    // WAIT FOR THE COMPLETION OF ALL TASKS AND SHUTDOWN EXECUTOR SERVICE\n    execService.waitTaskDone();\n    execService.shutdown();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo11b-exec-service.cpp",
    "content": "/*\nEXECUTOR SERVICES AND THREAD POOLS\n\nExecutor services in C++ std threading are not supported by default.\nSo, I use mylib::ExecService for this demonstration.\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include \"mylib-execservice.hpp\"\nusing namespace std;\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 2;\n    constexpr int NUM_TASKS = 5;\n\n    auto execService = mylib::ExecService(NUM_THREADS);\n\n    for (int i = 0; i < NUM_TASKS; ++i) {\n        execService.submit([=] {\n            char id = 'A' + i;\n            cout << \"Task \" << id << \" is starting\" << endl;\n            std::this_thread::sleep_for(std::chrono::seconds(3));\n            cout << \"Task \" << id << \" is completed\" << endl;\n        });\n    }\n\n    cout << \"All tasks are submitted\" << endl;\n\n    execService.waitTaskDone();\n    cout << \"All tasks are completed\" << endl;\n\n    execService.shutdown();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo12a-race-condition.cpp",
    "content": "/*\nRACE CONDITIONS\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nvoid doTask(int index) {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n    cout << index;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 4;\n    vector<std::thread> lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.push_back(\n            std::thread(&doTask, i)\n        );\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo12b01-data-race-single.cpp",
    "content": "/*\nDATA RACES\nVersion 01: Without multithreading\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <numeric>\nusing namespace std;\n\n\n\nint getResult(int N) {\n    vector<bool> a;\n    a.resize(N + 1, false);\n\n    for (int i = 1; i <= N; ++i)\n        if (0 == i % 2 || 0 == i % 3)\n            a[i] = true;\n\n    // result = sum of a (i.e. counting number of true values in a)\n    int result = std::accumulate(a.begin(), a.end(), 0);\n    return result;\n}\n\n\n\nint main() {\n    constexpr int N = 8;\n\n    int result = getResult(N);\n\n    cout << \"Number of integers that are divisible by 2 or 3 is: \" << result << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo12b02-data-race-multi.cpp",
    "content": "/*\nDATA RACES\nVersion 02: Multithreading\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <numeric>\n#include <thread>\nusing namespace std;\n\n\n\nvoid markDiv2(vector<bool> & a, int N) {\n    for (int i = 2; i <= N; i += 2)\n        a[i] = true;\n}\n\n\n\nvoid markDiv3(vector<bool> & a, int N) {\n    for (int i = 3; i <= N; i += 3)\n        a[i] = true;\n}\n\n\n\nint main() {\n    constexpr int N = 8;\n\n    vector<bool> a;\n    a.resize(N + 1, false);\n\n    auto thDiv2 = std::thread(&markDiv2, std::ref(a), N);\n    auto thDiv3 = std::thread(&markDiv3, std::ref(a), N);\n    thDiv2.join();\n    thDiv3.join();\n\n    // result = sum of a (i.e. counting numbers of true values in a)\n    int result = std::accumulate(a.begin(), a.end(), 0);\n\n    cout << \"Number of integers that are divisible by 2 or 3 is: \" << result << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo12c01-race-cond-data-race.cpp",
    "content": "/*\nRACE CONDITIONS AND DATA RACES\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nint counter = 0;\n\n\n\nvoid increaseCounter() {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    for (int i = 0; i < 1000; ++i) {\n        counter += 1;\n    }\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 16;\n    std::thread lstTh[NUM_THREADS];\n\n    for (auto&& th : lstTh) {\n        th = std::thread(&increaseCounter);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // We are NOT sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo12c02-race-cond-data-race.cpp",
    "content": "/*\nRACE CONDITIONS AND DATA RACES\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nusing sysclock = std::chrono::system_clock;\n\n\n\nint counter = 0;\n\n\n\nvoid doTaskA(sysclock::time_point timePointWakeUp) {\n    std::this_thread::sleep_until(timePointWakeUp);\n\n    while (counter < 10)\n        ++counter;\n\n    cout << \"A won !!!\" << endl;\n}\n\n\n\nvoid doTaskB(sysclock::time_point timePointWakeUp) {\n    std::this_thread::sleep_until(timePointWakeUp);\n\n    while (counter > -10)\n        --counter;\n\n    cout << \"B won !!!\" << endl;\n}\n\n\n\nint main() {\n    auto tpNow = sysclock::now();\n    auto tpWakeUp = tpNow + std::chrono::seconds(1);\n\n    auto thA = std::thread(&doTaskA, tpWakeUp);\n    auto thB = std::thread(&doTaskB, tpWakeUp);\n\n    thA.join();\n    thB.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo13a-mutex.cpp",
    "content": "/*\nMUTEXES\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    mut.lock();\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n\n    mut.unlock();\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 16;\n    std::thread lstTh[NUM_THREADS];\n\n    for (auto&& th : lstTh) {\n        th = std::thread(&doTask);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo13b01-mutex.cpp",
    "content": "/*\nMUTEXES\n\nstd::lock_guard is a class template, which implements the RAII for mutex.\nIt wraps the mutex inside it’s object and locks the attached mutex in its constructor.\nWhen it’s destructor is called it releases the mutex.\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    std::lock_guard<std::mutex> lk(mut);\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n\n    // Once function exits, then destructor of lk object will be called.\n    // In destructor it unlocks the mutex.\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 16;\n    std::thread lstTh[NUM_THREADS];\n\n    for (auto&& th : lstTh) {\n        th = std::thread(&doTask);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo13b02-mutex.cpp",
    "content": "/*\nMUTEXES\n\n- std::lock_guard will be locked only once on construction and unlocked on destruction.\n- In contrast to std::lock_guard, std::unique_lock can be:\n  + created without immediately locking, and\n  + unlocked at any point in its existence.\n\nFurthermore, C++17 introduces a new lock class called std::scoped_lock.\nThe std::scoped_lock is a strictly superior version of std::lock_guard that\nlocks an arbitrary number of mutexes all at once.\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    std::unique_lock<std::mutex> lk(mut);\n    // std::scoped_lock<std::mutex> lk(mut);\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n\n    lk.unlock();\n\n    // Do something...\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 16;\n    std::thread lstTh[NUM_THREADS];\n\n    for (auto&& th : lstTh) {\n        th = std::thread(&doTask);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo13c-mutex-trylock.cpp",
    "content": "/*\nMUTEXES\nLocking with a nonblocking mutex\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    if (false == mut.try_lock()) {\n        return;\n    }\n\n    for (int i = 0; i < 10000; ++i)\n        ++counter;\n\n    mut.unlock();\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n    std::thread lstTh[NUM_THREADS];\n\n    for (auto&& th : lstTh) {\n        th = std::thread(&doTask);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // counter can be 10000, 20000 or 30000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo14-synchronized-block.cpp",
    "content": "/*\nSYNCHRONIZED BLOCKS\n\nSynchronized blocks in C++ std threading are not supported by default.\nTo demonstate synchronized blocks, I use std::unique_lock (or std::lock_guard, std::scoped_lock).\n\nNow, let's see the code:\n    {\n        std::unique_lock lk(mut);\n        // Do something in the critical section\n    }\n\nThe code block above is protected by a lock/mutex. That means it is synchronized on thread execution.\nThis code block is called \"the synchronized block\".\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::mutex mut;\nint counter = 0;\n\n\n\nvoid doTask() {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    // This is the \"synchronized block\"\n    {\n        std::unique_lock<std::mutex> lk(mut);\n\n        for (int i = 0; i < 1000; ++i)\n            ++counter;\n    }\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 16;\n    std::thread lstTh[NUM_THREADS];\n\n    for (auto&& th : lstTh) {\n        th = std::thread(&doTask);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << \"counter = \" << counter << endl;\n    // We are sure that counter = 16000\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo15a-deadlock.cpp",
    "content": "/*\nDEADLOCK\nVersion A\n*/\n\n\n#include <iostream>\n#include <string>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::mutex mut;\n\n\n\nvoid doTask(std::string name) {\n    mut.lock();\n\n    cout << name << \" acquired resource\" << endl;\n\n    // mut.unlock(); // Forget this statement ==> deadlock\n}\n\n\n\nint main() {\n    auto thFoo = std::thread(&doTask, \"foo\");\n    auto thBar = std::thread(&doTask, \"bar\");\n\n    thFoo.join();\n    thBar.join();\n\n    cout << \"You will never see this statement due to deadlock!\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo15b-deadlock.cpp",
    "content": "/*\nDEADLOCK\nVersion B\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::mutex mutResourceA;\nstd::mutex mutResourceB;\n\n\n\nvoid foo() {\n    mutResourceA.lock();\n    cout << \"foo acquired resource A\" << endl;\n\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    mutResourceB.lock();\n    cout << \"foo acquired resource B\" << endl;\n    mutResourceB.unlock();\n\n    mutResourceA.unlock();\n}\n\n\n\nvoid bar() {\n    mutResourceB.lock();\n    cout << \"bar acquired resource B\" << endl;\n\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    mutResourceA.lock();\n    cout << \"bar acquired resource A\" << endl;\n    mutResourceA.unlock();\n\n    mutResourceB.unlock();\n}\n\n\n\nint main() {\n    auto thFoo = std::thread(&foo);\n    auto thBar = std::thread(&bar);\n\n    thFoo.join();\n    thBar.join();\n\n    cout << \"You will never see this statement due to deadlock!\" << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo16-monitor.cpp",
    "content": "/*\nMONITORS\nImplementation of a monitor for managing a counter\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nclass Monitor {\nprivate:\n    std::mutex mut;\n    int* pCounter = nullptr;\n\n\npublic:\n    // Should disable copy/move constructors, copy/move assignment operators\n\n\n    void init(int* pCounter) {\n        this->pCounter = pCounter;\n    }\n\n\n    void increaseCounter() {\n        mut.lock();\n        (*pCounter) += 1;\n        mut.unlock();\n    }\n};\n\n\n\nvoid doTask(Monitor* monitor) {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    for (int i = 0; i < 1000; ++i)\n        monitor->increaseCounter();\n}\n\n\n\nint main() {\n    int counter = 0;\n    Monitor monitor;\n\n    constexpr int NUM_THREADS = 16;\n    std::thread lstTh[NUM_THREADS];\n\n    monitor.init(&counter);\n\n    for (auto&& th : lstTh) {\n        th = std::thread(&doTask, &monitor);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << \"counter = \" << counter << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo17a-reentrant-lock.cpp",
    "content": "/*\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion A: Introduction to reentrant locks\n*/\n\n\n#include <iostream>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::mutex mut;\n\n\n\nvoid doTask() {\n    mut.lock();\n    cout << \"First time acquiring the resource\" << endl;\n\n    mut.lock();\n    cout << \"Second time acquiring the resource\" << endl;\n\n    mut.unlock();\n    mut.unlock();\n}\n\n\n\nint main() {\n    auto th = std::thread(&doTask);\n    /*\n    The thread th shall meet deadlock.\n    So, you will never get output \"Second time the acquiring resource\".\n    */\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo17b-reentrant-lock.cpp",
    "content": "/*\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion B: Solving the problem from version A\n*/\n\n\n#include <iostream>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::recursive_mutex mut;\n\n\n\nvoid doTask() {\n    mut.lock();\n    cout << \"First time acquiring the resource\" << endl;\n\n    mut.lock();\n    cout << \"Second time acquiring the resource\" << endl;\n\n    mut.unlock();\n    mut.unlock();\n}\n\n\n\nvoid doTaskUsingSyncBlock() {\n    using uniquelk = std::unique_lock<std::recursive_mutex>;\n\n    uniquelk lk(mut);\n    cout << \"First time acquiring the resource\" << endl;\n\n    {\n        uniquelk lk(mut);\n        cout << \"Second time acquiring the resource\" << endl;\n    }\n}\n\n\n\nint main() {\n    auto th = std::thread(&doTask);\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo17c-reentrant-lock.cpp",
    "content": "/*\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion C: A multithreaded app example\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::recursive_mutex mut;\n\n\n\nvoid doTask(char name) {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    mut.lock();\n    cout << \"First time \" << name << \" acquiring the resource\" << endl;\n\n    mut.lock();\n    cout << \"Second time \" << name << \" acquiring the resource\" << endl;\n\n    mut.unlock();\n    mut.unlock();\n}\n\n\n\nvoid doTaskUsingSyncBlock(char name) {\n    using uniquelk = std::unique_lock<std::recursive_mutex>;\n\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    {\n        uniquelk lk(mut);\n        cout << \"First time \" << name << \" acquiring the resource\" << endl;\n\n        {\n            uniquelk lk(mut);\n            cout << \"Second time \" << name << \" acquiring the resource\" << endl;\n        }\n    }\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n    std::thread lstTh[NUM_THREADS];\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh[i] = std::thread(&doTask, char(i + 'A'));\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo18a01-barrier.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion A: Cyclic barriers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <chrono>\n#include <thread>\n#include <barrier>\nusing namespace std;\n\n\n\nauto syncPoint = std::barrier(3); // participant count = 3\n\n\n\nvoid processRequest(string userName, int waitTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(waitTime));\n\n    cout << \"Get request from \" << userName << endl;\n    syncPoint.arrive_and_wait();\n\n    cout << \"Process request for \" << userName << endl;\n    syncPoint.arrive_and_wait();\n\n    cout << \"Done \" << userName << endl;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n    std::thread lstTh[NUM_THREADS];\n\n    // tuple<userName, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"lorem\", 1 },\n        { \"ipsum\", 2 },\n        { \"dolor\", 3 }\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        auto&& arg = lstArg[i];\n        lstTh[i] = std::thread(&processRequest, std::get<0>(arg), std::get<1>(arg));\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo18a02-barrier.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion A: Cyclic barriers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <chrono>\n#include <thread>\n#include <barrier>\nusing namespace std;\n\n\n\nauto syncPoint = std::barrier(2); // participant count = 2\n\n\n\nvoid processRequest(string userName, int waitTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(waitTime));\n\n    cout << \"Get request from \" << userName << endl;\n    syncPoint.arrive_and_wait();\n\n    cout << \"Process request for \" << userName << endl;\n    syncPoint.arrive_and_wait();\n\n    cout << \"Done \" << userName << endl;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 4;\n    std::thread lstTh[NUM_THREADS];\n\n    // tuple<userName, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"lorem\", 1 },\n        { \"ipsum\", 3 },\n        { \"dolor\", 3 },\n        { \"amet\", 10 }\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        auto&& arg = lstArg[i];\n        lstTh[i] = std::thread(&processRequest, std::get<0>(arg), std::get<1>(arg));\n    }\n\n    // Thread with userName = \"amet\" shall be FREEZED\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo18a03-barrier.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion A: Cyclic barriers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <chrono>\n#include <thread>\n#include <barrier>\nusing namespace std;\n\n\n\nauto syncPointA = std::barrier(2);\nauto syncPointB = std::barrier(2);\n\n\n\nvoid processRequest(string userName, int waitTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(waitTime));\n\n    cout << \"Get request from \" << userName << endl;\n    syncPointA.arrive_and_wait();\n\n    cout << \"Process request for \" << userName << endl;\n    syncPointB.arrive_and_wait();\n\n    cout << \"Done \" << userName << endl;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 4;\n    std::thread lstTh[NUM_THREADS];\n\n    // tuple<userName, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"lorem\", 1 },\n        { \"ipsum\", 3 },\n        { \"dolor\", 3 },\n        { \"amet\", 10 }\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        auto&& arg = lstArg[i];\n        lstTh[i] = std::thread(&processRequest, std::get<0>(arg), std::get<1>(arg));\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo18b01-latch.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion B: Count-down latches\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <chrono>\n#include <thread>\n#include <latch>\nusing namespace std;\n\n\n\nauto syncPoint = std::latch(3); // participant count = 3\n\n\n\nvoid processRequest(string userName, int waitTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(waitTime));\n\n    cout << \"Get request from \" << userName << endl;\n\n    syncPoint.arrive_and_wait();\n\n    cout << \"Done \" << userName << endl;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n    std::thread lstTh[NUM_THREADS];\n\n    // tuple<userName, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"lorem\", 1 },\n        { \"ipsum\", 2 },\n        { \"dolor\", 3 }\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        auto&& arg = lstArg[i];\n        lstTh[i] = std::thread(&processRequest, std::get<0>(arg), std::get<1>(arg));\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo18b02-latch.cpp",
    "content": "/*\nBARRIERS AND LATCHES\nVersion B: Count-down latches\n\nMain thread waits for 3 child threads to get enough data to progress.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <tuple>\n#include <chrono>\n#include <thread>\n#include <latch>\nusing namespace std;\n\n\n\nconstexpr int NUM_THREADS = 3;\nauto syncPoint = std::latch(NUM_THREADS);\n\n\n\nvoid doTask(string message, int waitTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(waitTime));\n\n    cout << message << endl;\n    syncPoint.count_down();\n\n    std::this_thread::sleep_for(std::chrono::seconds(8));\n    cout << \"Cleanup\" << endl;\n}\n\n\n\nint main() {\n    std::thread lstTh[NUM_THREADS];\n\n    // tuple<message, waitTime>\n    tuple<string,int> lstArg[NUM_THREADS] = {\n        { \"Send request to egg.net to get data\", 6 },\n        { \"Send request to foo.org to get data\", 2 },\n        { \"Send request to bar.com to get data\", 4 }\n    };\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        auto&& arg = lstArg[i];\n        lstTh[i] = std::thread(&doTask, std::get<0>(arg), std::get<1>(arg));\n    }\n\n    syncPoint.wait();\n    cout << \"\\nNow we have enough data to progress to next step\\n\" << endl;\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo19a-read-write-lock.cpp",
    "content": "/*\nREAD-WRITE LOCKS\n*/\n\n\n#include <iostream>\n#include <numeric>\n#include <chrono>\n#include <thread>\n#include <shared_mutex>\n#include \"mylib-random.hpp\"\nusing namespace std;\n\n\n\nvolatile int resource = 0;\nauto rwmut = std::shared_mutex();\n\n\n\nvoid readFunc(int waitTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(waitTime));\n\n    rwmut.lock_shared();\n\n    cout << \"read: \" << resource << endl;\n\n    rwmut.unlock_shared();\n}\n\n\n\nvoid writeFunc(int waitTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(waitTime));\n\n    rwmut.lock();\n\n    resource = mylib::RandInt::get(100);\n    cout << \"write: \" << resource << endl;\n\n    rwmut.unlock();\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS_READ = 10;\n    constexpr int NUM_THREADS_WRITE = 4;\n    constexpr int NUM_ARGS = 3;\n\n    std::thread lstThRead[NUM_THREADS_READ];\n    std::thread lstThWrite[NUM_THREADS_WRITE];\n    int lstArg[NUM_ARGS];\n\n\n    // INITIALIZE\n    // lstArg = { 0, 1, 2, ..., NUM_ARG - 1 }\n    std::iota(lstArg, lstArg + NUM_ARGS, 0);\n\n\n    // CREATE THREADS\n    for (auto&& th : lstThRead) {\n        int arg = lstArg[ mylib::RandInt::get(NUM_ARGS) ];\n        th = std::thread(&readFunc, arg);\n    }\n\n    for (auto&& th : lstThWrite) {\n        int arg = lstArg[ mylib::RandInt::get(NUM_ARGS) ];\n        th = std::thread(&writeFunc, arg);\n    }\n\n\n    // JOIN THREADS\n    for (auto&& th : lstThRead) {\n        th.join();\n    }\n\n    for (auto&& th : lstThWrite) {\n        th.join();\n    }\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo19b-read-write-lock.cpp",
    "content": "/*\nREAD-WRITE LOCKS\n*/\n\n\n#include <iostream>\n#include <numeric>\n#include <chrono>\n#include <thread>\n#include <shared_mutex>\n#include \"mylib-random.hpp\"\nusing namespace std;\n\n\n\nvolatile int resource = 0;\nauto rwmut = std::shared_mutex();\n\n\n\nvoid readFunc(int waitTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(waitTime));\n\n    std::shared_lock lk(rwmut);\n\n    cout << \"read: \" << resource << endl;\n\n    // lk.unlock();\n}\n\n\n\nvoid writeFunc(int waitTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(waitTime));\n\n    std::lock_guard lk(rwmut);\n    // std::unique_lock lk(rwmut);\n\n    resource = mylib::RandInt::get(100);\n    cout << \"write: \" << resource << endl;\n\n    // lk.unlock();\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS_READ = 10;\n    constexpr int NUM_THREADS_WRITE = 4;\n    constexpr int NUM_ARGS = 3;\n\n    std::thread lstThRead[NUM_THREADS_READ];\n    std::thread lstThWrite[NUM_THREADS_WRITE];\n    int lstArg[NUM_ARGS];\n\n\n    // INITIALIZE\n    // lstArg = { 0, 1, 2, ..., NUM_ARG - 1 }\n    std::iota(lstArg, lstArg + NUM_ARGS, 0);\n\n\n    // CREATE THREADS\n    for (auto&& th : lstThRead) {\n        int arg = lstArg[ mylib::RandInt::get(NUM_ARGS) ];\n        th = std::thread(&readFunc, arg);\n    }\n\n    for (auto&& th : lstThWrite) {\n        int arg = lstArg[ mylib::RandInt::get(NUM_ARGS) ];\n        th = std::thread(&writeFunc, arg);\n    }\n\n\n    // JOIN THREADS\n    for (auto&& th : lstThRead) {\n        th.join();\n    }\n\n    for (auto&& th : lstThWrite) {\n        th.join();\n    }\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo20a01-semaphore.cpp",
    "content": "/*\nSEMAPHORES\nVersion A: Paper sheets and packages\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\nauto semPackage = std::counting_semaphore(0);\n\n\n\nvoid makeOneSheet() {\n    for (int i = 0; i < 4; ++i) {\n        cout << \"Make 1 sheet\" << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n        semPackage.release();\n    }\n}\n\n\n\nvoid combineOnePackage() {\n    for (int i = 0; i < 4; ++i) {\n        semPackage.acquire();\n        semPackage.acquire();\n        cout << \"Combine 2 sheets into 1 package\" << endl;\n    }\n}\n\n\n\nint main() {\n    auto thMakeSheetA = std::thread(&makeOneSheet);\n    auto thMakeSheetB = std::thread(&makeOneSheet);\n    auto thCombinePackage = std::thread(&combineOnePackage);\n\n    thMakeSheetA.join();\n    thMakeSheetB.join();\n    thCombinePackage.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo20a02-semaphore.cpp",
    "content": "/*\nSEMAPHORES\nVersion A: Paper sheets and packages\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\nauto semPackage = std::counting_semaphore(0);\nauto semSheet = std::counting_semaphore(2);\n\n\n\nvoid makeOneSheet() {\n    for (int i = 0; i < 2; ++i) {\n        semSheet.acquire();\n        cout << \"Make 1 sheet\" << endl;\n        semPackage.release();\n    }\n}\n\n\n\nvoid combineOnePackage() {\n    for (int i = 0; i < 2; ++i) {\n        semPackage.acquire();\n        semPackage.acquire();\n\n        cout << \"Combine 2 sheets into 1 package\" << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(2));\n\n        semSheet.release();\n        semSheet.release();\n    }\n}\n\n\n\nint main() {\n    auto thMakeSheetA = std::thread(&makeOneSheet);\n    auto thMakeSheetB = std::thread(&makeOneSheet);\n    auto thCombinePackage = std::thread(&combineOnePackage);\n\n    thMakeSheetA.join();\n    thMakeSheetB.join();\n    thCombinePackage.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo20a03-semaphore-deadlock.cpp",
    "content": "/*\nSEMAPHORES\nVersion A: Paper sheets and packages\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\nauto semPackage = std::counting_semaphore(0);\nauto semSheet = std::counting_semaphore(2);\n\n\n\nvoid makeOneSheet() {\n    for (int i = 0; i < 4; ++i) {\n        semSheet.acquire();\n        cout << \"Make 1 sheet\" << endl;\n        semPackage.release();\n    }\n}\n\n\n\nvoid combineOnePackage() {\n    for (int i = 0; i < 4; ++i) {\n        semPackage.acquire();\n        semPackage.acquire();\n\n        cout << \"Combine 2 sheets into 1 package\" << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(2));\n\n        semSheet.release();\n        // Missing one statement: semSheet.release() ==> deadlock\n    }\n}\n\n\n\nint main() {\n    auto thMakeSheetA = std::thread(&makeOneSheet);\n    auto thMakeSheetB = std::thread(&makeOneSheet);\n    auto thCombinePackage = std::thread(&combineOnePackage);\n\n    thMakeSheetA.join();\n    thMakeSheetB.join();\n    thCombinePackage.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo20b-semaphore.cpp",
    "content": "/*\nSEMAPHORES\nVersion B: Tires and chassis\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\nauto semTire = std::counting_semaphore(4);\nauto semChassis = std::counting_semaphore(0);\n\n\n\nvoid makeTire() {\n    for (int i = 0; i < 8; ++i) {\n        semTire.acquire();\n\n        cout << \"Make 1 tire\" << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n\n        semChassis.release();\n    }\n}\n\n\n\nvoid makeChassis() {\n    for (int i = 0; i < 4; ++i) {\n        semChassis.acquire();\n        semChassis.acquire();\n        semChassis.acquire();\n        semChassis.acquire();\n\n        cout << \"Make 1 chassis\" << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(3));\n\n        semTire.release();\n        semTire.release();\n        semTire.release();\n        semTire.release();\n    }\n}\n\n\n\nint main() {\n    auto thTireA = std::thread(&makeTire);\n    auto thTireB = std::thread(&makeTire);\n    auto thChassis = std::thread(&makeChassis);\n\n    thTireA.join();\n    thTireB.join();\n    thChassis.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo21a01-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\nusing namespace std;\n\n\n\nstd::mutex mut;\nstd::condition_variable conditionVar;\n\n\n\nvoid foo() {\n    cout << \"foo is waiting...\" << endl;\n\n    std::unique_lock<std::mutex> mutLock(mut);\n    conditionVar.wait(mutLock);\n\n    cout << \"foo resumed\" << endl;\n}\n\n\n\nvoid bar() {\n    std::this_thread::sleep_for(std::chrono::seconds(3));\n    conditionVar.notify_one();\n}\n\n\n\nint main() {\n    auto thFoo = std::thread(&foo);\n    auto thBar = std::thread(&bar);\n\n    thFoo.join();\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo21a02-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\nusing namespace std;\n\n\n\nstd::mutex mut;\nstd::condition_variable conditionVar;\n\nconstexpr int NUM_TH_FOO = 3;\n\n\n\nvoid foo() {\n    cout << \"foo is waiting...\" << endl;\n\n    std::unique_lock<std::mutex> mutLock(mut);\n    conditionVar.wait(mutLock);\n\n    cout << \"foo resumed\" << endl;\n}\n\n\n\nvoid bar() {\n    for (int i = 0; i < NUM_TH_FOO; ++i) {\n        std::this_thread::sleep_for(std::chrono::seconds(2));\n        conditionVar.notify_one();\n    }\n}\n\n\n\nint main() {\n    std::thread lstThFoo[NUM_TH_FOO];\n\n    for (auto&& thFoo : lstThFoo) {\n        thFoo = std::thread(&foo);\n    }\n\n    auto thBar = std::thread(&bar);\n\n    for (auto&& thFoo : lstThFoo) {\n        thFoo.join();\n    }\n\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo21a03-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\nusing namespace std;\n\n\n\nstd::mutex mut;\nstd::condition_variable conditionVar;\n\nconstexpr int NUM_TH_FOO = 3;\n\n\n\nvoid foo() {\n    cout << \"foo is waiting...\" << endl;\n\n    std::unique_lock<std::mutex> mutLock(mut);\n    conditionVar.wait(mutLock);\n\n    cout << \"foo resumed\" << endl;\n}\n\n\n\nvoid bar() {\n    std::this_thread::sleep_for(std::chrono::seconds(3));\n    // Notify all waiting threads\n    conditionVar.notify_all();\n}\n\n\n\nint main() {\n    std::thread lstThFoo[NUM_TH_FOO];\n\n    for (auto&& thFoo : lstThFoo) {\n        thFoo = std::thread(&foo);\n    }\n\n    auto thBar = std::thread(&bar);\n\n    for (auto&& thFoo : lstThFoo) {\n        thFoo.join();\n    }\n\n    thBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo21b-condition-variable.cpp",
    "content": "/*\nCONDITION VARIABLES\n*/\n\n\n#include <iostream>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\nusing namespace std;\n\n\n\nstd::mutex mut;\nstd::condition_variable conditionVar;\n\nint counter = 0;\n\nconstexpr int COUNT_HALT_01 = 3;\nconstexpr int COUNT_HALT_02 = 6;\nconstexpr int COUNT_DONE = 10;\n\n\n\n// Write numbers 1-3 and 8-10 as permitted by egg()\nvoid foo() {\n    for (;;) {\n        // Lock mutex and then wait for signal to relase mutex\n        std::unique_lock<std::mutex> lk(mut);\n\n        // Wait while egg() operates on counter,\n        // Mutex unlocked if condition variable in egg() signaled\n        conditionVar.wait(lk);\n\n        ++counter;\n        cout << \"foo counter = \" << counter << endl;\n\n        if (counter >= COUNT_DONE) {\n            return;\n        }\n    }\n}\n\n\n\n// Write numbers 4-7\nvoid egg() {\n    for (;;) {\n        std::unique_lock<std::mutex> lk(mut);\n\n        if (counter < COUNT_HALT_01 || counter > COUNT_HALT_02) {\n            // Signal to free waiting thread by freeing the mutex\n            // Note: foo() is now permitted to modify \"counter\"\n            conditionVar.notify_one();\n        }\n        else {\n            ++counter;\n            cout << \"egg counter = \" << counter << endl;\n        }\n\n        if (counter >= COUNT_DONE) {\n            return;\n        }\n    }\n}\n\n\n\nint main() {\n    auto thFoo = std::thread(&foo);\n    auto thEgg = std::thread(&egg);\n\n    thFoo.join();\n    thEgg.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo22a-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUES\nVersion A: A slow producer and a fast consumer\n\nBlocking queues in C++ std threading are not supported by default.\nSo, I use mylib::BlockingQueue for this demonstration.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <chrono>\n#include <thread>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<string>* blkQueue) {\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n    blkQueue->put(\"Alice\");\n\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n    blkQueue->put(\"likes\");\n\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n    blkQueue->put(\"singing\");\n}\n\n\n\nvoid consumer(BlockingQueue<string>* blkQueue) {\n    string data;\n\n    for (int i = 0; i < 3; ++i) {\n        cout << \"\\nWaiting for data...\" << endl;\n        data = blkQueue->take();\n        cout << \"    \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    auto blkQueue = BlockingQueue<string>();\n\n    auto thProducer = std::thread(&producer, &blkQueue);\n    auto thConsumer = std::thread(&consumer, &blkQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo22b-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUES\nVersion B: A fast producer and a slow consumer\n\nBlocking queues in C++ std threading are not supported by default.\nSo, I use mylib::BlockingQueue for this demonstration.\n*/\n\n\n#include <iostream>\n#include <string>\n#include <chrono>\n#include <thread>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<string>* blkQueue) {\n    blkQueue->put(\"Alice\");\n    blkQueue->put(\"likes\");\n\n    /*\n    Due to reaching the maximum capacity = 2, when executing blkQueue->put(\"singing\"),\n    this thread is going to sleep until the queue removes an element.\n    */\n\n    blkQueue->put(\"singing\");\n}\n\n\n\nvoid consumer(BlockingQueue<string>* blkQueue) {\n    string data;\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n\n    for (int i = 0; i < 3; ++i) {\n        cout << \"\\nWaiting for data...\" << endl;\n        data = blkQueue->take();\n        cout << \"    \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    auto blkQueue = BlockingQueue<string>(2); // blocking queue with capacity = 2\n\n    auto thProducer = std::thread(&producer, &blkQueue);\n    auto thConsumer = std::thread(&consumer, &blkQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo23a01-thread-local.cpp",
    "content": "/*\nTHREAD-LOCAL STORAGE\nIntroduction\n    The basic way to use thread-local storage\n*/\n\n\n#include <iostream>\n#include <string>\n#include <thread>\nusing namespace std;\n\n\n\nthread_local string value = \"NOT SET\";\n\n\n\nvoid doTask() {\n    cout << value << endl;\n}\n\n\n\nint main() {\n    // Main thread sets value = \"APPLE\"\n    value = \"APPLE\";\n    cout << value << endl;\n\n    // Child thread gets value\n    // Expected output: \"NOT SET\"\n    auto th = std::thread(&doTask);\n    th.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo23a02-thread-local.cpp",
    "content": "/*\nTHREAD-LOCAL STORAGE\nIntroduction\n    The smart-pointer way to use thread-local storage\n*/\n\n\n#include <iostream>\n#include <string>\n#include <memory>\n#include <thread>\nusing namespace std;\n\n\n\nthread_local std::shared_ptr<string> value;\n\n\n\nstring getValue() {\n    if (nullptr == value.get()) {\n        value.reset(new string(\"NOT SET\"));\n    }\n\n    return *value.get();\n}\n\n\n\nvoid doTask() {\n    cout << getValue() << endl;\n}\n\n\n\nint main() {\n    // Main thread sets value = \"APPLE\"\n    value.reset(new string(\"APPLE\"));\n    cout << getValue() << endl;\n\n    // Child thread gets value\n    // Expected output: \"NOT SET\"\n    auto th = std::thread(&doTask);\n    th.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo23b-thread-local.cpp",
    "content": "/*\nTHREAD-LOCAL STORAGE\nAvoiding synchronization using thread-local storage\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nthread_local int counter = 0;\n\n\n\nvoid doTask(int t) {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    for (int i = 0; i < 1000; ++i)\n        ++counter;\n\n    cout << \"Thread \" << t << \" gives counter = \" << counter << endl;\n}\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 3;\n    vector<std::thread> lstTh;\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh.push_back(std::thread(&doTask, i));\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    cout << endl;\n\n    /*\n    By using thread-local storage, each thread has its own counter.\n    So, the counter in one thread is completely independent of each other.\n\n    Thread-local storage helps us to AVOID SYNCHRONIZATION.\n    */\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo24-volatile.cpp",
    "content": "/*\nTHE VOLATILE KEYWORD\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nvolatile bool isRunning;\n\n\n\nvoid doTask() {\n    while (isRunning) {\n        cout << \"Running...\" << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(2));\n    }\n}\n\n\n\nint main() {\n    isRunning = true;\n    auto th = std::thread(&doTask);\n\n    std::this_thread::sleep_for(std::chrono::seconds(6));\n    isRunning = false;\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo25a-atomic.cpp",
    "content": "/*\nATOMIC ACCESS\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nvolatile int counter = 0;\n\n\n\nvoid doTask() {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n    counter += 1;\n}\n\n\n\nint main() {\n    counter = 0;\n\n    vector<std::thread> lstTh;\n\n    for (int i = 0; i < 1000; ++i) {\n        lstTh.push_back(std::thread(&doTask));\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    // Unpredictable result\n    cout << \"counter = \" << counter << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo25b-atomic.cpp",
    "content": "/*\nATOMIC ACCESS\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <chrono>\n#include <thread>\n#include <atomic>\nusing namespace std;\n\n\n\n// std::atomic<int> counter;\nstd::atomic_int32_t counter;\n\n\n\nvoid doTask() {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n    counter += 1;\n}\n\n\n\nint main() {\n    counter = 0;\n\n    vector<std::thread> lstTh;\n\n    for (int i = 0; i < 1000; ++i) {\n        lstTh.push_back(std::thread(&doTask));\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    // counter = 1000\n    cout << \"counter = \" << counter << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demo25c-atomic-gcc.cpp",
    "content": "/*\nATOMIC ACCESS\ngcc builtins for atomic access\n\nSome functions:\n    type __atomic_load_n (type *ptr, int memorder)\n    void __atomic_store (type *ptr, type *val, int memorder)\n\n    type __atomic_add_fetch (type *ptr, type val, int memorder)\n    type __atomic_sub_fetch (type *ptr, type val, int memorder)\n    type __atomic_fetch_add (type *ptr, type val, int memorder)\n    type __atomic_fetch_sub (type *ptr, type val, int memorder)\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <chrono>\n#include <thread>\nusing namespace std;\n\n\n\nvolatile int counter = 0;\n\n\n\nvoid doTask() {\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    __atomic_add_fetch(&counter, 1, __ATOMIC_SEQ_CST);\n\n    //__sync_add_and_fetch(&counter, 1); // Before C++11\n}\n\n\n\nint main() {\n    counter = 0;\n\n    vector<std::thread> lstTh;\n\n    for (int i = 0; i < 1000; ++i) {\n        lstTh.push_back(std::thread(&doTask));\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    // counter = 1000\n    cout << \"counter = \" << counter << endl;\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demoex-async-future.cpp",
    "content": "/*\nASYNCHRONOUS PROGRAMMING WITH THE FUTURE\n*/\n\n\n#include <iostream>\n#include <thread>\n#include <future>\n\n\nint main() {\n    // future from a packaged_task\n    std::packaged_task<int()> task([]{ return 7; });    // wrap the function\n    std::future<int> fut1 = task.get_future();          // get a future\n    std::thread th(std::move(task));                    // launch on a thread\n\n\n    // future from an async()\n    std::future<int> fut2 = std::async(std::launch::async, []{ return 8; });\n\n\n    // future from a promise\n    std::promise<int> prom;\n    std::future<int> fut3 = prom.get_future();\n    std::thread( [&prom]{ prom.set_value_at_thread_exit(9); }).detach();\n\n\n    std::cout << \"Waiting...\" << std::endl;\n    fut1.wait();\n    fut2.wait();\n    fut3.wait();\n    th.join();\n\n\n    std::cout << \"Done!\" << std::endl;\n\n    std::cout << \"Results are: \"\n              << fut1.get() << ' ' << fut2.get() << ' ' << fut3.get() << std::endl;\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/demoex-jthread.cpp",
    "content": "/*\nJTHREAD\n\nstd::jthread has the same general behavior as std::thread,\nexcept that jthread automatically rejoins on destruction,\nand can be cancelled/stopped in certain situations.\n(From https://en.cppreference.com/w/cpp/thread/jthread)\n*/\n\n\n#include <iostream>\n#include <thread>\n#include <chrono>\n\n\n\nvoid sumIntegers(int a, int b) {\n    int t = a + b;\n    std::cout << \"Sum: \" << t << std::endl;\n}\n\n\n\nvoid iterateValues(std::stop_token stopTok, int startValue, int endValue) {\n    int i = startValue;\n\n    for (; i < endValue; ++i) {\n        if (stopTok.stop_requested()) {\n            break;\n        }\n        std::this_thread::sleep_for(std::chrono::milliseconds(100));\n    }\n\n    std::cout << \"End of function, i = \" << i << std::endl;\n}\n\n\n\nint main() {\n    // DEMO 01: Use std::jthread just like normal std::thread\n    std::cout << \"DEMO 01:\" << std::endl;\n    {\n        std::jthread thSumInt(&sumIntegers, 100, -30);\n        std::this_thread::sleep_for(std::chrono::seconds(2));\n    }\n\n\n    // DEMO 02: Pass the function that takes a std::stop_token as its first argument\n    std::cout << \"\\nDEMO 02:\" << std::endl;\n    {\n        std::jthread thIter(&iterateValues, 0, 1'000'000);\n\n        std::this_thread::sleep_for(std::chrono::seconds(2));\n        thIter.request_stop(); // or thIter.get_stop_source().request_stop();\n\n        std::this_thread::sleep_for(std::chrono::seconds(2));\n        std::cout << \"End of program\" << std::endl;\n    }\n\n\n    // No need to join thSumInt and thIter, because they auto-join on destruction\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer01a-max-div.cpp",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\n\n#include <iostream>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\nint main() {\n    constexpr int RANGE_START = 1;\n    constexpr int RANGE_END = 100000;\n\n    int resValue = 0;\n    int resNumDiv = 0;  // number of divisors of result\n\n    auto tpStart = mylib::HiResClock::now();\n\n\n    for (int i = RANGE_START; i <= RANGE_END; ++i) {\n        int numDiv = 0;\n\n        for (int j = i / 2; j > 0; --j)\n            if (i % j == 0)\n                ++numDiv;\n\n        if (resNumDiv < numDiv) {\n            resNumDiv = numDiv;\n            resValue = i;\n        }\n    }\n\n\n    auto timeElapsed = mylib::HiResClock::getTimeSpan(tpStart);\n\n    cout << \"The integer which has largest number of divisors is \" << resValue << endl;\n    cout << \"The largest number of divisor is \" << resNumDiv << endl;\n    cout << \"Time elapsed = \" << timeElapsed.count() << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer01b-max-div.cpp",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <algorithm>\n#include <thread>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\nstruct WorkerArg {\n    int iStart;\n    int iEnd;\n\n    WorkerArg(int iStart = 0, int iEnd = 0): iStart(iStart), iEnd(iEnd)\n    {\n    }\n};\n\n\n\nstruct WorkerResult {\n    int value;\n    int numDiv;\n\n    WorkerResult(int value = 0, int numDiv = 0): value(value), numDiv(numDiv)\n    {\n    }\n};\n\n\n\nvoid workerFunc(WorkerArg* arg, WorkerResult* res) {\n    int resValue = 0;\n    int resNumDiv = 0;\n\n    for (int i = arg->iStart; i <= arg->iEnd; ++i) {\n        int numDiv = 0;\n\n        for (int j = i / 2; j > 0; --j)\n            if (i % j == 0)\n                ++numDiv;\n\n        if (resNumDiv < numDiv) {\n            resNumDiv = numDiv;\n            resValue = i;\n        }\n    }\n\n    (*res) = WorkerResult(resValue, resNumDiv);\n}\n\n\n\nvoid prepare(\n    int rangeStart, int rangeEnd,\n    int numThreads,\n    vector<std::thread>& lstTh,\n    vector<WorkerArg>& lstWorkerArg,\n    vector<WorkerResult>& lstWorkerRes\n) {\n    lstTh.resize(numThreads);\n    lstWorkerArg.resize(numThreads);\n    lstWorkerRes.resize(numThreads);\n\n    int rangeA, rangeB, rangeBlock;\n\n    rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\n    rangeA = rangeStart;\n\n    for (int i = 0; i < numThreads; ++i, rangeA += rangeBlock) {\n        rangeB = rangeA + rangeBlock - 1;\n\n        if (i == numThreads - 1)\n            rangeB = rangeEnd;\n\n        lstWorkerArg[i] = WorkerArg(rangeA, rangeB);\n    }\n}\n\n\n\nint main() {\n    constexpr int RANGE_START = 1;\n    constexpr int RANGE_END = 100000;\n    constexpr int NUM_THREADS = 8;\n\n    vector<std::thread> lstTh;\n    vector<WorkerArg> lstWorkerArg;\n    vector<WorkerResult> lstWorkerRes;\n\n    prepare(RANGE_START, RANGE_END, NUM_THREADS, lstTh, lstWorkerArg, lstWorkerRes);\n\n    auto tpStart = 
mylib::HiResClock::now();\n\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh[i] = std::thread(&workerFunc, &lstWorkerArg[i], &lstWorkerRes[i]);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    // for (auto&& res: lstWorkerRes) {\n    //     cout << res.value << \"  \" << res.numDiv << endl;\n    // }\n\n    auto finalRes = *max_element(lstWorkerRes.begin(), lstWorkerRes.end(),\n        [](const WorkerResult &lhs, const WorkerResult &rhs) -> bool {\n            return lhs.numDiv < rhs.numDiv;\n        }\n    );\n\n\n    auto timeElapsed = mylib::HiResClock::getTimeSpan(tpStart);\n\n    cout << \"The integer which has largest number of divisors is \" << finalRes.value << endl;\n    cout << \"The largest number of divisor is \" << finalRes.numDiv << endl;\n    cout << \"Time elapsed = \" << timeElapsed.count() << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer01c-max-div.cpp",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <algorithm>\n#include <thread>\n#include <mutex>\n#include \"mylib-time.hpp\"\nusing namespace std;\n\n\n\nstruct WorkerArg {\n    int iStart;\n    int iEnd;\n\n    WorkerArg(int iStart = 0, int iEnd = 0): iStart(iStart), iEnd(iEnd)\n    {\n    }\n};\n\n\n\nclass FinalResult {\npublic:\n    int value = 0;\n    int numDiv = 0;\n\nprivate:\n    std::mutex mut;\n\n\npublic:\n    void update(int value, int numDiv) {\n        // Synchronize whole function\n        std::unique_lock<std::mutex> lk(mut);\n\n        if (this->numDiv < numDiv) {\n            this->numDiv = numDiv;\n            this->value = value;\n        }\n    }\n};\n\n\n\nvoid workerFunc(WorkerArg* arg, FinalResult* res) {\n    int resValue = 0;\n    int resNumDiv = 0;\n\n    for (int i = arg->iStart; i <= arg->iEnd; ++i) {\n        int numDiv = 0;\n\n        for (int j = i / 2; j > 0; --j)\n            if (i % j == 0)\n                ++numDiv;\n\n        if (resNumDiv < numDiv) {\n            resNumDiv = numDiv;\n            resValue = i;\n        }\n    }\n\n    res->update(resValue, resNumDiv);\n}\n\n\n\nvoid prepare(\n    int rangeStart, int rangeEnd,\n    int numThreads,\n    vector<std::thread>& lstTh,\n    vector<WorkerArg>& lstWorkerArg\n) {\n    lstTh.resize(numThreads);\n    lstWorkerArg.resize(numThreads);\n\n    int rangeA, rangeB, rangeBlock;\n\n    rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\n    rangeA = rangeStart;\n\n    for (int i = 0; i < numThreads; ++i, rangeA += rangeBlock) {\n        rangeB = rangeA + rangeBlock - 1;\n\n        if (i == numThreads - 1)\n            rangeB = rangeEnd;\n\n        lstWorkerArg[i] = WorkerArg(rangeA, rangeB);\n    }\n}\n\n\n\nint main() {\n    constexpr int RANGE_START = 1;\n    constexpr int RANGE_END = 100000;\n    constexpr int NUM_THREADS = 8;\n\n    vector<std::thread> lstTh;\n    vector<WorkerArg> lstWorkerArg;\n\n    FinalResult 
finalRes;\n\n    prepare(RANGE_START, RANGE_END, NUM_THREADS, lstTh, lstWorkerArg);\n\n    auto tpStart = mylib::HiResClock::now();\n\n\n    for (int i = 0; i < NUM_THREADS; ++i) {\n        lstTh[i] = std::thread(&workerFunc, &lstWorkerArg[i], &finalRes);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n\n    auto timeElapsed = mylib::HiResClock::getTimeSpan(tpStart);\n\n    cout << \"The integer which has largest number of divisors is \" << finalRes.value << endl;\n    cout << \"The largest number of divisor is \" << finalRes.numDiv << endl;\n    cout << \"Time elapsed = \" << timeElapsed.count() << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer02a01-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A01: 1 slow producer, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<int>* blkq) {\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i);\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<int>* blkq) {\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    auto blkq = BlockingQueue<int>();\n\n    auto thProducer = std::thread(&producer, &blkq);\n    auto thConsumer = std::thread(&consumer, &blkq);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer02a02-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A02: 2 slow producers, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<int>* blkq) {\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i);\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<int>* blkq) {\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    auto blkq = BlockingQueue<int>();\n\n    auto thProducerA = std::thread(&producer, &blkq);\n    auto thProducerB = std::thread(&producer, &blkq);\n    auto thConsumer = std::thread(&consumer, &blkq);\n\n    thProducerA.join();\n    thProducerB.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer02a03-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A03: 1 slow producer, 2 fast consumers\n*/\n\n\n#include <iostream>\n#include <string>\n#include <chrono>\n#include <thread>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<int>* blkq) {\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i);\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n    }\n}\n\n\n\nvoid consumer(string name, BlockingQueue<int>* blkq) {\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << name << \": \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    auto blkq = BlockingQueue<int>();\n\n    auto thProducer = std::thread(&producer, &blkq);\n    auto thConsumerFoo = std::thread(&consumer, \"foo\", &blkq);\n    auto thConsumerBar = std::thread(&consumer, \"bar\", &blkq);\n\n    thProducer.join();\n    thConsumerFoo.join();\n    thConsumerBar.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer02a04-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A04: Multiple fast producers, multiple slow consumers\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\nusing namespace mylib;\n\n\n\nvoid producer(BlockingQueue<int>* blkq, int startValue) {\n    int i = 1;\n\n    for (;; ++i) {\n        blkq->put(i + startValue);\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<int>* blkq) {\n    int data = 0;\n\n    for (;;) {\n        data = blkq->take();\n        cout << \"Consumer \" << data << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n    }\n}\n\n\n\nint main() {\n    auto blkq = BlockingQueue<int>(5);\n\n\n    constexpr int NUM_PRODUCERS = 3;\n    constexpr int NUM_CONSUMERS = 2;\n\n    std::thread lstThProducer[NUM_PRODUCERS];\n    std::thread lstThConsumer[NUM_CONSUMERS];\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        lstThProducer[i] = std::thread(&producer, &blkq, i * 1000);\n    }\n\n    for (auto&& th : lstThConsumer) {\n        th = std::thread(&consumer, &blkq);\n    }\n\n\n    // JOIN THREADS\n    for (auto&& th : lstThProducer) {\n        th.join();\n    }\n\n    for (auto&& th : lstThConsumer) {\n        th.join();\n    }\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer02b01-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B01: 1 slow producer, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\nusing cntsemaphore = std::counting_semaphore<>;\n\n\n\nvoid producer(\n    cntsemaphore* semFill,\n    cntsemaphore* semEmpty,\n    queue<int>* q\n) {\n    int i = 1;\n\n    for (;; ++i) {\n        semEmpty->acquire();\n\n        q->push(i);\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n\n        semFill->release();\n    }\n}\n\n\n\nvoid consumer(\n    cntsemaphore* semFill,\n    cntsemaphore* semEmpty,\n    queue<int>* q\n) {\n    int data = 0;\n\n    for (;;) {\n        semFill->acquire();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n\n        semEmpty->release();\n    }\n}\n\n\n\nint main() {\n    cntsemaphore semFill(0);   // item produced\n    cntsemaphore semEmpty(1);  // remaining space in queue\n\n    queue<int> q;\n\n    auto thProducer = std::thread(&producer, &semFill, &semEmpty, &q);\n    auto thConsumer = std::thread(&consumer, &semFill, &semEmpty, &q);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer02b02-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B02: 2 slow producers, 1 fast consumer\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\nusing cntsemaphore = std::counting_semaphore<>;\n\n\n\nvoid producer(\n    cntsemaphore* semFill,\n    cntsemaphore* semEmpty,\n    queue<int>* q,\n    int startValue\n) {\n    int i = 1;\n\n    for (;; ++i) {\n        semEmpty->acquire();\n\n        q->push(i + startValue);\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n\n        semFill->release();\n    }\n}\n\n\n\nvoid consumer(\n    cntsemaphore* semFill,\n    cntsemaphore* semEmpty,\n    queue<int>* q\n) {\n    int data = 0;\n\n    for (;;) {\n        semFill->acquire();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n\n        semEmpty->release();\n    }\n}\n\n\n\nint main() {\n    cntsemaphore semFill(0);   // item produced\n    cntsemaphore semEmpty(1);  // remaining space in queue\n\n    queue<int> q;\n\n    auto thProducerA = std::thread(&producer, &semFill, &semEmpty, &q, 0);\n    auto thProducerB = std::thread(&producer, &semFill, &semEmpty, &q, 1000);\n    auto thConsumer = std::thread(&consumer, &semFill, &semEmpty, &q);\n\n    thProducerA.join();\n    thProducerB.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer02b03-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B03: 2 fast producers, 1 slow consumer\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\nusing cntsemaphore = std::counting_semaphore<>;\n\n\n\nvoid producer(\n    cntsemaphore* semFill,\n    cntsemaphore* semEmpty,\n    queue<int>* q,\n    int startValue\n) {\n    int i = 1;\n\n    for (;; ++i) {\n        semEmpty->acquire();\n        q->push(i + startValue);\n        semFill->release();\n    }\n}\n\n\n\nvoid consumer(\n    cntsemaphore* semFill,\n    cntsemaphore* semEmpty,\n    queue<int>* q\n) {\n    int data = 0;\n\n    for (;;) {\n        semFill->acquire();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n\n        semEmpty->release();\n    }\n}\n\n\n\nint main() {\n    cntsemaphore semFill(0);   // item produced\n    cntsemaphore semEmpty(1);  // remaining space in queue\n\n    queue<int> q;\n\n    auto thProducerA = std::thread(&producer, &semFill, &semEmpty, &q, 0);\n    auto thProducerB = std::thread(&producer, &semFill, &semEmpty, &q, 1000);\n    auto thConsumer = std::thread(&consumer, &semFill, &semEmpty, &q);\n\n    thProducerA.join();\n    thProducerB.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer02b04-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B04: Multiple fast producers, multiple slow consumers\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\nusing cntsemaphore = std::counting_semaphore<>;\n\n\n\nvoid producer(\n    cntsemaphore* semFill,\n    cntsemaphore* semEmpty,\n    queue<int>* q,\n    int startValue\n) {\n    int i = 1;\n\n    for (;; ++i) {\n        semEmpty->acquire();\n        q->push(i + startValue);\n        semFill->release();\n    }\n}\n\n\n\nvoid consumer(\n    cntsemaphore* semFill,\n    cntsemaphore* semEmpty,\n    queue<int>* q\n) {\n    int data = 0;\n\n    for (;;) {\n        semFill->acquire();\n\n        data = q->front();\n        q->pop();\n\n        cout << \"Consumer \" << data << endl;\n        std::this_thread::sleep_for(std::chrono::seconds(1));\n\n        semEmpty->release();\n    }\n}\n\n\n\nint main() {\n    cntsemaphore semFill(0);   // item produced\n    cntsemaphore semEmpty(1);  // remaining space in queue\n\n    queue<int> q;\n\n\n    constexpr int NUM_PRODUCERS = 3;\n    constexpr int NUM_CONSUMERS = 2;\n\n    std::thread lstThProducer[NUM_PRODUCERS];\n    std::thread lstThConsumer[NUM_CONSUMERS];\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        lstThProducer[i] = std::thread(&producer, &semFill, &semEmpty, &q, i * 1000);\n    }\n\n    for (auto&& th : lstThConsumer) {\n        th = std::thread(&consumer, &semFill, &semEmpty, &q);\n    }\n\n\n    // JOIN THREADS\n    for (auto&& th : lstThProducer) {\n        th.join();\n    }\n\n    for (auto&& th : lstThConsumer) {\n        th.join();\n    }\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer02c-producer-consumer.cpp",
    "content": "/*\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE C: USING CONDITION VARIABLES & MONITORS\n    Multiple fast producers, multiple slow consumers\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <chrono>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass Monitor {\nprivate:\n    std::queue<T>* q = nullptr;\n    int maxQueueSize = 0;\n\n    std::condition_variable condFull;\n    std::condition_variable condEmpty;\n    std::mutex mut;\n\n\npublic:\n    Monitor() = default;\n    Monitor(const Monitor& other) = delete;\n    Monitor(const Monitor&& other) = delete;\n    void operator=(const Monitor& other) = delete;\n    void operator=(const Monitor&& other) = delete;\n\n\n    void init(int maxQueueSize, std::queue<T>* q) {\n        this->q = q;\n        this->maxQueueSize = maxQueueSize;\n    }\n\n\n    void add(const T& item) {\n        std::unique_lock<std::mutex> mutLock(mut);\n\n        while (q->size() == maxQueueSize) {\n            condFull.wait(mutLock);\n        }\n\n        q->push(item);\n\n        if (q->size() == 1) {\n            condEmpty.notify_one();\n        }\n\n        // mutLock.unlock();\n    }\n\n\n    T remove() {\n        std::unique_lock<std::mutex> mutLock(mut);\n\n        while (q->size() == 0) {\n            condEmpty.wait(mutLock);\n        }\n\n        T item = q->front();\n        q->pop();\n\n        if (q->size() == maxQueueSize - 1) {\n            condFull.notify_one();\n        }\n\n        // mutLock.unlock();\n\n        return item;\n    }\n};\n\n\n\ntemplate <typename T>\nvoid producer(Monitor<T>* monitor, int startValue) {\n    T i = 1;\n\n    for (;; ++i) {\n        monitor->add(i + startValue);\n    }\n}\n\n\n\ntemplate <typename T>\nvoid consumer(Monitor<T>* monitor) {\n    T data;\n\n    for (;;) {\n        data = monitor->remove();\n        cout << \"Consumer \" << data << endl;\n        
std::this_thread::sleep_for(std::chrono::seconds(1));\n    }\n}\n\n\n\nint main() {\n    Monitor<int> monitor;\n    queue<int> q;\n\n    constexpr int MAX_QUEUE_SIZE = 6;\n    constexpr int NUM_PRODUCERS = 3;\n    constexpr int NUM_CONSUMERS = 2;\n\n    std::thread lstThProducer[NUM_PRODUCERS];\n    std::thread lstThConsumer[NUM_CONSUMERS];\n\n\n    // PREPARE ARGUMENTS\n    monitor.init(MAX_QUEUE_SIZE, &q);\n\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PRODUCERS; ++i) {\n        lstThProducer[i] = std::thread(&producer<int>, &monitor, i * 1000);\n    }\n\n    for (auto&& th : lstThConsumer) {\n        th = std::thread(&consumer<int>, &monitor);\n    }\n\n\n    // JOIN THREADS\n    for (auto&& th : lstThProducer) {\n        th.join();\n    }\n\n    for (auto&& th : lstThConsumer) {\n        th.join();\n    }\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer03a-readers-writers.cpp",
    "content": "/*\nTHE READERS-WRITERS PROBLEM\nSolution for the first readers-writers problem\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\n#include \"mylib-random.hpp\"\nusing namespace std;\n\n\n\nstruct GlobalData {\n    volatile int resource = 0;\n    int readerCount = 0;\n\n    std::mutex mutResource;\n    std::mutex mutReaderCount;\n};\n\n\n\nvoid doTaskWriter(GlobalData* g, int delayTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(delayTime));\n\n    g->mutResource.lock();\n\n    g->resource = mylib::RandInt::get(100);\n    cout << \"Write \" << g->resource << endl;\n\n    g->mutResource.unlock();\n}\n\n\n\nvoid doTaskReader(GlobalData* g, int delayTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(delayTime));\n\n\n    // Increase reader count\n    g->mutReaderCount.lock();\n    g->readerCount += 1;\n\n    if (1 == g->readerCount)\n        g->mutResource.lock();\n\n    g->mutReaderCount.unlock();\n\n\n    // Do the reading\n    cout << \"Read \" << g->resource << endl;\n\n\n    // Decrease reader count\n    g->mutReaderCount.lock();\n    g->readerCount -= 1;\n\n    if (0 == g->readerCount)\n        g->mutResource.unlock();\n\n    g->mutReaderCount.unlock();\n}\n\n\n\nint main() {\n    GlobalData globalData;\n\n\n    constexpr int NUM_READERS = 8;\n    constexpr int NUM_WRITERS = 6;\n\n    std::thread lstThReader[NUM_READERS];\n    std::thread lstThWriter[NUM_WRITERS];\n\n\n    // CREATE THREADS\n    for (auto&& th: lstThReader) {\n        th = std::thread(&doTaskReader, &globalData, mylib::RandInt::get(3));\n    }\n\n    for (auto&& th: lstThWriter) {\n        th = std::thread(&doTaskWriter, &globalData, mylib::RandInt::get(3));\n    }\n\n\n    // JOIN THREADS\n    for (auto&& th: lstThReader) {\n        th.join();\n    }\n\n    for (auto&& th: lstThWriter) {\n        th.join();\n    }\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer03b-readers-writers.cpp",
    "content": "/*\nTHE READERS-WRITERS PROBLEM\nSolution for the third readers-writers problem\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\n#include \"mylib-random.hpp\"\nusing namespace std;\n\n\n\nstruct GlobalData {\n    volatile int resource = 0;\n    int readerCount = 0;\n\n    std::mutex mutResource;\n    std::mutex mutReaderCount;\n\n    std::mutex mutServiceQueue;\n};\n\n\n\nvoid doTaskWriter(GlobalData* g, int delayTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(delayTime));\n\n    g->mutServiceQueue.lock();\n\n    g->mutResource.lock();\n\n    g->mutServiceQueue.unlock();\n\n    g->resource = mylib::RandInt::get(100);\n    cout << \"Write \" << g->resource << endl;\n\n    g->mutResource.unlock();\n}\n\n\n\nvoid doTaskReader(GlobalData* g, int delayTime) {\n    std::this_thread::sleep_for(std::chrono::seconds(delayTime));\n\n\n    g->mutServiceQueue.lock();\n\n\n    // Increase reader count\n    g->mutReaderCount.lock();\n    g->readerCount += 1;\n\n    if (1 == g->readerCount)\n        g->mutResource.lock();\n\n    g->mutReaderCount.unlock();\n\n\n    g->mutServiceQueue.unlock();\n\n\n    // Do the reading\n    cout << \"Read \" << g->resource << endl;\n\n\n    // Decrease reader count\n    g->mutReaderCount.lock();\n    g->readerCount -= 1;\n\n    if (0 == g->readerCount)\n        g->mutResource.unlock();\n\n    g->mutReaderCount.unlock();\n}\n\n\n\nint main() {\n    GlobalData globalData;\n\n\n    constexpr int NUM_READERS = 8;\n    constexpr int NUM_WRITERS = 6;\n\n    std::thread lstThReader[NUM_READERS];\n    std::thread lstThWriter[NUM_WRITERS];\n\n\n    // CREATE THREADS\n    for (auto&& th: lstThReader) {\n        th = std::thread(&doTaskReader, &globalData, mylib::RandInt::get(3));\n    }\n\n    for (auto&& th: lstThWriter) {\n        th = std::thread(&doTaskWriter, &globalData, mylib::RandInt::get(3));\n    }\n\n\n    // JOIN THREADS\n    for (auto&& th: lstThReader) {\n        th.join();\n 
   }\n\n    for (auto&& th: lstThWriter) {\n        th.join();\n    }\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer04-dining-philosophers.cpp",
    "content": "/*\nTHE DINING PHILOSOPHERS PROBLEM\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nvoid doTaskPhilosopher(std::mutex chopstick[], int numPhilo, int idPhilo) {\n    int n = numPhilo;\n    int i = idPhilo;\n\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    chopstick[i].lock();\n    chopstick[(i + 1) % n].lock();\n\n    cout << \"Philosopher #\" << i << \" is eating the rice\" << endl;\n\n    chopstick[(i + 1) % n].unlock();\n    chopstick[i].unlock();\n}\n\n\n\nvoid doTaskPhilosopherUsingSyncBlock(std::mutex chopstick[], int numPhilo, int idPhilo) {\n    int n = numPhilo;\n    int i = idPhilo;\n\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    {\n        std::unique_lock<std::mutex> ( chopstick[i] );\n        std::unique_lock<std::mutex> ( chopstick[(i + 1) % n] );\n        cout << \"Philosopher #\" << i << \" is eating the rice\" << endl;\n    }\n}\n\n\n\nint main() {\n    constexpr int NUM_PHILOSOPHERS = 5;\n\n    std::mutex chopstick[NUM_PHILOSOPHERS];\n    std::thread lstTh[NUM_PHILOSOPHERS];\n\n    // CREATE THREADS\n    for (int i = 0; i < NUM_PHILOSOPHERS; ++i) {\n        lstTh[i] = std::thread(&doTaskPhilosopher, chopstick, NUM_PHILOSOPHERS, i);\n    }\n\n    // JOIN THREADS\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer05-util.hpp",
    "content": "#ifndef _EXER05_UTIL_HPP_\n#define _EXER05_UTIL_HPP_\n\n\n\nvoid getScalarProduct(double const* u, double const* v, int sizeVector, double* res) {\n    double sum = 0;\n\n    for (int i = sizeVector - 1; i >= 0; --i) {\n        sum += u[i] * v[i];\n    }\n\n    (*res) = sum;\n}\n\n\n\n#endif // _EXER05_UTIL_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/exer05a-product-matrix-vector.cpp",
    "content": "/*\nMATRIX-VECTOR MULTIPLICATION\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <thread>\n#include \"exer05-util.hpp\"\nusing namespace std;\n\n\n\nusing vectord = std::vector<double>;\nusing matrix = std::vector<vectord>;\n\n\n\nvoid getProduct(const matrix& mat, const vectord& vec, vectord& result) {\n    // Assume that size of mat and vec are both eligible\n    int sizeRowMat = mat.size();\n    int sizeColMat = mat[0].size();\n    int sizeVec = vec.size();\n\n    result.clear();\n    result.resize(sizeRowMat, 0);\n\n    std::vector<std::thread> lstTh(sizeRowMat);\n\n    for (int i = 0; i < sizeRowMat; ++i) {\n        auto&& u = mat[i].data();\n        auto&& v = vec.data();\n        lstTh[i] = std::thread(&getScalarProduct, u, v, sizeVec, &result[i]);\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n}\n\n\n\nint main() {\n    matrix A = {\n        { 1, 2, 3 },\n        { 4, 5, 6 },\n        { 7, 8, 9 }\n    };\n\n    vectord b = {\n        3,\n        -1,\n        0\n    };\n\n    vectord result;\n    getProduct(A, b, result);\n\n    for (int i = 0; i < result.size(); ++i) {\n        cout << result[i] << endl;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer05b-product-matrix-matrix.cpp",
    "content": "/*\nMATRIX-MATRIX MULTIPLICATION (DOT PRODUCT)\n*/\n\n\n#include <iostream>\n#include <vector>\n#include <thread>\n#include \"exer05-util.hpp\"\nusing namespace std;\n\n\n\nusing vectord = std::vector<double>;\nusing matrix = std::vector<vectord>;\n\n\n\nvoid getTransposeMatrix(const matrix& input, matrix& output) {\n    int numRow = input.size();\n    int numCol = input[0].size();\n\n    output.clear();\n    output.assign(numCol, vectord(numRow, 0));\n\n    for (int i = 0; i < numRow; ++i)\n        for (int j = 0; j < numCol; ++j)\n            output[j][i] = input[i][j];\n}\n\n\n\nvoid displayMatrix(const matrix& mat) {\n    int numRow = mat.size();\n    int numCol = mat[0].size();\n\n    for (int i = 0; i < numRow; ++i) {\n        for (int j = 0; j < numCol; ++j)\n            cout << \"\\t\" << mat[i][j];\n\n        cout << endl;\n    }\n}\n\n\n\nvoid getProduct(const matrix& matA, const matrix& matB, matrix& result) {\n    // Assume that size of matA and matB are both eligible\n    int sizeRowA = matA.size();\n    int sizeColA = matA[0].size();\n    int sizeColB = matB[0].size();\n    int sizeTotal = sizeRowA * sizeColB;\n\n    result.clear();\n    result.assign(sizeRowA, vectord(sizeColB, 0));\n\n    matrix matBT;\n    getTransposeMatrix(matB, matBT);\n\n    vector<std::thread> lstTh(sizeTotal);\n    int iSca = 0;\n\n    for (int i = 0; i < sizeRowA; ++i) {\n        for (int j = 0; j < sizeColB; ++j) {\n            auto&& u = matA[i].data();\n            auto&& v = matBT[j].data();\n            auto&& sizeVector = sizeColA;\n\n            lstTh[iSca] = std::thread(&getScalarProduct, u, v, sizeVector, &result[i][j]);\n            ++iSca;\n        }\n    }\n\n    for (auto&& th : lstTh) {\n        th.join();\n    }\n}\n\n\n\nint main() {\n    matrix A = {\n        { 1, 3, 5 },\n        { 2, 4, 6 },\n    };\n\n    matrix B = {\n        { 1, 0, 1, 0 },\n        { 0, 1, 0, 1 },\n        { 1, 0, 0, -2 }\n    };\n\n    matrix result;\n    getProduct(A, 
B, result);\n\n    displayMatrix(result);\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer06a-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUE IMPLEMENTATION\nVersion A: Synchronous queues\n*/\n\n\n#include <iostream>\n#include <string>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\nusing cntsemaphore = std::counting_semaphore<>;\n\n\n\ntemplate <typename T>\nclass SynchronousQueue {\n\nprivate:\n    cntsemaphore semPut = cntsemaphore(1);\n    cntsemaphore semTake = cntsemaphore(0);\n    T element;\n\n\npublic:\n    void put(const T& value) {\n        semPut.acquire();\n        element = value;\n        semTake.release();\n    }\n\n\n    T take() {\n        semTake.acquire();\n        T result = element;\n        semPut.release();\n        return result;\n    }\n\n};\n\n\n\nvoid producer(SynchronousQueue<std::string>* syncQueue) {\n    auto arr = { \"lorem\", \"ipsum\", \"dolor\" };\n\n    for (auto&& data : arr) {\n        cout << \"Producer: \" << data << endl;\n        syncQueue->put(data);\n        cout << \"Producer: \" << data << \"\\t\\t\\t[done]\" << endl;\n    }\n}\n\n\n\nvoid consumer(SynchronousQueue<std::string>* syncQueue) {\n    std::string data;\n    std::this_thread::sleep_for(std::chrono::seconds(5));\n\n    for (int i = 0; i < 3; ++i) {\n        data = syncQueue->take();\n        cout << \"\\tConsumer: \" << data << endl;\n    }\n}\n\n\n\nint main() {\n    SynchronousQueue<std::string> syncQueue;\n\n    auto thProducer = std::thread(&producer, &syncQueue);\n    auto thConsumer = std::thread(&consumer, &syncQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer06b01-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUE IMPLEMENTATION\nVersion B01: General blocking queues\n             Underlying mechanism: Semaphores\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <string>\n#include <chrono>\n#include <stdexcept>\n#include <thread>\n#include <mutex>\n#include <semaphore>\nusing namespace std;\n\n\n\nusing cntsemaphore = std::counting_semaphore<>;\n\n\n\ntemplate <typename T>\nclass BlockingQueue {\n\nprivate:\n    int capacity;\n\n    cntsemaphore semRemain;\n    cntsemaphore semFill;\n    std::mutex mut;\n\n    std::queue<T> q;\n\n\npublic:\n    BlockingQueue(int capacity) : capacity(capacity), semRemain(capacity), semFill(0) {\n        if (this->capacity <= 0)\n            throw std::invalid_argument(\"capacity must be a positive integer\");\n    }\n\n\n    void put(const T& value) {\n        semRemain.acquire();\n\n        {\n            std::unique_lock<std::mutex> lk(mut);\n            q.push(value);\n        }\n\n        semFill.release();\n    }\n\n\n    T take() {\n        T result;\n        semFill.acquire();\n\n        {\n            std::unique_lock<std::mutex> lk(mut);\n            result = q.front();\n            q.pop();\n        }\n\n        semRemain.release();\n        return result;\n    }\n\n};\n\n\n\nvoid producer(BlockingQueue<std::string>* blkQueue) {\n    auto arr = { \"nice\", \"to\", \"meet\", \"you\" };\n\n    for (auto&& data : arr) {\n        cout << \"Producer: \" << data << endl;\n        blkQueue->put(data);\n        cout << \"Producer: \" << data << \"\\t\\t\\t[done]\" << endl;\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<std::string>* blkQueue) {\n    std::string data;\n    std::this_thread::sleep_for(std::chrono::seconds(5));\n\n    for (int i = 0; i < 4; ++i) {\n        data = blkQueue->take();\n        cout << \"\\tConsumer: \" << data << endl;\n\n        if (0 == i)\n            std::this_thread::sleep_for(std::chrono::seconds(5));\n    }\n}\n\n\n\nint main() {\n    BlockingQueue<std::string> 
blkQueue(2); // capacity = 2\n\n    auto thProducer = std::thread(&producer, &blkQueue);\n    auto thConsumer = std::thread(&consumer, &blkQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer06b02-blocking-queue.cpp",
    "content": "/*\nBLOCKING QUEUE IMPLEMENTATION\nVersion B02: General blocking queues\n             Underlying mechanism: Condition variables\n*/\n\n\n#include <iostream>\n#include <queue>\n#include <string>\n#include <chrono>\n#include <stdexcept>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\nusing namespace std;\n\n\n\ntemplate <typename T>\nclass BlockingQueue {\n\nprivate:\n    std::condition_variable condEmpty;\n    std::condition_variable condFull;\n    std::mutex mut;\n\n    int capacity = 0;\n    std::queue<T> q;\n\n\npublic:\n    BlockingQueue(int capacity) {\n        if (capacity <= 0)\n            throw std::invalid_argument(\"capacity must be a positive integer\");\n\n        this->capacity = capacity;\n    }\n\n\n    void put(const T& value) {\n        {\n            std::unique_lock<std::mutex> lk(mut);\n\n            while ((int)q.size() >= capacity) {\n                // Queue is full, must wait for 'take'\n                condFull.wait(lk);\n            }\n\n            q.push(value);\n        }\n\n        condEmpty.notify_one();\n    }\n\n\n    T take() {\n        T result;\n\n        {\n            std::unique_lock<std::mutex> lk(mut);\n\n            while (q.empty()) {\n                // Queue is empty, must wait for 'put'\n                condEmpty.wait(lk);\n            }\n\n            result = q.front();\n            q.pop();\n        }\n\n        condFull.notify_one();\n        return result;\n    }\n\n};\n\n\n\nvoid producer(BlockingQueue<std::string>* blkQueue) {\n    auto arr = { \"nice\", \"to\", \"meet\", \"you\" };\n\n    for (auto&& data : arr) {\n        cout << \"Producer: \" << data << endl;\n        blkQueue->put(data);\n        cout << \"Producer: \" << data << \"\\t\\t\\t[done]\" << endl;\n    }\n}\n\n\n\nvoid consumer(BlockingQueue<std::string>* blkQueue) {\n    std::string data;\n    std::this_thread::sleep_for(std::chrono::seconds(5));\n\n    for (int i = 0; i < 4; ++i) {\n        data = 
blkQueue->take();\n        cout << \"\\tConsumer: \" << data << endl;\n\n        if (0 == i)\n            std::this_thread::sleep_for(std::chrono::seconds(5));\n    }\n}\n\n\n\nint main() {\n    BlockingQueue<std::string> blkQueue(2); // capacity = 2\n\n    auto thProducer = std::thread(&producer, &blkQueue);\n    auto thConsumer = std::thread(&consumer, &blkQueue);\n\n    thProducer.join();\n    thConsumer.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer07a-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion A: Solving the problem using a condition variable\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <chrono>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\nusing namespace std;\n\n\n\n#define sleepsec(secs) \\\n    do { std::this_thread::sleep_for(std::chrono::seconds(secs)); } while (0)\n\n\n\nstruct Counter {\n    int value;\n    std::mutex mut;\n    std::condition_variable cond;\n    Counter(int value) : value(value) { }\n};\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleepsec(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid processFiles(const vector<string>& lstFileName, Counter& counter) {\n    for (auto&& fileName : lstFileName) {\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleepsec(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        {\n            std::unique_lock<std::mutex>(counter.mut);\n            --counter.value;\n            counter.cond.notify_one();\n        }\n\n        // Write log into disk\n        sleepsec(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n}\n\n\n\nvoid processRequest() {\n    const vector<string> lstFileName = { \"foo.html\", \"bar.json\" };\n    Counter counter(lstFileName.size());\n\n    // The server checks auth user while reading files, concurrently\n    std::thread th(&processFiles, std::cref(lstFileName), std::ref(counter));\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    {\n        std::unique_lock<std::mutex> lk(counter.mut);\n        while (counter.value > 0) {\n            counter.cond.wait_for(lk, std::chrono::seconds(10)); // timeout = 10 seconds\n        }\n    }\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other 
tasks...\\n\" << endl;\n\n    th.join();\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer07b-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion B: Solving the problem using a semaphore\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <chrono>\n#include <thread>\n#include <semaphore>\nusing namespace std;\n\n\n\n#define sleepsec(secs) \\\n    do { std::this_thread::sleep_for(std::chrono::seconds(secs)); } while (0)\n\nusing cntsemaphore = std::counting_semaphore<>;\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleepsec(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid processFiles(const vector<string>& lstFileName, cntsemaphore& sem) {\n    for (auto&& fileName : lstFileName) {\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleepsec(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        sem.release();\n\n        // Write log into disk\n        sleepsec(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n}\n\n\n\nvoid processRequest() {\n    const vector<string> lstFileName = { \"foo.html\", \"bar.json\" };\n    cntsemaphore sem(0);\n\n    // The server checks auth user while reading files, concurrently\n    std::thread th(&processFiles, std::cref(lstFileName), std::ref(sem));\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    for (size_t i = lstFileName.size(); i > 0; --i) {\n        sem.acquire();\n    }\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    th.join();\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer07c-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion C: Solving the problem using a count-down latch\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <chrono>\n#include <thread>\n#include <latch>\nusing namespace std;\n\n\n\n#define sleepsec(secs) \\\n    do { std::this_thread::sleep_for(std::chrono::seconds(secs)); } while (0)\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleepsec(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid processFiles(const vector<string>& lstFileName, std::latch& rdLatch) {\n    for (auto&& fileName : lstFileName) {\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleepsec(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        rdLatch.count_down();\n\n        // Write log into disk\n        sleepsec(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n}\n\n\n\nvoid processRequest() {\n    const vector<string> lstFileName = { \"foo.html\", \"bar.json\" };\n    std::latch readFileLatch(lstFileName.size());\n\n    // The server checks auth user while reading files, concurrently\n    std::thread th(&processFiles, std::cref(lstFileName), std::ref(readFileLatch));\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    readFileLatch.wait();\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    th.join();\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer07d-data-server.cpp",
    "content": "/*\nTHE DATA SERVER PROBLEM\nVersion D: Solving the problem using a blocking queue\n*/\n\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include <chrono>\n#include <thread>\n#include \"mylib-blockingqueue.hpp\"\nusing namespace std;\n\n\n\n#define sleepsec(secs) \\\n    do { std::this_thread::sleep_for(std::chrono::seconds(secs)); } while (0)\n\n\n\nvoid checkAuthUser() {\n    cout << \"[   Auth   ] Start\" << endl;\n    // Send request to authenticator, check permissions, encrypt, decrypt...\n    sleepsec(20);\n    cout << \"[   Auth   ] Done\" << endl;\n}\n\n\n\nvoid processFiles(const vector<string>& lstFileName, mylib::BlockingQueue<string>& blkq) {\n    for (auto&& fileName : lstFileName) {\n        // Read file\n        cout << \"[ ReadFile ] Start \" << fileName << endl;\n        sleepsec(10);\n        cout << \"[ ReadFile ] Done  \" << fileName << endl;\n\n        blkq.put(fileName); // You may put file data here\n\n        // Write log into disk\n        sleepsec(5);\n        cout << \"[ WriteLog ]\" << endl;\n    }\n}\n\n\n\nvoid processRequest() {\n    const vector<string> lstFileName = { \"foo.html\", \"bar.json\" };\n    mylib::BlockingQueue<string> blkq;\n\n    // The server checks auth user while reading files, concurrently\n    std::thread th(&processFiles, std::cref(lstFileName), std::ref(blkq));\n    checkAuthUser();\n\n    // The server waits for completion of loading files\n    for (size_t i = lstFileName.size(); i > 0; --i) {\n        blkq.take();\n    }\n\n    cout << \"\\nNow user is authorized and files are loaded\" << endl;\n    cout << \"Do other tasks...\\n\" << endl;\n\n    th.join();\n}\n\n\n\nint main() {\n    processRequest();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer08-exec-service-itask.hpp",
    "content": "#ifndef _MY_EXEC_SERVICE_ITASK_HPP_\n#define _MY_EXEC_SERVICE_ITASK_HPP_\n\n\n\n// interface ITask\nclass ITask {\npublic:\n    virtual ~ITask() = default;\n    virtual void run() = 0;\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_ITASK_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/exer08-exec-service-main.cpp",
    "content": "/*\nEXECUTOR SERVICE & THREAD POOL IMPLEMENTATION\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include \"exer08-exec-service-itask.hpp\"\n#include \"exer08-exec-service-v0a.hpp\"\n#include \"exer08-exec-service-v0b.hpp\"\n#include \"exer08-exec-service-v1a.hpp\"\n#include \"exer08-exec-service-v1b.hpp\"\n#include \"exer08-exec-service-v2a.hpp\"\n#include \"exer08-exec-service-v2b.hpp\"\n\n\n\nclass MyTask : public ITask {\npublic:\n    char id;\n\npublic:\n    void run() override {\n        std::cout << \"Task \" << id << \" is starting\" << std::endl;\n        std::this_thread::sleep_for(std::chrono::seconds(3));\n        std::cout << \"Task \" << id << \" is completed\" << std::endl;\n    }\n};\n\n\n\nint main() {\n    constexpr int NUM_THREADS = 2;\n    constexpr int NUM_TASKS = 5;\n\n\n    MyExecServiceV0A execService(NUM_THREADS);\n\n\n    auto lstTask = std::vector<MyTask>(NUM_TASKS);\n\n    for (int i = 0; i < NUM_TASKS; ++i)\n        lstTask[i].id = 'A' + i;\n\n\n    for (auto&& task : lstTask)\n        execService.submit(&task);\n\n    std::cout << \"All tasks are submitted\" << std::endl;\n\n\n    execService.waitTaskDone();\n    std::cout << \"All tasks are completed\" << std::endl;\n\n\n    execService.shutdown();\n\n\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/exer08-exec-service-v0a.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 0A: The easiest executor service\n- It uses a blocking queue as underlying mechanism.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V0A_HPP_\n#define _MY_EXEC_SERVICE_V0A_HPP_\n\n\n\n#include <iostream>\n#include <vector>\n#include <chrono>\n#include <thread>\n#include \"mylib-blockingqueue.hpp\"\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV0A {\n\nprivate:\n    int numThreads = 0;\n    std::vector<std::thread> lstTh;\n    mylib::BlockingQueue<ITask*> taskPending;\n\n\npublic:\n    MyExecServiceV0A(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV0A(const MyExecServiceV0A& other) = delete;\n    MyExecServiceV0A(const MyExecServiceV0A&& other) = delete;\n    void operator=(const MyExecServiceV0A& other) = delete;\n    void operator=(const MyExecServiceV0A&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n\n        for (auto&& th : lstTh) {\n            th = std::thread(&threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        taskPending.add(task);\n    }\n\n\n    void waitTaskDone() {\n        // This ExecService is too simple,\n        // so there is no implementation for waitTaskDone()\n        std::this_thread::sleep_for(std::chrono::seconds(11)); // fake behaviour\n    }\n\n\n    void shutdown() {\n        // This ExecService is too simple,\n        // so there is no implementation for shutdown()\n        std::cout << \"No implementation for shutdown().\" << std::endl;\n        std::cout << \"You need to exit the app manually.\" << std::endl;\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV0A* thisPtr) {\n        auto&& taskPending = thisPtr->taskPending;\n        ITask* task = nullptr;\n\n        for (;;) {\n            // WAIT FOR AN AVAILABLE PENDING TASK\n            task = taskPending.take();\n\n            // DO 
THE TASK\n            task->run();\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V0A_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/exer08-exec-service-v0b.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 0B: The easiest executor service\n- It uses a blocking queue as underlying mechanism.\n- It supports waitTaskDone() and shutdown().\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V0B_HPP_\n#define _MY_EXEC_SERVICE_V0B_HPP_\n\n\n\n#include <iostream>\n#include <vector>\n#include <chrono>\n#include <thread>\n#include <atomic>\n#include \"mylib-blockingqueue.hpp\"\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV0B {\n\nprivate:\n    int numThreads = 0;\n    std::vector<std::thread> lstTh;\n\n    mylib::BlockingQueue<ITask*> taskPending;\n    std::atomic_int32_t counterTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n    const class : ITask {\n        void run() override { }\n    }\n    emptyTask;\n\n\npublic:\n    MyExecServiceV0B(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV0B(const MyExecServiceV0B& other) = delete;\n    MyExecServiceV0B(const MyExecServiceV0B&& other) = delete;\n    void operator=(const MyExecServiceV0B& other) = delete;\n    void operator=(const MyExecServiceV0B&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            th = std::thread(&threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        taskPending.add(task);\n    }\n\n\n    void waitTaskDone() {\n        // This ExecService is too simple,\n        // so there is no good implementation for waitTaskDone()\n        while (false == taskPending.empty() || counterTaskRunning > 0) {\n            std::this_thread::sleep_for(std::chrono::seconds(1));\n            // std::this_thread::yield();\n        }\n    }\n\n\n    void shutdown() {\n        forceThreadShutdown = true;\n        taskPending.clear();\n\n        // Invoke blocked threads by adding \"empty\" 
tasks\n        for (int i = 0; i < numThreads; ++i) {\n            taskPending.put( (ITask* const) &emptyTask );\n        }\n\n        for (auto&& th : lstTh) {\n            th.join();\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV0B* thisPtr) {\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = nullptr;\n\n        for (;;) {\n            // WAIT FOR AN AVAILABLE PENDING TASK\n            task = taskPending.take();\n\n            // If shutdown() was called, then exit the function\n            if (forceThreadShutdown) {\n                break;\n            }\n\n            // DO THE TASK\n            ++counterTaskRunning;\n            task->run();\n            --counterTaskRunning;\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V0B_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/exer08-exec-service-v1a.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 1A: Simple executor service\n- Method \"waitTaskDone\" invokes thread sleeps in loop (which can cause performance problems).\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V1A_HPP_\n#define _MY_EXEC_SERVICE_V1A_HPP_\n\n\n\n#include <vector>\n#include <queue>\n#include <chrono>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n#include <atomic>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV1A {\n\nprivate:\n    using uniquelk = std::unique_lock<std::mutex>;\n\n\nprivate:\n    int numThreads = 0;\n    std::vector<std::thread> lstTh;\n\n    std::queue<ITask*> taskPending;\n    std::mutex mutTaskPending;\n    std::condition_variable condTaskPending;\n\n    std::atomic_int32_t counterTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV1A(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV1A(const MyExecServiceV1A& other) = delete;\n    MyExecServiceV1A(const MyExecServiceV1A&& other) = delete;\n    void operator=(const MyExecServiceV1A& other) = delete;\n    void operator=(const MyExecServiceV1A&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            th = std::thread(&threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        {\n            uniquelk lk(mutTaskPending);\n            taskPending.push(task);\n        }\n\n        condTaskPending.notify_one();\n    }\n\n\n    void waitTaskDone() {\n        bool done = false;\n\n        for (;;) {\n            {\n                uniquelk lk(mutTaskPending);\n\n                if (taskPending.empty() && 0 == counterTaskRunning) {\n                    done = true;\n                }\n            }\n\n            if (done) 
{\n                break;\n            }\n\n            std::this_thread::sleep_for(std::chrono::seconds(1));\n            // std::this_thread::yield();\n        }\n    }\n\n\n    void shutdown() {\n        {\n            uniquelk lk(mutTaskPending);\n            forceThreadShutdown = true;\n            std::queue<ITask*>().swap(taskPending);\n        }\n\n        condTaskPending.notify_all();\n\n        for (auto&& th : lstTh) {\n            th.join();\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV1A* thisPtr) {\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = nullptr;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                uniquelk lkPending(mutTaskPending);\n\n                while (taskPending.empty() && false == forceThreadShutdown) {\n                    condTaskPending.wait(lkPending);\n                }\n\n                if (forceThreadShutdown) {\n                    // lkPending.unlock(); // remember this statement\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                ++counterTaskRunning;\n            }\n\n            // DO THE TASK\n            task->run();\n\n            --counterTaskRunning;\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V1A_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/exer08-exec-service-v1b.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 1B: Simple executor service\n- Method \"waitTaskDone\" uses a condition variable to synchronize.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V1B_HPP_\n#define _MY_EXEC_SERVICE_V1B_HPP_\n\n\n\n#include <vector>\n#include <queue>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV1B {\n\nprivate:\n    using uniquelk = std::unique_lock<std::mutex>;\n\n\nprivate:\n    int numThreads = 0;\n    std::vector<std::thread> lstTh;\n\n    std::queue<ITask*> taskPending;\n    std::mutex mutTaskPending;\n    std::condition_variable condTaskPending;\n\n    int counterTaskRunning;\n    std::mutex mutTaskRunning;\n    std::condition_variable condTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV1B(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV1B(const MyExecServiceV1B& other) = delete;\n    MyExecServiceV1B(const MyExecServiceV1B&& other) = delete;\n    void operator=(const MyExecServiceV1B& other) = delete;\n    void operator=(const MyExecServiceV1B&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            th = std::thread(&threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        {\n            uniquelk lk(mutTaskPending);\n            taskPending.push(task);\n        }\n\n        condTaskPending.notify_one();\n    }\n\n\n    void waitTaskDone() {\n        for (;;) {\n            uniquelk lkPending(mutTaskPending);\n\n            if (taskPending.empty()) {\n                uniquelk lkRunning(mutTaskRunning);\n\n                while (counterTaskRunning > 0)\n                    condTaskRunning.wait(lkRunning);\n\n          
      // no pending task and no running task\n                break;\n            }\n        }\n    }\n\n\n    void shutdown() {\n        {\n            uniquelk lk(mutTaskPending);\n            forceThreadShutdown = true;\n            std::queue<ITask*>().swap(taskPending);\n        }\n\n        condTaskPending.notify_all();\n\n        for (auto&& th : lstTh) {\n            th.join();\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV1B* thisPtr) {\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n        auto&& mutTaskRunning = thisPtr->mutTaskRunning;\n        auto&& condTaskRunning = thisPtr->condTaskRunning;\n\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = nullptr;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                uniquelk lkPending(mutTaskPending);\n\n                while (taskPending.empty() && false == forceThreadShutdown) {\n                    condTaskPending.wait(lkPending);\n                }\n\n                if (forceThreadShutdown) {\n                    // lkPending.unlock(); // remember this statement\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                // Guard the counter with mutTaskRunning: it is read and written\n                // under that mutex elsewhere, so this avoids a data race\n                {\n                    uniquelk lkRunning(mutTaskRunning);\n                    ++counterTaskRunning;\n                }\n            }\n\n            // DO THE TASK\n            task->run();\n\n            {\n                uniquelk lkRunning(mutTaskRunning);\n                --counterTaskRunning;\n\n                if (0 == counterTaskRunning) {\n                    condTaskRunning.notify_one();\n                }\n            }\n        }\n    }\n\n};\n\n\n\n#endif // 
_MY_EXEC_SERVICE_V1B_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/exer08-exec-service-v2a.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 2A: The executor service storing running tasks\n- Method \"waitTaskDone\" uses a semaphore to synchronize.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V2A_HPP_\n#define _MY_EXEC_SERVICE_V2A_HPP_\n\n\n\n#include <vector>\n#include <list>\n#include <queue>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n#include <semaphore>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV2A {\n\nprivate:\n    using cntsemaphore = std::counting_semaphore<>;\n    using uniquelk = std::unique_lock<std::mutex>;\n\n\nprivate:\n    int numThreads = 0;\n    std::vector<std::thread> lstTh;\n\n    std::queue<ITask*> taskPending;\n    std::mutex mutTaskPending;\n    std::condition_variable condTaskPending;\n\n    std::list<ITask*> taskRunning;\n    std::mutex mutTaskRunning;\n    cntsemaphore counterTaskRunning = cntsemaphore(0);\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV2A(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV2A(const MyExecServiceV2A& other) = delete;\n    MyExecServiceV2A(const MyExecServiceV2A&& other) = delete;\n    void operator=(const MyExecServiceV2A& other) = delete;\n    void operator=(const MyExecServiceV2A&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            th = std::thread(&threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        {\n            uniquelk lk(mutTaskPending);\n            taskPending.push(task);\n        }\n\n        condTaskPending.notify_one();\n    }\n\n\n    void waitTaskDone() {\n        for (;;) {\n            counterTaskRunning.acquire();\n\n            {\n                uniquelk lkPending(mutTaskPending);\n                uniquelk lkRunning(mutTaskRunning);\n\n      
          if (taskPending.empty() && taskRunning.empty()) {\n                    break;\n                }\n            }\n        }\n    }\n\n\n    void shutdown() {\n        {\n            uniquelk lk(mutTaskPending);\n            forceThreadShutdown = true;\n            std::queue<ITask*>().swap(taskPending);\n        }\n\n        condTaskPending.notify_all();\n\n        for (auto&& th : lstTh) {\n            th.join();\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV2A* thisPtr) {\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& taskRunning = thisPtr->taskRunning;\n        auto&& mutTaskRunning = thisPtr->mutTaskRunning;\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = nullptr;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                uniquelk lkPending(mutTaskPending);\n\n                while (taskPending.empty() && false == forceThreadShutdown) {\n                    condTaskPending.wait(lkPending);\n                }\n\n                if (forceThreadShutdown) {\n                    // lkPending.unlock(); // remember this statement\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                // PUSH IT TO THE RUNNING QUEUE\n                {\n                    uniquelk lkRunning(mutTaskRunning);\n                    taskRunning.push_back(task);\n                }\n            }\n\n            // DO THE TASK\n            task->run();\n\n            // REMOVE IT FROM THE RUNNING QUEUE\n            {\n                uniquelk 
lkRunning(mutTaskRunning);\n                taskRunning.remove(task);\n                counterTaskRunning.release();\n            }\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V2A_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/exer08-exec-service-v2b.hpp",
    "content": "/*\nMY EXECUTOR SERVICE\n\nVersion 2B: The executor service storing running tasks\n- Method \"waitTaskDone\" uses a condition variable to synchronize.\n*/\n\n\n\n#ifndef _MY_EXEC_SERVICE_V2B_HPP_\n#define _MY_EXEC_SERVICE_V2B_HPP_\n\n\n\n#include <vector>\n#include <list>\n#include <queue>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n#include \"exer08-exec-service-itask.hpp\"\n\n\n\nclass MyExecServiceV2B {\n\nprivate:\n    using uniquelk = std::unique_lock<std::mutex>;\n\n\nprivate:\n    int numThreads = 0;\n    std::vector<std::thread> lstTh;\n\n    std::queue<ITask*> taskPending;\n    std::mutex mutTaskPending;\n    std::condition_variable condTaskPending;\n\n    std::list<ITask*> taskRunning;\n    std::mutex mutTaskRunning;\n    std::condition_variable condTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    MyExecServiceV2B(int numThreads) {\n        init(numThreads);\n    }\n\n\n    MyExecServiceV2B(const MyExecServiceV2B& other) = delete;\n    MyExecServiceV2B(const MyExecServiceV2B&& other) = delete;\n    void operator=(const MyExecServiceV2B& other) = delete;\n    void operator=(const MyExecServiceV2B&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            th = std::thread(&threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(ITask* task) {\n        {\n            uniquelk lk(mutTaskPending);\n            taskPending.push(task);\n        }\n\n        condTaskPending.notify_one();\n    }\n\n\n    void waitTaskDone() {\n        for (;;) {\n            uniquelk lkPending(mutTaskPending);\n\n            if (taskPending.empty()) {\n                uniquelk lkRunning(mutTaskRunning);\n\n                while (false == taskRunning.empty())\n                    
condTaskRunning.wait(lkRunning);\n\n                // no pending task and no running task\n                break;\n            }\n        }\n    }\n\n\n    void shutdown() {\n        {\n            uniquelk lk(mutTaskPending);\n            forceThreadShutdown = true;\n            std::queue<ITask*>().swap(taskPending);\n        }\n\n        condTaskPending.notify_all();\n\n        for (auto&& th : lstTh) {\n            th.join();\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n    }\n\n\nprivate:\n    static void threadWorkerFunc(MyExecServiceV2B* thisPtr) {\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& taskRunning = thisPtr->taskRunning;\n        auto&& mutTaskRunning = thisPtr->mutTaskRunning;\n        auto&& condTaskRunning = thisPtr->condTaskRunning;\n\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        ITask* task = nullptr;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                uniquelk lkPending(mutTaskPending);\n\n                while (taskPending.empty() && false == forceThreadShutdown) {\n                    condTaskPending.wait(lkPending);\n                }\n\n                if (forceThreadShutdown) {\n                    // lkPending.unlock(); // remember this statement\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                // PUSH IT TO THE RUNNING QUEUE\n                {\n                    uniquelk lkRunning(mutTaskRunning);\n                    taskRunning.push_back(task);\n                }\n            }\n\n            // DO THE TASK\n            task->run();\n\n            // REMOVE IT FROM THE RUNNING QUEUE\n            {\n                uniquelk 
lkRunning(mutTaskRunning);\n                taskRunning.remove(task);\n                condTaskRunning.notify_one();\n            }\n        }\n    }\n\n};\n\n\n\n#endif // _MY_EXEC_SERVICE_V2B_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/exerex-countdown-timer.cpp",
    "content": "/*\nCOUNTDOWN TIMER\n*/\n\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\nusing namespace std;\n\n\n\nvoid doUserInput(char* buffer, std::condition_variable* cv) {\n    cin.getline(buffer, 1024);\n    cv->notify_one();\n}\n\n\n\n/*\nReturn true if no timeout. Otherwise, return false.\n*/\nbool waitForTime(const int waitTime, std::condition_variable& cv, std::mutex& mut) {\n    std::unique_lock<std::mutex> lk(mut);\n    std::cv_status status = cv.wait_for(lk, std::chrono::seconds(waitTime));\n\n    if (std::cv_status::no_timeout == status)\n        return true;\n    else\n        return false;\n}\n\n\n\nint main() {\n    std::condition_variable cv;\n    std::mutex mut;\n\n    constexpr int SECONDS = 5;\n\n    char buffer[1024] = { 0 };\n\n\n    cout << \"You have \" << SECONDS << \" seconds to write anything you like in one line.\" << endl;\n    cout << \"Press enter to start.\" << endl;\n    cin.getline(buffer, 1024);\n    cout << \"START!!!\" << endl << endl;\n\n\n    auto th = std::thread(&doUserInput, buffer, &cv);\n\n\n    if (waitForTime(SECONDS, cv, mut)) {\n        cout << \"\\nYou completed before the deadline.\" << endl;\n    }\n    else {\n        cout << \"\\n\\nTIMEOUT!!!\" << endl;\n    }\n\n\n    th.join();\n    return 0;\n}\n"
  },
  {
    "path": "cpp/cpp-std/mylib-blockingqueue.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-blockingqueue.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The blocking queue implementation in C++11 std threading\n*\n******************************************************/\n\n\n\n#ifndef _MYLIB_BLOCKING_QUEUE_HPP_\n#define _MYLIB_BLOCKING_QUEUE_HPP_\n\n\n\n#include <limits>\n#include <queue>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n\n\n\nnamespace mylib\n{\n\n\n\ntemplate <typename T>\nclass BlockingQueue {\n\nprivate:\n    using uniquelk = std::unique_lock<std::mutex>;\n\n\nprivate:\n    std::condition_variable condEmpty;\n    std::condition_variable condFull;\n    std::mutex mut;\n\n    size_t capacity;\n    std::queue<T> q;\n\n\npublic:\n    BlockingQueue() : capacity(std::numeric_limits<size_t>::max()) {\n    }\n\n\n    BlockingQueue(size_t capacity) : capacity(capacity) {\n    }\n\n\n    BlockingQueue(const BlockingQueue& other) = delete;\n    BlockingQueue(const BlockingQueue&& other) = delete;\n    void operator=(const BlockingQueue& other) = delete;\n    void operator=(const BlockingQueue&& other) = delete;\n\n\n    bool empty() const {\n        return q.empty();\n    }\n\n\n    size_t size() const {\n        return q.size();\n    }\n\n\n    // sync enqueue\n    void put(const T& value) {\n        uniquelk lk(mut);\n        condFull.wait(lk, [&] { return q.size() < capacity; });\n        q.push(value);\n        condEmpty.notify_one();\n    }\n\n\n    // sync dequeue\n    T take() {\n        uniquelk lk(mut);\n        condEmpty.wait(lk, [&] { return !q.empty(); });\n        T result = q.front();\n        q.pop();\n        condFull.notify_one();\n        return result;\n    }\n\n\n    // async enqueue\n    void add(const T& value) {\n        // Note: For asynchronous operations, we should use a long-live background 
thread\n        // instead of using a temporary thread\n        std::thread(&BlockingQueue<T>::put, this, value).detach();\n    }\n\n\n    // returns false if queue is empty, otherwise returns true and assigns the result\n    // (not declared const: locking the mutex modifies it)\n    bool peek(T& result) {\n        uniquelk lk(mut);\n        if (q.empty()) {\n            return false;\n        }\n\n        result = q.front();\n        return true;\n    }\n\n\n    void clear() {\n        uniquelk lk(mut);\n        std::queue<T>().swap(q);\n    }\n\n}; // BlockingQueue\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_BLOCKING_QUEUE_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/mylib-execservice.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-execservice.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The executor service implementation in C++11 std threading\n*\n******************************************************/\n\n\n\n/*\nCopy code from \"MyExecServiceV1B\"\n*/\n\n\n#ifndef _MYLIB_EXEC_SERVICE_HPP_\n#define _MYLIB_EXEC_SERVICE_HPP_\n\n\n\n#include <vector>\n#include <queue>\n#include <functional>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n\n\n\nnamespace mylib {\n\n\n\nclass ExecService {\n\nprivate:\n    using uniquelk = std::unique_lock<std::mutex>;\n\n\npublic:\n    using taskFunc = std::function<void()>;\n\n\nprivate:\n    int numThreads = 0;\n    std::vector<std::thread> lstTh;\n\n    std::queue<taskFunc> taskPending;\n    std::mutex mutTaskPending;\n    std::condition_variable condTaskPending;\n\n    int counterTaskRunning;\n    std::mutex mutTaskRunning;\n    std::condition_variable condTaskRunning;\n\n    volatile bool forceThreadShutdown;\n\n\npublic:\n    ExecService(int numThreads) {\n        init(numThreads);\n    }\n\n\n    ExecService(const ExecService& other) = delete;\n    ExecService(const ExecService&& other) = delete;\n    void operator=(const ExecService& other) = delete;\n    void operator=(const ExecService&& other) = delete;\n\n\nprivate:\n    void init(int numThreads) {\n        // shutdown();\n\n        this->numThreads = numThreads;\n        lstTh.resize(numThreads);\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        for (auto&& th : lstTh) {\n            th = std::thread(&threadWorkerFunc, this);\n        }\n    }\n\n\npublic:\n    void submit(taskFunc task) {\n        {\n            uniquelk lk(mutTaskPending);\n            taskPending.push(task);\n        }\n\n        condTaskPending.notify_one();\n    
}\n\n\n    void waitTaskDone() {\n        for (;;) {\n            uniquelk lkPending(mutTaskPending);\n\n            if (taskPending.empty()) {\n                uniquelk lkRunning(mutTaskRunning);\n\n                condTaskRunning.wait(lkRunning, [&] { return counterTaskRunning <= 0; });\n\n                // no pending task and no running task\n                break;\n            }\n        }\n    }\n\n\n    void shutdown() {\n        {\n            uniquelk lk(mutTaskPending);\n            forceThreadShutdown = true;\n            std::queue<taskFunc>().swap(taskPending);\n        }\n\n        condTaskPending.notify_all();\n\n        for (auto&& th : lstTh) {\n            th.join();\n        }\n\n        numThreads = 0;\n        lstTh.clear();\n    }\n\n\nprivate:\n    static void threadWorkerFunc(ExecService* thisPtr) {\n        auto&& taskPending = thisPtr->taskPending;\n        auto&& mutTaskPending = thisPtr->mutTaskPending;\n        auto&& condTaskPending = thisPtr->condTaskPending;\n\n        auto&& counterTaskRunning = thisPtr->counterTaskRunning;\n        auto&& mutTaskRunning = thisPtr->mutTaskRunning;\n        auto&& condTaskRunning = thisPtr->condTaskRunning;\n\n        auto&& forceThreadShutdown = thisPtr->forceThreadShutdown;\n\n        taskFunc task = nullptr;\n\n\n        for (;;) {\n            {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                uniquelk lkPending(mutTaskPending);\n\n                condTaskPending.wait(lkPending, [&] {\n                    return forceThreadShutdown || !taskPending.empty();\n                });\n\n                if (forceThreadShutdown) {\n                    break;\n                }\n\n                // GET THE TASK FROM THE PENDING QUEUE\n                task = taskPending.front();\n                taskPending.pop();\n\n                // Guard the counter with mutTaskRunning: it is read and written\n                // under that mutex elsewhere, so this avoids a data race\n                {\n                    uniquelk lkRunning(mutTaskRunning);\n                    ++counterTaskRunning;\n                }\n            }\n\n            // DO THE TASK\n            task();\n\n            {\n
                uniquelk lkRunning(mutTaskRunning);\n                --counterTaskRunning;\n\n                if (0 == counterTaskRunning) {\n                    condTaskRunning.notify_one();\n                }\n            }\n        }\n    }\n\n}; // ExecService\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_EXEC_SERVICE_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/mylib-random.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-random.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The random utility in C++11 std\n*\n******************************************************/\n\n\n\n#ifndef _MYLIB_RANDOM_HPP_\n#define _MYLIB_RANDOM_HPP_\n\n\n\n#include <limits>\n#include <random>\n\n\n\nnamespace mylib {\n\n\n\nclass RandInt {\n\nprivate:\n    std::random_device rd;\n    std::mt19937 mt;\n    std::uniform_int_distribution<int> dist;\n\n\npublic:\n    RandInt() {\n        init(0, std::numeric_limits<int>::max());\n    }\n\n\n    RandInt(int minValue, int maxValueInclusive) {\n        init(minValue, maxValueInclusive);\n    }\n\n\n    void init(int minValue, int maxValueInclusive) {\n        dist = std::uniform_int_distribution<int>(minValue, maxValueInclusive);\n        mt.seed(rd());\n    }\n\n\n    int next() {\n        return dist(mt);\n    }\n\n\n    RandInt(const RandInt& other) = default;\n    RandInt(RandInt&& other) = default;\n    RandInt& operator=(const RandInt& other) = default;\n    RandInt& operator=(RandInt&& other) = default;\n\n\n// STATIC\nprivate:\n    static RandInt publicRandInt;\n\npublic:\n    static int get(int maxExclusive) {\n        return publicRandInt.next() % maxExclusive;\n    }\n\n}; // RandInt\n\n\n\nRandInt RandInt::publicRandInt;\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_RANDOM_HPP_\n"
  },
  {
    "path": "cpp/cpp-std/mylib-time.hpp",
    "content": "/******************************************************\n*\n* File name:    mylib-time.hpp\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The time utility in C++11 std\n*\n******************************************************/\n\n\n\n#ifndef _MYLIB_TIME_HPP_\n#define _MYLIB_TIME_HPP_\n\n\n\n#include <ctime>\n#include <chrono>\n\n\n\nnamespace mylib {\n\n\n\nnamespace chro = std::chrono;\nusing sysclock = chro::system_clock;\n\n\n\nclass HiResClock {\n\nprivate:\n    using stdhrc = chro::high_resolution_clock;\n\n\npublic:\n    static inline stdhrc::time_point now()\n    {\n        return stdhrc::now();\n    }\n\n\n    template< typename duType=chro::duration<double> >\n    static inline\n    duType\n    getTimeSpan(\n        const stdhrc::time_point& tp1,\n        const stdhrc::time_point& tp2)\n    {\n        auto res = chro::duration_cast<duType>(tp2 - tp1);\n        return res;\n    }\n\n\n    template< typename duType=chro::duration<double> >\n    static inline\n    duType\n    getTimeSpan(const stdhrc::time_point& tpBefore)\n    {\n        auto tpCurrent = HiResClock::now();\n        auto res = HiResClock::getTimeSpan<duType>(tpBefore, tpCurrent);\n        return res;\n    }\n\n}; // HiResClock\n\n\n\nchar* getTimePointStr(const sysclock::time_point& tp) {\n    std::time_t timeStamp = sysclock::to_time_t(tp);\n    return std::ctime(&timeStamp);\n}\n\n\n\ntemplate<class clock = sysclock>\nclass clock::time_point getTimePoint(\n    int year, int month, int day,\n    int hour, int minute, int second)\n{\n    std::tm t{};\n    t.tm_year = year - 1900;\n    t.tm_mon = month - 1;\n    t.tm_mday = day;\n    t.tm_hour = hour;\n    t.tm_min = minute;\n    t.tm_sec = second;\n    return clock::from_time_t(std::mktime(&t));\n}\n\n\n\n// tp += numSeconds * 2;\n// tp -= (x % numSeconds)\ntemplate<class clock = 
sysclock>\nchro::time_point<clock>\ngetTimePointFutureFloor(const chro::time_point<clock>& tp, int numSeconds) {\n    // auto tpFuture = tp + chro::seconds(2 * numSeconds);\n    // auto durationFuture = tpFuture.time_since_epoch();\n\n    // durationFuture = durationFuture - (durationFuture % numSeconds);\n\n    // tpFuture = chro::time_point<clock>(durationFuture);\n    // return tpFuture;\n\n    auto duSeconds = chro::seconds(numSeconds);\n\n    auto durationFromTp = chro::time_point_cast<chro::seconds>(tp).time_since_epoch();\n\n    auto durationFuture = durationFromTp + (duSeconds * 2);\n    durationFuture = durationFuture - (durationFuture % duSeconds);\n\n    auto tpFuture = chro::time_point<clock>(durationFuture);\n    return tpFuture;\n}\n\n\n\n} // namespace mylib\n\n\n\n#endif // _MYLIB_TIME_HPP_\n"
  },
  {
    "path": "csharp/.gitignore",
    "content": "bin/\nobj/\n.vs/\n"
  },
  {
    "path": "csharp/IRunnable.cs",
    "content": "interface IRunnable\r\n{\r\n    public abstract void run();\r\n}\r\n"
  },
  {
    "path": "csharp/Program.cs",
    "content": "﻿class Program\r\n{\r\n    static void Main(string[] args)\r\n    {\r\n        new Demo00().run();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo00-intro.cs",
    "content": "﻿/*\r\n * INTRODUCTION TO MULTITHREADING\r\n * You should try running this app several times and see results.\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo00 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Thread th = new Thread(doTask);\r\n\r\n        th.Start();\r\n\r\n        for (int i = 0; i < 300; ++i)\r\n            Console.Write(\"A\");\r\n    }\r\n\r\n    private void doTask()\r\n    {\r\n        for (int i = 0; i < 300; ++i)\r\n            Console.Write(\"B\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo01a-hello.cs",
    "content": "/*\r\n * HELLO WORLD VERSION MULTITHREADING\r\n * Version A: Using functions\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo01A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Thread th = new Thread(doTask);\r\n        // or\r\n        // Thread th = new Thread(new ThreadStart(doTask));\r\n\r\n        th.Start();\r\n        Console.WriteLine(\"Hello from main thread\");\r\n    }\r\n\r\n    private void doTask()\r\n    {\r\n        Console.WriteLine(\"Hello from example thread\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo01b01-hello.cs",
    "content": "/*\r\n * HELLO WORLD VERSION MULTITHREADING\r\n * Version B01: Using lambdas with ThreadStart\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo01B01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        ThreadStart doTask = new ThreadStart(() =>\r\n        {\r\n            Console.WriteLine(\"Hello from example thread\");\r\n        });\r\n\r\n        Thread th1 = new Thread(doTask);\r\n        Thread th2 = new Thread(doTask);\r\n\r\n        th1.Start();\r\n        th2.Start();\r\n\r\n        Console.WriteLine(\"Hello from main thread\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo01b02-hello.cs",
    "content": "﻿/*\r\n * HELLO WORLD VERSION MULTITHREADING\r\n * Version B02: Using lambdas\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo01B02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Thread th = new Thread(() => Console.WriteLine(\"Hello from example thread\"));\r\n\r\n        th.Start();\r\n\r\n        Console.WriteLine(\"Hello from main thread\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo01ex-name.cs",
    "content": "﻿/*\r\n * HELLO WORLD VERSION MULTITHREADING\r\n * Version extra: Getting thread's name\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo01ExtraName : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Thread thFoo = new Thread(doTask)\r\n        {\r\n            Name = \"foo\"\r\n        };\r\n\r\n        Thread thBar = new Thread(doTask);\r\n        thBar.Name = \"bar\";\r\n\r\n        thFoo.Start();\r\n        thBar.Start();\r\n    }\r\n\r\n    private void doTask()\r\n    {\r\n        Console.WriteLine($\"My name is {Thread.CurrentThread.Name}\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo02a-join.cs",
    "content": "﻿/*\r\n * THREAD JOINS\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo02A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Thread th = new Thread(doHeavyTask);\r\n\r\n        th.Start();\r\n        th.Join();\r\n\r\n        Console.WriteLine(\"Good bye!\");\r\n    }\r\n\r\n    private void doHeavyTask() {\r\n        // Do a heavy task, which takes a little time\r\n        for (int i = 0; i < 2000000000; ++i);\r\n\r\n        Console.WriteLine(\"Done!\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo02b-join.cs",
    "content": "﻿/*\r\n * THREAD JOINS\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo02B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Thread thFoo = new Thread(() => Console.WriteLine(\"foo\"));\r\n        Thread thBar = new Thread(() => Console.WriteLine(\"bar\"));\r\n\r\n        thFoo.Start();\r\n        thBar.Start();\r\n\r\n        // thFoo.Join();\r\n        // thBar.Join();\r\n\r\n        /*\r\n         * We do not need to call thFoo.Join() and thBar.Join().\r\n         * The reason is main thread will wait for the completion of all threads before app exits.\r\n         */\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo03a-pass-arg.cs",
    "content": "﻿/*\r\n * PASSING ARGUMENTS\r\n * Version A: Using lambdas\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo03C : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Thread thFoo = new Thread(() => doTask(1, 2, \"red\"));\r\n        Thread thBar = new Thread(() => doTask(3, 4, \"blue\"));\r\n\r\n        thFoo.Start();\r\n        thBar.Start();\r\n    }\r\n\r\n\r\n    private void doTask(int a, double b, string c)\r\n    {\r\n        Console.WriteLine($\"{a} {b} {c}\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo03b-pass-arg.cs",
    "content": "﻿/*\r\n * PASSING ARGUMENTS\r\n * Version B: Traditional way with a single argument of type object\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo03A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Thread thFoo = new Thread(doTask);\r\n        Thread thBar = new Thread(doTask);\r\n\r\n        // or\r\n        // Thread thFoo = new Thread(new ParameterizedThreadStart(doTask));\r\n        // Thread thBar = new Thread(new ParameterizedThreadStart(doTask));\r\n\r\n        thFoo.Start(new object[] { 1, 2.0, \"red\" });\r\n        thBar.Start(new object[] { 3, 4.0, \"blue\" });\r\n    }\r\n\r\n    private void doTask(object arg)\r\n    {\r\n        object[] array = (object[]) arg;\r\n\r\n        int a = (int) array[0];\r\n        double b = (double) array[1];\r\n        string c = (string) array[2];\r\n\r\n        Console.WriteLine($\"{a} {b} {c}\\n\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo03c-pass-arg.cs",
    "content": "﻿/*\r\n * PASSING ARGUMENTS\r\n * Version C: Traditional way + dynamic data type\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo03B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Thread thFoo = new Thread(doTask);\r\n        Thread thBar = new Thread(doTask);\r\n\r\n        thFoo.Start(new object[] { 1, 2, \"red\" });\r\n        thBar.Start(new object[] { 3, 4, \"blue\" });\r\n    }\r\n\r\n    private void doTask(dynamic arg)\r\n    {\r\n        int a = arg[0];\r\n        double b = arg[1];\r\n        string c = arg[2];\r\n        Console.WriteLine($\"{a} {b} {c}\\n\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo03d-pass-arg.cs",
    "content": "﻿/*\r\n * PASSING ARGUMENTS\r\n * Version D: Passing arguments by capturing them\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo03D : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int COUNT = 10;\r\n\r\n        new Thread(() =>\r\n        {\r\n\r\n            for (int i = 1; i <= COUNT; ++i)\r\n                Console.WriteLine(\"Foo \" + i);\r\n\r\n        }).Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo04-sleep.cs",
    "content": "﻿/*\r\n * SLEEP\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo04 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var thFoo = new Thread(() =>\r\n        {\r\n            Console.WriteLine(\"foo is sleeping\");\r\n            Thread.Sleep(3000);\r\n            Console.WriteLine(\"foo wakes up\");\r\n        });\r\n\r\n        thFoo.Start();\r\n        thFoo.Join();\r\n\r\n        Console.WriteLine(\"Good bye\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo05-id.cs",
    "content": "﻿/*\r\n * GETTING THREAD'S ID\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo05 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        ThreadStart doTask = () =>\r\n        {\r\n            int id = Thread.CurrentThread.ManagedThreadId;\r\n            Console.WriteLine(id);\r\n        };\r\n\r\n        Thread thFoo = new Thread(doTask);\r\n        Thread thBar = new Thread(doTask);\r\n\r\n        Console.WriteLine(\"foo's id: \" + thFoo.ManagedThreadId);\r\n        Console.WriteLine(\"bar's id: \" + thBar.ManagedThreadId);\r\n\r\n        thFoo.Start();\r\n        thBar.Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo06a-list-threads.cs",
    "content": "﻿/*\r\n * LIST OF MULTIPLE THREADS\r\n * Version A: Using List\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo06A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int NUM_THREADS = 5;\r\n        var lstTh = new List<Thread>();\r\n\r\n\r\n        for (int i = 0; i < NUM_THREADS; ++i)\r\n        {\r\n            /*\r\n             * Due to the reference mechanism,\r\n             * if you remove the line \"int ith = i\", the result will be wrong.\r\n             */\r\n            int ith = i;\r\n\r\n            lstTh.Add(new Thread(() =>\r\n            {\r\n                Thread.Sleep(500);\r\n                Console.WriteLine(ith);\r\n            }));\r\n        }\r\n\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        // foreach (var th in lstTh)\r\n        // {\r\n        //     th.Start();\r\n        // }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo06b-list-threads.cs",
    "content": "﻿/*\r\n * LIST OF MULTIPLE THREADS\r\n * Version B: Using Linq\r\n */\r\nusing System;\r\nusing System.Linq;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo06B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var lstTh = Enumerable.Range(0, 5).Select(index => new Thread(() =>\r\n        {\r\n            Thread.Sleep(500);\r\n            Console.WriteLine(index);\r\n        })).ToList();\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo07-terminate.cs",
    "content": "﻿/*\r\n * FORCING A THREAD TO TERMINATE (i.e. killing the thread)\r\n * Using a flag to notify the thread\r\n *\r\n * Note: The \"volatile\" keyword is explained in another demo.\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo07 : IRunnable\r\n{\r\n    private volatile bool flagStop;\r\n\r\n    public void run()\r\n    {\r\n        flagStop = false;\r\n\r\n        var th = new Thread(() =>\r\n        {\r\n\r\n            while (true)\r\n            {\r\n                if (flagStop)\r\n                    break;\r\n\r\n                Console.WriteLine(\"Running...\");\r\n\r\n                Thread.Sleep(1000);\r\n            }\r\n\r\n        });\r\n\r\n        th.Start();\r\n\r\n        Thread.Sleep(3000);\r\n        flagStop = true;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo08a-return-value.cs",
    "content": "﻿/*\r\n * GETTING RETURNED VALUES FROM THREADS\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo08A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        int resFoo = 0, resBar = 0;\r\n\r\n        var thFoo = new Thread(() => resFoo = doubleValue(5));\r\n        var thBar = new Thread(() => resBar = doubleValue(80));\r\n\r\n        thFoo.Start();\r\n        thBar.Start();\r\n\r\n        // Wait until thFoo and thBar finish\r\n        thFoo.Join();\r\n        thBar.Join();\r\n\r\n        Console.WriteLine(resFoo);\r\n        Console.WriteLine(resBar);\r\n    }\r\n\r\n\r\n    private int doubleValue(int value)\r\n    {\r\n        return value * 2;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo08b-return-value.cs",
    "content": "/*\r\n * GETTING RETURNED VALUES FROM THREADS\r\n * Using AutoResetEvent to notify the thread which is waiting for returned values\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo08B : IRunnable\r\n{\r\n    private AutoResetEvent re;\r\n    private int result;\r\n\r\n\r\n    public void run()\r\n    {\r\n        re = new AutoResetEvent(false);\r\n\r\n        new Thread(() => doubleValue(5)).Start();\r\n\r\n        // Wait until we receive a notification from re\r\n        re.WaitOne();\r\n\r\n        Console.WriteLine(result);\r\n    }\r\n\r\n\r\n    private void doubleValue(int value)\r\n    {\r\n        result = value * 2;\r\n\r\n        Thread.Sleep(2000);\r\n        re.Set(); // Notify the threads that are waiting for re\r\n\r\n        Thread.Sleep(2000);\r\n        Console.WriteLine(\"This thread is exiting\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo09-detach.cs",
    "content": "﻿/*\r\n * THREAD DETACHING\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo09 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var thFoo = new Thread(() => {\r\n            Console.WriteLine(\"foo is starting...\");\r\n            Thread.Sleep(2000);\r\n            Console.WriteLine(\"foo is exiting...\");\r\n        });\r\n\r\n\r\n        thFoo.IsBackground = true;\r\n        thFoo.Start();\r\n\r\n\r\n        // If I comment this statement,\r\n        // thFoo will be forced into terminating with main thread\r\n        Thread.Sleep(3000);\r\n\r\n\r\n        Console.WriteLine(\"Main thread is exiting\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo10-yield.cs",
    "content": "﻿/*\r\n * THREAD YIELDING\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo10 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        DateTime tpStartMeasure = DateTime.Now;\r\n\r\n        littleSleep(1);\r\n\r\n        var timeElapsed = DateTime.Now.Subtract(tpStartMeasure).TotalMilliseconds;\r\n\r\n        Console.WriteLine($\"Elapsed time: {timeElapsed} miliseonds\");\r\n    }\r\n\r\n\r\n    private void littleSleep(double miliseconds)\r\n    {\r\n        DateTime tpEnd = DateTime.Now.AddMilliseconds(miliseconds);\r\n\r\n        do\r\n        {\r\n            Thread.Yield();\r\n        }\r\n        while (DateTime.Now.CompareTo(tpEnd) < 0);\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo11a01-exec-service.cs",
    "content": "﻿/*\r\n * EXECUTOR SERVICES AND THREAD POOLS\r\n * Version A01: System.Threading.ThreadPool\r\n *              Introduction\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo11A01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        WaitCallback wcb = (arg) =>\r\n        {\r\n            Console.WriteLine(\"Hello Multithreading\");\r\n        };\r\n\r\n\r\n        ThreadPool.QueueUserWorkItem(wcb);\r\n        ThreadPool.QueueUserWorkItem(doTask);\r\n        ThreadPool.QueueUserWorkItem(arg => Console.WriteLine(\"Hello World\"));\r\n\r\n\r\n        // Wait one second for task completion\r\n        Thread.Sleep(1000);\r\n    }\r\n\r\n\r\n    private void doTask(object arg)\r\n    {\r\n        Console.WriteLine(\"Hello the Executor Service\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo11a02-exec-service.cs",
    "content": "﻿/*\r\n * EXECUTOR SERVICES AND THREAD POOLS\r\n * Version A02: System.Threading.ThreadPool\r\n *              Passing arguments\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo11A02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        WaitCallback wcb = (arg) =>\r\n        {\r\n            Console.WriteLine($\"Hello Multithreading {arg}\");\r\n        };\r\n\r\n\r\n        ThreadPool.QueueUserWorkItem(wcb, 1);\r\n        ThreadPool.QueueUserWorkItem(doTask, 2);\r\n\r\n        ThreadPool.QueueUserWorkItem(\r\n            arg => Console.WriteLine($\"Hello World {arg}\"),\r\n            3\r\n        );\r\n\r\n\r\n        // Wait one second for task completion\r\n        Thread.Sleep(1000);\r\n    }\r\n\r\n\r\n    private void doTask(object arg)\r\n    {\r\n        Console.WriteLine($\"Hello the Executor Service {arg}\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo11a03-exec-service.cs",
    "content": "﻿/*\r\n * EXECUTOR SERVICES AND THREAD POOLS\r\n * Version A03: System.Threading.ThreadPool\r\n *              Returning values\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo11A03 : IRunnable\r\n{\r\n    private AutoResetEvent re;\r\n\r\n\r\n    public void run()\r\n    {\r\n        re = new AutoResetEvent(false);\r\n\r\n        int[] arg = new int[2];\r\n        arg[0] = 7; // input\r\n\r\n        ThreadPool.QueueUserWorkItem(getSquared, arg);\r\n\r\n        // Wait until the thread completes\r\n        re.WaitOne();\r\n\r\n        int result = arg[1];\r\n        Console.WriteLine(result);\r\n    }\r\n\r\n\r\n    private void getSquared(dynamic arg)\r\n    {\r\n        int i = arg[0];\r\n        arg[1] = i * i;\r\n        re.Set();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo11a04-exec-service.cs",
    "content": "﻿/*\r\n * EXECUTOR SERVICES AND THREAD POOLS\r\n * Version A04: System.Threading.ThreadPool\r\n *              Waiting for task completion by AutoResetEvent\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo11A04 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        // INIT\r\n        const int N = 5;\r\n\r\n        var lstResult = new int[N];\r\n        var lstEvent = new AutoResetEvent[N];\r\n\r\n        for (int i = 0; i < N; ++i)\r\n            lstEvent[i] = new AutoResetEvent(false);\r\n\r\n\r\n        // RUN\r\n        for (int i = 0; i < N; ++i)\r\n        {\r\n            int ith = i;\r\n\r\n            ThreadPool.QueueUserWorkItem(_ =>\r\n                {\r\n                    lstResult[ith] = ith * ith;\r\n                    lstEvent[ith].Set();\r\n                }\r\n            );\r\n        }\r\n\r\n\r\n        WaitHandle.WaitAll(lstEvent);\r\n\r\n\r\n        // PRINT RESULTS\r\n        Array.ForEach(lstResult, res => Console.WriteLine(res));\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo11a05-exec-service.cs",
    "content": "/*\r\n * EXECUTOR SERVICES AND THREAD POOLS\r\n * Version A05: System.Threading.ThreadPool\r\n *              Waiting for task completion by CountdownEvent\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo11A05 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        // INIT\r\n        const int N = 5;\r\n\r\n        var lstResult = new int[N];\r\n        var cde = new CountdownEvent(N);\r\n\r\n\r\n        // RUN\r\n        for (int i = 0; i < N; ++i)\r\n        {\r\n            int ith = i;\r\n\r\n            ThreadPool.QueueUserWorkItem(_ =>\r\n                {\r\n                    lstResult[ith] = ith * ith;\r\n                    cde.Signal();\r\n                }\r\n            );\r\n        }\r\n\r\n\r\n        cde.Wait();\r\n\r\n\r\n        // PRINT RESULTS\r\n        Array.ForEach(lstResult, res => Console.WriteLine(res));\r\n\r\n\r\n        cde.Dispose();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo11b01-exec-service-parallel.cs",
    "content": "/*\r\n * EXECUTOR SERVICES AND THREAD POOLS\r\n * Version B01: System.Threading.Tasks.Parallel\r\n *              Introduction\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass Demo11B01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Action<int> action = arg =>\r\n        {\r\n            Console.WriteLine($\"Hello Multithreading {arg}\");\r\n        };\r\n\r\n\r\n        Parallel.For(0, 4, action);\r\n        // Main thread shall pause until all threads are completed\r\n\r\n        Parallel.For(4, 8, doTask);\r\n        // Main thread shall pause until all threads are completed\r\n\r\n        Parallel.For(8, 12, i => Console.WriteLine($\"Hello World {i}\"));\r\n        // Main thread shall pause until all threads are completed\r\n    }\r\n\r\n\r\n    private void doTask(int arg)\r\n    {\r\n        Console.WriteLine($\"Hello the Executor Service {arg}\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo11b02-exec-service-parallel.cs",
    "content": "/*\r\n * EXECUTOR SERVICES AND THREAD POOLS\r\n * Version B02: System.Threading.Tasks.Parallel\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass Demo11B02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        // EXAMPLE 1\r\n        const int N = 5;\r\n        var lstResult = new int[N];\r\n\r\n        Parallel.For(0, N, i =>\r\n        {\r\n            lstResult[i] = i * i;\r\n        });\r\n\r\n        Array.ForEach(lstResult, res => Console.Write(res + \"\\t\"));\r\n        Console.WriteLine();\r\n\r\n\r\n        // EXAMPLE 2\r\n        var lstArg = new double[] { -3.14, -9.8, 0, 1, -6 };\r\n\r\n        Parallel.ForEach(lstArg, arg =>\r\n        {\r\n            Console.Write(Math.Abs(arg) + \"\\t\");\r\n        });\r\n\r\n        Console.WriteLine();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo11c-exec-service.cs",
    "content": "/*\r\n * EXECUTOR SERVICES AND THREAD POOLS\r\n * Version C: Fixed thread pools\r\n */\r\nusing System;\r\nusing System.Threading;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass Demo11C : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int N = 5;\r\n\r\n        var prlOptions = new ParallelOptions { MaxDegreeOfParallelism = 2 };\r\n\r\n        Parallel.For(0, N, prlOptions,\r\n            i =>\r\n            {\r\n                char name = (char)(i + 'A');\r\n                Console.WriteLine($\"Task {name} is starting\");\r\n                Thread.Sleep(3000);\r\n                Console.WriteLine($\"Task {name} is completed\");\r\n            }\r\n        );\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo12a-race-condition.cs",
    "content": "﻿/*\r\n * RACE CONDITIONS\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo12A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int NUM_THREADS = 5;\r\n        var lstTh = new List<Thread>();\r\n\r\n\r\n        for (int i = 0; i < NUM_THREADS; ++i)\r\n        {\r\n            int ith = i;\r\n\r\n            lstTh.Add(new Thread(() =>\r\n            {\r\n                Thread.Sleep(1000);\r\n                Console.WriteLine(ith);\r\n            }));\r\n        }\r\n\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo12b01-data-race-single.cs",
    "content": "﻿/*\r\n * DATA RACES\r\n * Version 01: Without multithreading\r\n */\r\nusing System;\r\nusing System.Linq;\r\n\r\n\r\n\r\nclass Demo12B01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int N = 8;\r\n        int result = getResult(N);\r\n        Console.WriteLine(\"Numbers of integers that are divisible by 2 or 3 is: \" + result);\r\n    }\r\n\r\n\r\n    private int getResult(int N)\r\n    {\r\n        var a = Enumerable.Repeat(false, N + 1).ToArray();\r\n\r\n        for (int i = 1; i <= N; ++i)\r\n            if (i % 2 == 0 || i % 3 == 0)\r\n                a[i] = true;\r\n\r\n        // res = number of true values in array\r\n        int res = a.Count(val => val);\r\n        return res;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo12b02-data-race-multi.cs",
    "content": "﻿/*\r\n * DATA RACES\r\n * Version 02: Multithreading\r\n */\r\nusing System;\r\nusing System.Threading;\r\nusing System.Linq;\r\n\r\n\r\n\r\nclass Demo12B02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int N = 8;\r\n        var a = Enumerable.Repeat(false, N + 1).ToArray();\r\n\r\n\r\n        var thDiv2 = new Thread(() =>\r\n        {\r\n            for (int i = 2; i <= N; i += 2)\r\n                a[i] = true;\r\n        });\r\n\r\n        var thDiv3 = new Thread(() =>\r\n        {\r\n            for (int i = 3; i <= N; i += 3)\r\n                a[i] = true;\r\n        });\r\n\r\n\r\n        thDiv2.Start();\r\n        thDiv3.Start();\r\n        thDiv2.Join();\r\n        thDiv3.Join();\r\n\r\n\r\n        int result = a.Count(val => val);\r\n        Console.WriteLine(\"Numbers of integers that are divisible by 2 or 3 is: \" + result);\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo12c01-race-cond-data-race.cs",
    "content": "﻿/*\r\n * RACE CONDITIONS AND DATA RACES\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo12C01 : IRunnable\r\n{\r\n    private int counter;\r\n\r\n\r\n    public void run()\r\n    {\r\n        const int NUM_THREADS = 16;\r\n\r\n        counter = 0;\r\n\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_THREADS; ++i)\r\n        {\r\n            lstTh.Add(new Thread(doTask));\r\n        }\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n        Console.WriteLine(\"counter = \" + counter);\r\n        // We are NOT sure that counter = 16000\r\n    }\r\n\r\n\r\n    private void doTask()\r\n    {\r\n        Thread.Sleep(1000);\r\n\r\n        for (int i = 0; i < 1000; ++i)\r\n            ++counter;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo12c02-race-cond-data-race.cs",
    "content": "﻿/*\r\n * RACE CONDITIONS AND DATA RACES\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo12C02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var thA = new Thread(() =>\r\n        {\r\n            Thread.Sleep(30);\r\n\r\n            while (Global.counter < 10)\r\n                ++Global.counter;\r\n\r\n            Console.WriteLine(\"A won !!!\");\r\n        });\r\n\r\n\r\n        var thB = new Thread(() =>\r\n        {\r\n            Thread.Sleep(30);\r\n\r\n            while (Global.counter > -10)\r\n                --Global.counter;\r\n\r\n            Console.WriteLine(\"B won !!!\");\r\n        });\r\n\r\n\r\n        thA.Start();\r\n        thB.Start();\r\n    }\r\n\r\n\r\n    class Global\r\n    {\r\n        public static int counter = 0;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo13a-mutex.cs",
    "content": "﻿/*\r\n * MUTEXES\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo13A : IRunnable\r\n{\r\n    private Mutex mut;\r\n    private int counter;\r\n\r\n\r\n    public void run()\r\n    {\r\n        const int NUM_THREADS = 16;\r\n\r\n        mut = new Mutex();\r\n        counter = 0;\r\n\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_THREADS; ++i)\r\n        {\r\n            lstTh.Add(new Thread(doTask));\r\n        }\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n        mut.Dispose();\r\n\r\n        Console.WriteLine(\"counter = \" + counter);\r\n        // We are sure that counter = 16000\r\n    }\r\n\r\n\r\n    private void doTask()\r\n    {\r\n        Thread.Sleep(1000);\r\n\r\n        mut.WaitOne();\r\n\r\n        for (int i = 0; i < 1000; ++i)\r\n            ++counter;\r\n\r\n        mut.ReleaseMutex();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo13b-mutex-trylock.cs",
    "content": "﻿/*\r\n * MUTEXES\r\n * Locking with a nonblocking mutex\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo13B : IRunnable\r\n{\r\n    private Mutex mut;\r\n    private int counter;\r\n\r\n\r\n    public void run()\r\n    {\r\n        const int NUM_THREADS = 90;\r\n\r\n        mut = new Mutex();\r\n        counter = 0;\r\n\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_THREADS; ++i)\r\n        {\r\n            lstTh.Add(new Thread(routineCounter));\r\n        }\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n        mut.Dispose();\r\n\r\n        Console.WriteLine(\"counter = \" + counter);\r\n    }\r\n\r\n\r\n    private void routineCounter()\r\n    {\r\n        Thread.Sleep(1000);\r\n\r\n        if (false == mut.WaitOne(1))\r\n        {\r\n            return;\r\n        }\r\n\r\n        for (int i = 0; i < 1000; ++i)\r\n            ++counter;\r\n\r\n        mut.ReleaseMutex();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo14-synchronized-block.cs",
    "content": "﻿/*\r\n * SYNCHRONIZED BLOCKS\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo14 : IRunnable\r\n{\r\n    private object theLock = new object();\r\n    private int counter;\r\n\r\n\r\n    public void run()\r\n    {\r\n        const int NUM_THREADS = 16;\r\n\r\n        counter = 0;\r\n\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_THREADS; ++i)\r\n        {\r\n            lstTh.Add(new Thread(doTask));\r\n        }\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n        Console.WriteLine(\"counter = \" + counter);\r\n        // We are sure that counter = 16000\r\n    }\r\n\r\n\r\n    private void doTask()\r\n    {\r\n        Thread.Sleep(1000);\r\n\r\n        lock (theLock)\r\n        {\r\n            for (int i = 0; i < 1000; ++i)\r\n                ++counter;\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo15a-deadlock.cs",
    "content": "﻿/*\r\n * DEADLOCK\r\n * Version A\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo15A : IRunnable\r\n{\r\n    private Mutex mut;\r\n\r\n\r\n    public void run()\r\n    {\r\n        mut = new Mutex();\r\n\r\n        var thFoo = new Thread(() => doTask(\"foo\"));\r\n        var thBar = new Thread(() => doTask(\"bar\"));\r\n\r\n        thFoo.Start();\r\n        thBar.Start();\r\n\r\n        thFoo.Join();\r\n        thBar.Join();\r\n\r\n        // The app may throw System.Threading.AbandonedMutexException\r\n        Console.WriteLine(\"You will never see this statement due to deadlock!\");\r\n    }\r\n\r\n\r\n    private void doTask(string name)\r\n    {\r\n        mut.WaitOne();\r\n\r\n        Console.WriteLine(name + \" acquired resource\");\r\n\r\n        // mut.ReleaseMutex(); // Forget this statement ==> deadlock\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo15b-deadlock.cs",
    "content": "﻿/*\r\n * DEADLOCK\r\n * Version B\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo15B : IRunnable\r\n{\r\n    private object resourceA = \"resourceA\";\r\n    private object resourceB = \"resourceB\";\r\n\r\n\r\n    public void run()\r\n    {\r\n        var thFoo = new Thread(() =>\r\n        {\r\n            lock (resourceA)\r\n            {\r\n                Console.WriteLine(\"foo acquired resource A\");\r\n                Thread.Sleep(1000);\r\n\r\n                lock (resourceB)\r\n                {\r\n                    Console.WriteLine(\"foo acquired resource B\");\r\n                }\r\n            }\r\n        });\r\n\r\n\r\n        var thBar = new Thread(() =>\r\n        {\r\n            lock (resourceB)\r\n            {\r\n                Console.WriteLine(\"bar acquired resource B\");\r\n                Thread.Sleep(1000);\r\n\r\n                lock (resourceA)\r\n                {\r\n                    Console.WriteLine(\"bar acquired resource A\");\r\n                }\r\n            }\r\n        });\r\n\r\n\r\n        thFoo.Start();\r\n        thBar.Start();\r\n        thFoo.Join();\r\n        thBar.Join();\r\n\r\n\r\n        Console.WriteLine(\"You will never see this statement due to deadlock!\");\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo16-monitor.cs",
    "content": "﻿/*\r\n * MONITORS\r\n * Implementation of a monitor for managing a counter\r\n *\r\n * Notes:\r\n * - In C#, Monitor is already available (class System.Threading.Monitor).\r\n * - From a simple point of view in C#,\r\n *      \"lock\" keyword is shorthand for using System.Threading.Monitor + try/finally.\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo16 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var counter = new Counter();\r\n        var monitor = new MyMonitor();\r\n        monitor.init(counter);\r\n\r\n\r\n        const int NUM_THREADS = 16;\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_THREADS; ++i)\r\n        {\r\n            lstTh.Add(new Thread(() =>\r\n            {\r\n                Thread.Sleep(1000);\r\n\r\n                for (int j = 0; j < 1000; ++j)\r\n                    monitor.increaseCounter();\r\n            }));\r\n        }\r\n\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n\r\n        Console.WriteLine(\"counter = \" + counter.value);\r\n    }\r\n\r\n\r\n\r\n    class Counter\r\n    {\r\n        public int value = 0;\r\n    }\r\n\r\n\r\n\r\n    class MyMonitor\r\n    {\r\n        private Counter counter = null;\r\n\r\n        public void init(Counter counter)\r\n        {\r\n            this.counter = counter;\r\n        }\r\n\r\n        public void increaseCounter()\r\n        {\r\n            lock (counter)\r\n            {\r\n                ++counter.value;\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo17a-reentrant-lock.cs",
    "content": "﻿/*\r\n * REENTRANT LOCKS (RECURSIVE MUTEXES)\r\n *\r\n * In C#, \"lock\" keyword and System.Threading.Monitor class allow re-entrancy.\r\n * That means they are reentrant locks by default.\r\n *\r\n * Version A: A simple example\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo17A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const string resource = \"resource\";\r\n\r\n        new Thread(() =>\r\n        {\r\n            lock (resource)\r\n            {\r\n                Console.WriteLine(\"First time acquiring the resource\");\r\n\r\n                lock (resource)\r\n                {\r\n                    Console.WriteLine(\"Second time acquiring the resource\");\r\n                }\r\n            }\r\n        }).Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo17b-reentrant-lock.cs",
    "content": "/*\r\n * REENTRANT LOCKS (RECURSIVE MUTEXES)\r\n * Version A: A multithreaded app example\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo17B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int NUM_THREADS = 3;\r\n        var lstWk = new List<Worker>();\r\n\r\n        for (int i = 0; i < NUM_THREADS; ++i)\r\n        {\r\n            lstWk.Add(new Worker((char)(i + 'A')));\r\n        }\r\n\r\n        lstWk.ForEach(wk => wk.start());\r\n    }\r\n\r\n\r\n\r\n    class Worker\r\n    {\r\n        private static object lk = new object();\r\n\r\n        private Thread th;\r\n        private char name;\r\n\r\n        public Worker(char name)\r\n        {\r\n            this.name = name;\r\n            this.th = new Thread(doTask);\r\n        }\r\n\r\n        private void doTask()\r\n        {\r\n            Thread.Sleep(1000);\r\n\r\n            lock (lk)\r\n            {\r\n                Console.WriteLine($\"First time {name} acquiring the resource\");\r\n\r\n                lock (lk)\r\n                {\r\n                    Console.WriteLine($\"Second time {name} acquiring the resource\");\r\n                }\r\n            }\r\n        }\r\n\r\n        public void start()\r\n        {\r\n            th.Start();\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo18a01-barrier.cs",
    "content": "﻿/*\r\n * BARRIERS AND LATCHES\r\n * Version A: Barriers\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo18A01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var syncPoint = new Barrier(participantCount: 3);\r\n\r\n\r\n        var lstArg = new List<ThreadArg>\r\n        {\r\n            new ThreadArg{ userName = \"lorem\", waitTime = 1 },\r\n            new ThreadArg{ userName = \"ipsum\", waitTime = 2 },\r\n            new ThreadArg{ userName = \"dolor\", waitTime = 3 }\r\n        };\r\n\r\n\r\n        lstArg.ForEach(arg => new Thread(() =>\r\n        {\r\n\r\n            Thread.Sleep(1000 * arg.waitTime);\r\n\r\n            Console.WriteLine(\"Get request from \" + arg.userName);\r\n            syncPoint.SignalAndWait();\r\n\r\n            Console.WriteLine(\"Process request for \" + arg.userName);\r\n            syncPoint.SignalAndWait();\r\n\r\n            Console.WriteLine(\"Done \" + arg.userName);\r\n\r\n        }).Start());\r\n    }\r\n\r\n\r\n\r\n    class ThreadArg\r\n    {\r\n        public string userName;\r\n        public int waitTime;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo18a03-barrier.cs",
    "content": "﻿/*\r\n * BARRIERS AND LATCHES\r\n * Version A: Barriers\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo18A03 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var syncPointA = new Barrier(participantCount: 2);\r\n        var syncPointB = new Barrier(participantCount: 2);\r\n\r\n\r\n        var lstArg = new List<ThreadArg>\r\n        {\r\n            new ThreadArg{ userName = \"lorem\", waitTime = 1 },\r\n            new ThreadArg{ userName = \"ipsum\", waitTime = 3 }\r\n        };\r\n\r\n\r\n        lstArg.ForEach(arg => new Thread(() =>\r\n        {\r\n\r\n            Thread.Sleep(1000 * arg.waitTime);\r\n\r\n            Console.WriteLine(\"Get request from \" + arg.userName);\r\n            syncPointA.SignalAndWait();\r\n\r\n            Thread.Sleep(4000);\r\n\r\n            Console.WriteLine(\"Process request for \" + arg.userName);\r\n            syncPointB.SignalAndWait();\r\n\r\n            Console.WriteLine(\"Done \" + arg.userName);\r\n\r\n        }).Start());\r\n    }\r\n\r\n\r\n\r\n    class ThreadArg\r\n    {\r\n        public string userName;\r\n        public int waitTime;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo18b01-latch.cs",
    "content": "﻿/*\r\n * BARRIERS AND LATCHES\r\n * Version B: Count-down latches\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo18B01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var syncPoint = new CountdownEvent(3);\r\n\r\n\r\n        var lstArg = new List<ThreadArg>\r\n        {\r\n            new ThreadArg{ userName = \"lorem\", waitTime = 1 },\r\n            new ThreadArg{ userName = \"ipsum\", waitTime = 2 },\r\n            new ThreadArg{ userName = \"dolor\", waitTime = 3 }\r\n        };\r\n\r\n\r\n        lstArg.ForEach(arg => new Thread(() =>\r\n        {\r\n\r\n            Thread.Sleep(1000 * arg.waitTime);\r\n\r\n            Console.WriteLine(\"Get request from \" + arg.userName);\r\n\r\n            syncPoint.Signal();\r\n            syncPoint.Wait();\r\n\r\n            Console.WriteLine(\"Done \" + arg.userName);\r\n\r\n        }).Start());\r\n    }\r\n\r\n\r\n\r\n    class ThreadArg\r\n    {\r\n        public string userName;\r\n        public int waitTime;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo18b02-latch.cs",
    "content": "﻿/*\r\n * BARRIERS AND LATCHES\r\n * Version B: Count-down latches\r\n *\r\n * Main thread waits for 3 child threads to get enough data to progress.\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo18B02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var lstArg = new List<ThreadArg>\r\n        {\r\n            new ThreadArg{ message = \"Send request to egg.net to get data\", waitTime = 6 },\r\n            new ThreadArg{ message = \"Send request to foo.org to get data\", waitTime = 2 },\r\n            new ThreadArg{ message = \"Send request to bar.com to get data\", waitTime = 4 }\r\n        };\r\n\r\n\r\n        var syncPoint = new CountdownEvent(lstArg.Count);\r\n\r\n\r\n        lstArg.ForEach(arg => new Thread(() =>\r\n        {\r\n\r\n            Thread.Sleep(1000 * arg.waitTime);\r\n\r\n            Console.WriteLine(arg.message);\r\n            syncPoint.Signal();\r\n\r\n            Thread.Sleep(8000);\r\n            Console.WriteLine(\"Cleanup\");\r\n\r\n        }).Start());\r\n\r\n\r\n        syncPoint.Wait();\r\n        Console.WriteLine(\"\\nNow we have enough data to progress to next step\\n\");\r\n    }\r\n\r\n\r\n\r\n    class ThreadArg\r\n    {\r\n        public string message;\r\n        public int waitTime;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo19-read-write-lock.cs",
    "content": "﻿/*\r\n * READ-WRITE LOCKS\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Linq;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo19 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var rwlk = new ReaderWriterLock();\r\n\r\n\r\n        const int NUM_THREADS_READ = 10;\r\n        const int NUM_THREADS_WRITE = 4;\r\n        const int NUM_ARGS = 3;\r\n\r\n\r\n        var arg = Enumerable.Range(0, NUM_ARGS).ToArray();\r\n        var rand = new Random();\r\n\r\n        var lstThRead = new List<Thread>();\r\n        var lstThWrite = new List<Thread>();\r\n\r\n\r\n        for (int i = 0; i < NUM_THREADS_READ; ++i)\r\n        {\r\n            lstThRead.Add(new Thread(() =>\r\n            {\r\n                int waitTime = arg[ rand.Next(arg.Length) ];\r\n                Thread.Sleep(1000 * waitTime);\r\n\r\n                // Should catch exception\r\n                rwlk.AcquireReaderLock(1000);\r\n\r\n                Console.WriteLine(\"read: \" + Resource.value);\r\n\r\n                rwlk.ReleaseReaderLock();\r\n            }));\r\n        }\r\n\r\n\r\n        for (int i = 0; i < NUM_THREADS_WRITE; ++i)\r\n        {\r\n            lstThWrite.Add(new Thread(() =>\r\n            {\r\n                int waitTime = arg[ rand.Next(arg.Length) ];\r\n                Thread.Sleep(1000 * waitTime);\r\n\r\n                // Should catch exception\r\n                rwlk.AcquireWriterLock(1000);\r\n\r\n                Resource.value = rand.Next(100);\r\n                Console.WriteLine(\"write: \" + Resource.value);\r\n\r\n                rwlk.ReleaseWriterLock();\r\n            }));\r\n        }\r\n\r\n\r\n        lstThRead.ForEach(th => th.Start());\r\n        lstThWrite.ForEach(th => th.Start());\r\n    }\r\n\r\n\r\n\r\n    class Resource\r\n    {\r\n        public static volatile int value;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo20a01-semaphore.cs",
    "content": "﻿/*\r\n * SEMAPHORES\r\n * Version A: Paper sheets and packages\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo20A01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var semPackage = new Semaphore(0, int.MaxValue); // initialCount = 0, maximumCount = int.MaxValue\r\n\r\n\r\n        ThreadStart makeOneSheet = () =>\r\n        {\r\n            for (int i = 0; i < 4; ++i)\r\n            {\r\n                Console.WriteLine(\"Make 1 sheet\");\r\n                Thread.Sleep(1000);\r\n                semPackage.Release();\r\n            }\r\n        };\r\n\r\n\r\n        ThreadStart combineOnePackage = () =>\r\n        {\r\n            for (int i = 0; i < 4; ++i)\r\n            {\r\n                semPackage.WaitOne();\r\n                semPackage.WaitOne();\r\n                Console.WriteLine(\"Combine 2 sheets into 1 package\");\r\n            }\r\n        };\r\n\r\n\r\n        new Thread(makeOneSheet).Start();\r\n        new Thread(makeOneSheet).Start();\r\n        new Thread(combineOnePackage).Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo20a02-semaphore.cs",
    "content": "﻿/*\r\n * SEMAPHORES\r\n * Version A: Paper sheets and packages\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo20A02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var semPackage = new Semaphore(0, int.MaxValue);\r\n        var semSheet = new Semaphore(2, int.MaxValue);\r\n\r\n\r\n        ThreadStart makeOneSheet = () =>\r\n        {\r\n            for (int i = 0; i < 4; ++i)\r\n            {\r\n                semSheet.WaitOne();\r\n                Console.WriteLine(\"Make 1 sheet\");\r\n                semPackage.Release();\r\n            }\r\n        };\r\n\r\n\r\n        ThreadStart combineOnePackage = () =>\r\n        {\r\n            for (int i = 0; i < 4; ++i)\r\n            {\r\n                semPackage.WaitOne();\r\n                semPackage.WaitOne();\r\n\r\n                Console.WriteLine(\"Combine 2 sheets into 1 package\");\r\n                Thread.Sleep(1000);\r\n\r\n                semSheet.Release();\r\n                semSheet.Release();\r\n            }\r\n        };\r\n\r\n\r\n        new Thread(makeOneSheet).Start();\r\n        new Thread(makeOneSheet).Start();\r\n        new Thread(combineOnePackage).Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo20a03-semaphore-deadlock.cs",
    "content": "﻿/*\r\n * SEMAPHORES\r\n * Version A: Paper sheets and packages\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo20A03 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var semPackage = new Semaphore(0, int.MaxValue);\r\n        var semSheet = new Semaphore(2, int.MaxValue);\r\n\r\n\r\n        ThreadStart makeOneSheet = () =>\r\n        {\r\n            for (int i = 0; i < 4; ++i)\r\n            {\r\n                semSheet.WaitOne();\r\n                Console.WriteLine(\"Make 1 sheet\");\r\n                semPackage.Release();\r\n            }\r\n        };\r\n\r\n\r\n        ThreadStart combineOnePackage = () =>\r\n        {\r\n            for (int i = 0; i < 4; ++i)\r\n            {\r\n                semPackage.WaitOne();\r\n                semPackage.WaitOne();\r\n\r\n                Console.WriteLine(\"Combine 2 sheets into 1 package\");\r\n                Thread.Sleep(1000);\r\n\r\n                semSheet.Release();\r\n                // Missing one statement: semSheet.Release() ==> deadlock\r\n            }\r\n        };\r\n\r\n\r\n        new Thread(makeOneSheet).Start();\r\n        new Thread(makeOneSheet).Start();\r\n        new Thread(combineOnePackage).Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo20b-semaphore.cs",
    "content": "﻿/*\r\n * SEMAPHORES\r\n * Version B: Tires and chassis\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo20B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var semTire = new Semaphore(4, int.MaxValue);\r\n        var semChassis = new Semaphore(0, int.MaxValue);\r\n\r\n\r\n        ThreadStart makeTire = () =>\r\n        {\r\n            for (int i = 0; i < 8; ++i)\r\n            {\r\n                semTire.WaitOne();\r\n\r\n                Console.WriteLine(\"Make 1 tire\");\r\n                Thread.Sleep(1000);\r\n\r\n                semChassis.Release();\r\n            }\r\n        };\r\n\r\n\r\n        ThreadStart makeChassis = () =>\r\n        {\r\n            for (int i = 0; i < 4; ++i)\r\n            {\r\n                semChassis.WaitOne();\r\n                semChassis.WaitOne();\r\n                semChassis.WaitOne();\r\n                semChassis.WaitOne();\r\n\r\n                Console.WriteLine(\"Make 1 chassis\");\r\n                Thread.Sleep(3000);\r\n\r\n                semTire.Release();\r\n                semTire.Release();\r\n                semTire.Release();\r\n                semTire.Release();\r\n            }\r\n        };\r\n\r\n\r\n        new Thread(makeTire).Start();\r\n        new Thread(makeTire).Start();\r\n        new Thread(makeChassis).Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo21a01-condition-variable.cs",
    "content": "﻿/*\r\n * CONDITION VARIABLES\r\n *\r\n * In my opinion, the best mechanism to demonstrate the term \"Condition Variable\"\r\n * in C# is System.Threading.Monitor.\r\n *\r\n * Monitor.Wait(conditionVariable)      ==> Wait\r\n * Monitor.Pulse(conditionVariable)     ==> Notify\r\n * Monitor.PulseAll(conditionVariable)  ==> Notify All\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo21A01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var conditionVar = new object();\r\n\r\n        ThreadStart foo = () =>\r\n        {\r\n            Console.WriteLine(\"foo is waiting...\");\r\n\r\n            lock (conditionVar)\r\n            {\r\n                // Waiting for a notification from conditionVar\r\n                Monitor.Wait(conditionVar);\r\n            }\r\n\r\n            Console.WriteLine(\"foo resumed\");\r\n        };\r\n\r\n        ThreadStart bar = () =>\r\n        {\r\n            Thread.Sleep(3000);\r\n\r\n            lock (conditionVar)\r\n            {\r\n                // Notify a thread which is waiting for the notification from conditionVar\r\n                Monitor.Pulse(conditionVar);\r\n            }\r\n        };\r\n\r\n        new Thread(foo).Start();\r\n        new Thread(bar).Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo21a02-condition-variable.cs",
    "content": "﻿/*\r\n * CONDITION VARIABLES\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo21A02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var conditionVar = new object();\r\n\r\n        ThreadStart foo = () =>\r\n        {\r\n            Console.WriteLine(\"foo is waiting...\");\r\n\r\n            lock (conditionVar)\r\n            {\r\n                Monitor.Wait(conditionVar);\r\n            }\r\n\r\n            Console.WriteLine(\"foo resumed\");\r\n        };\r\n\r\n        ThreadStart bar = () =>\r\n        {\r\n            for (int i = 0; i < 3; ++i)\r\n            {\r\n                Thread.Sleep(2000);\r\n\r\n                lock (conditionVar)\r\n                {\r\n                    Monitor.Pulse(conditionVar);\r\n                }\r\n            }\r\n        };\r\n\r\n        for (int i = 0; i < 3; ++i)\r\n            new Thread(foo).Start();\r\n\r\n        new Thread(bar).Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo21a03-condition-variable.cs",
    "content": "﻿/*\r\n * CONDITION VARIABLES\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo21A03 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var conditionVar = new object();\r\n\r\n        ThreadStart foo = () =>\r\n        {\r\n            Console.WriteLine(\"foo is waiting...\");\r\n\r\n            lock (conditionVar)\r\n            {\r\n                Monitor.Wait(conditionVar);\r\n            }\r\n\r\n            Console.WriteLine(\"foo resumed\");\r\n        };\r\n\r\n        ThreadStart bar = () =>\r\n        {\r\n            Thread.Sleep(3000);\r\n\r\n            lock (conditionVar)\r\n            {\r\n                // Notify all waiting threads\r\n                Monitor.PulseAll(conditionVar);\r\n            }\r\n        };\r\n\r\n        for (int i = 0; i < 3; ++i)\r\n            new Thread(foo).Start();\r\n\r\n        new Thread(bar).Start();\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo21b-condition-variable.cs",
    "content": "﻿/*\r\n * CONDITION VARIABLES\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo21B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        new Thread(foo).Start();\r\n        new Thread(egg).Start();\r\n    }\r\n\r\n\r\n    private void foo()\r\n    {\r\n        for (; ; )\r\n        {\r\n            lock (Global.conditionVar)\r\n            {\r\n                Monitor.Wait(Global.conditionVar);\r\n\r\n                Global.counter += 1;\r\n                Console.WriteLine(\"foo counter = \" + Global.counter);\r\n\r\n                if (Global.counter >= Global.COUNT_DONE)\r\n                {\r\n                    return;\r\n                }\r\n            }\r\n        }\r\n    }\r\n\r\n\r\n    private void egg()\r\n    {\r\n        for (; ; )\r\n        {\r\n            lock (Global.conditionVar)\r\n            {\r\n                if (Global.counter < Global.COUNT_HALT_01 || Global.counter > Global.COUNT_HALT_02)\r\n                {\r\n                    Monitor.Pulse(Global.conditionVar);\r\n                }\r\n                else\r\n                {\r\n                    Global.counter += 1;\r\n                    Console.WriteLine(\"egg counter = \" + Global.counter);\r\n                }\r\n\r\n                if (Global.counter >= Global.COUNT_DONE)\r\n                {\r\n                    return;\r\n                }\r\n            }\r\n        }\r\n    }\r\n\r\n\r\n    private class Global\r\n    {\r\n        public static object conditionVar = new object();\r\n\r\n        public static int counter = 0;\r\n\r\n        public const int COUNT_HALT_01 = 3;\r\n        public const int COUNT_HALT_02 = 6;\r\n        public const int COUNT_DONE = 10;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo22a-blocking-queue.cs",
    "content": "﻿/*\r\n * BLOCKING QUEUES\r\n * Version A: A slow producer and a fast consumer\r\n */\r\nusing System;\r\nusing System.Collections.Concurrent;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo22A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        // blocking queue with capacity = 2\r\n        var queue = new BlockingCollection<string>(2);\r\n\r\n        new Thread(() => producer(queue)).Start();\r\n        new Thread(() => consumer(queue)).Start();\r\n\r\n        // Should call queue.Dispose();\r\n    }\r\n\r\n\r\n    private void producer(BlockingCollection<string> queue)\r\n    {\r\n        Thread.Sleep(2000);\r\n        queue.Add(\"Alice\");\r\n\r\n        Thread.Sleep(2000);\r\n        queue.Add(\"likes\");\r\n\r\n        Thread.Sleep(2000);\r\n        queue.Add(\"singing\");\r\n    }\r\n\r\n\r\n    private void consumer(BlockingCollection<string> queue)\r\n    {\r\n        string data;\r\n\r\n        for (int i = 0; i < 3; ++i)\r\n        {\r\n            Console.WriteLine(\"\\nWaiting for data...\");\r\n\r\n            data = queue.Take();\r\n\r\n            Console.WriteLine(\"    \" + data);\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo22b-blocking-queue.cs",
    "content": "﻿/*\r\n * BLOCKING QUEUES\r\n * Version B: A fast producer and a slow consumer\r\n */\r\nusing System;\r\nusing System.Collections.Concurrent;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo22B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        // blocking queue with capacity = 2\r\n        var queue = new BlockingCollection<string>(2);\r\n\r\n        new Thread(() => producer(queue)).Start();\r\n        new Thread(() => consumer(queue)).Start();\r\n\r\n        // Should call queue.Dispose();\r\n    }\r\n\r\n\r\n    private void producer(BlockingCollection<string> queue)\r\n    {\r\n        queue.Add(\"Alice\");\r\n        queue.Add(\"likes\");\r\n\r\n        /*\r\n         * Due to reaching the maximum of capacity = 2, when executing queue.put(\"singing\"),\r\n         * this thread is going to sleep until the queue removes an element.\r\n        */\r\n\r\n        queue.Add(\"singing\");\r\n    }\r\n\r\n\r\n    private void consumer(BlockingCollection<string> queue)\r\n    {\r\n        string data;\r\n        Thread.Sleep(2000);\r\n\r\n        for (int i = 0; i < 3; ++i)\r\n        {\r\n            Console.WriteLine(\"\\nWaiting for data...\");\r\n\r\n            data = queue.Take();\r\n\r\n            Console.WriteLine(\"    \" + data);\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo23a01-thread-local.cs",
    "content": "﻿/*\r\n * THREAD-LOCAL STORAGE\r\n * Introduction\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo23A01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        // Main thread sets value = \"APPLE\"\r\n        MyTask.set(\"APPLE\");\r\n        Console.WriteLine(MyTask.get());\r\n\r\n        // Child thread gets value\r\n        // Expected output: \"NOT SET\"\r\n        new Thread(() =>\r\n        {\r\n            Console.WriteLine(MyTask.get());\r\n        }).Start();\r\n    }\r\n\r\n\r\n    class MyTask\r\n    {\r\n        private static ThreadLocal<string> data = new ThreadLocal<string>();\r\n\r\n        public static string get()\r\n        {\r\n            if (data.Value is null)\r\n            {\r\n                data.Value = \"NOT SET\";\r\n            }\r\n\r\n            return data.Value;\r\n        }\r\n\r\n        public static void set(string value)\r\n        {\r\n            data.Value = value;\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo23a02-thread-local.cs",
    "content": "﻿/*\r\n * THREAD-LOCAL STORAGE\r\n * Introduction\r\n *\r\n * Use valueFactory function for better initialization.\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo23A02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        // Main thread sets value = \"APPLE\"\r\n        MyTask.set(\"APPLE\");\r\n        Console.WriteLine(MyTask.get());\r\n\r\n        // Child thread gets value\r\n        // Expected output: \"NOT SET\"\r\n        new Thread(() =>\r\n        {\r\n            Console.WriteLine(MyTask.get());\r\n        }).Start();\r\n    }\r\n\r\n\r\n    class MyTask\r\n    {\r\n        private static ThreadLocal<string> data = new ThreadLocal<string>(() => \"NOT SET\");\r\n\r\n        public static string get()\r\n        {\r\n            return data.Value;\r\n        }\r\n\r\n        public static void set(string value)\r\n        {\r\n            data.Value = value;\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo23b-thread-local.cs",
    "content": "﻿/*\r\n * THREAD-LOCAL STORAGE\r\n * Avoiding synchronization using Thread-Local Storage\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo23B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int NUM_THREADS = 3;\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_THREADS; ++i)\r\n        {\r\n            int t = i;\r\n\r\n            lstTh.Add(new Thread(() =>\r\n            {\r\n                Thread.Sleep(1000);\r\n\r\n                for (int i = 0; i < 1000; ++i)\r\n                    MyTask.increaseCounter();\r\n\r\n                Console.WriteLine($\"Thread {t} gives counter = {MyTask.getCounter()}\");\r\n            }));\r\n        }\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n\r\n        /*\r\n         * By using thread-local storage, each thread has its own counter.\r\n         * So, the counter in one thread is completely independent of each other.\r\n         *\r\n         * Thread-local storage helps us to AVOID SYNCHRONIZATION.\r\n         */\r\n    }\r\n\r\n\r\n    class Counter\r\n    {\r\n        public int value = 0;\r\n    }\r\n\r\n\r\n    class MyTask\r\n    {\r\n        private static ThreadLocal<Counter> thlCounter = new ThreadLocal<Counter>(() => new Counter());\r\n\r\n        public static int getCounter()\r\n        {\r\n            return thlCounter.Value.value;\r\n        }\r\n\r\n        public static void increaseCounter()\r\n        {\r\n            var counter = thlCounter.Value;\r\n            ++counter.value;\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo24-volatile.cs",
    "content": "﻿/*\r\n * THE VOLATILE KEYWORD\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo24 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        Global.isRunning = true;\r\n        new Thread(doTask).Start();\r\n\r\n        Thread.Sleep(6000);\r\n        Global.isRunning = false;\r\n    }\r\n\r\n\r\n    private void doTask()\r\n    {\r\n        while (Global.isRunning)\r\n        {\r\n            Console.WriteLine(\"Running...\");\r\n            Thread.Sleep(2000);\r\n        }\r\n    }\r\n\r\n\r\n    class Global\r\n    {\r\n        public static volatile bool isRunning;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo25a-atomic.cs",
    "content": "﻿/*\r\n * ATOMIC ACCESS\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo25A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        ThreadStart doTask = () =>\r\n        {\r\n            Thread.Sleep(1000);\r\n            Global.counter += 1;\r\n        };\r\n\r\n\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int i = 0; i < 1000; ++i)\r\n            lstTh.Add(new Thread(doTask));\r\n\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n\r\n        // Unpredictable result\r\n        Console.WriteLine(\"counter = \" + Global.counter);\r\n    }\r\n\r\n\r\n    class Global\r\n    {\r\n        public static volatile int counter = 0;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demo/demo25b-atomic.cs",
    "content": "﻿/*\r\n * ATOMIC ACCESS\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Demo25B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        ThreadStart doTask = () =>\r\n        {\r\n            Thread.Sleep(1000);\r\n            Interlocked.Increment(ref Global.counter);\r\n        };\r\n\r\n\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int i = 0; i < 1000; ++i)\r\n            lstTh.Add(new Thread(doTask));\r\n\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n\r\n        // counter = 1000\r\n        Console.WriteLine(\"counter = \" + Global.counter);\r\n    }\r\n\r\n\r\n    class Global\r\n    {\r\n        public static volatile int counter = 0;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-a01.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n#pragma warning disable CS1998\r\n\r\n\r\n\r\nclass DemoExAsyncA01 : IRunnable\r\n{\r\n    public async void run()\r\n    {\r\n        var task = doSomething(\"cleaning house\");\r\n\r\n        while (false == task.IsCompleted)\r\n        {\r\n            // Waiting...\r\n        }\r\n    }\r\n\r\n\r\n    private async Task doSomething(string taskName)\r\n\r\n    {\r\n        Console.WriteLine(\"I am doing \" + taskName);\r\n    }\r\n}\r\n\r\n\r\n\r\n#pragma warning restore CS1998\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-a02.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n#pragma warning disable CS1998\r\n\r\n\r\n\r\nclass DemoExAsyncA02 : IRunnable\r\n{\r\n    public async void run()\r\n    {\r\n        var task = getSquared(7);\r\n\r\n        while (false == task.IsCompleted)\r\n        {\r\n            // Waiting...\r\n        }\r\n\r\n        int result = await task;\r\n        Console.WriteLine(result);\r\n    }\r\n\r\n\r\n\r\n    private async Task<int> getSquared(int x)\r\n\r\n    {\r\n        return x * x;\r\n    }\r\n}\r\n\r\n\r\n\r\n#pragma warning restore CS1998\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-a03.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n#pragma warning disable CS1998\r\n\r\n\r\n\r\nclass DemoExAsyncA03 : IRunnable\r\n{\r\n    public async void run()\r\n    {\r\n        var task = getSquared(7);\r\n\r\n        // Waiting for task completion\r\n        task.Wait();\r\n\r\n        int result = await task;\r\n        Console.WriteLine(result);\r\n    }\r\n\r\n    private async Task<int> getSquared(int x)\r\n    {\r\n        return x * x;\r\n    }\r\n}\r\n\r\n\r\n\r\n#pragma warning restore CS1998\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-a04.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass DemoExAsyncA04 : IRunnable\r\n{\r\n    public async void run()\r\n    {\r\n        var task = cookEggs();\r\n\r\n        // Waiting for task completion\r\n        task.Wait();\r\n\r\n        string result = await task;\r\n        Console.WriteLine(result);\r\n    }\r\n\r\n    private async Task<string> cookEggs()\r\n    {\r\n        Console.WriteLine(\"I am cooking eggs\");\r\n\r\n        // Cooking eggs in 2 seconds\r\n        await Task.Delay(2000);\r\n\r\n        /*\r\n         * Avoid using System.Threading.Thread.Sleep(2000)\r\n         * Because one thread can work on multiple tasks at once.\r\n         *\r\n         * Thread.Sleep pauses current thread.\r\n         * Task.Delay pauses current task.\r\n         */\r\n\r\n        return \"fried eggs\";\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-a05.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass DemoExAsyncA05 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var task = cookEggs();\r\n\r\n        /*\r\n         * By calling property \"Task.Result\", app should wait if necessary for the computation to complete\r\n         * and then retrieves its result.\r\n         *\r\n         * So, we can omit the statement \"task.Wait()\"\r\n         */\r\n\r\n        string result = task.Result;\r\n        Console.WriteLine(result);\r\n    }\r\n\r\n    private async Task<string> cookEggs()\r\n    {\r\n        Console.WriteLine(\"I am cooking eggs\");\r\n\r\n        // Cooking eggs in 2 seconds\r\n        await Task.Delay(2000);\r\n\r\n        return \"fried eggs\";\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-b01.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n *\r\n * The app takes about 1000 miliseconds to run\r\n * because all 3 tasks are running simultaneously.\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass DemoExAsyncB01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var timePointStart = DateTime.Now;\r\n\r\n        var taskA = doAction(\"cooking eggs\");\r\n        var taskB = doAction(\"making coffee\");\r\n        var taskC = doAction(\"watching movies\");\r\n\r\n        taskA.Wait();\r\n        taskB.Wait();\r\n        taskC.Wait();\r\n\r\n        var timePointEnd = DateTime.Now;\r\n        var duration = (timePointEnd - timePointStart).TotalMilliseconds;\r\n\r\n        Console.WriteLine($\"Total time: {duration} millis\");\r\n    }\r\n\r\n\r\n    private async Task doAction(string actionName)\r\n    {\r\n        Console.WriteLine(actionName);\r\n\r\n        // Doing action in one second...\r\n        await Task.Delay(1000);\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-b02.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n *\r\n * The app takes about 3000 miliseconds to run\r\n * because each method \"Task.Wait\" pauses app until the task finishes.\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass DemoExAsyncB02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var timePointStart = DateTime.Now;\r\n\r\n        var taskA = doAction(\"cooking eggs\");\r\n        taskA.Wait();\r\n\r\n        var taskB = doAction(\"making coffee\");\r\n        taskB.Wait();\r\n\r\n        var taskC = doAction(\"watching movies\");\r\n        taskC.Wait();\r\n\r\n        var timePointEnd = DateTime.Now;\r\n        var duration = (timePointEnd - timePointStart).TotalMilliseconds;\r\n\r\n        Console.WriteLine($\"Total time: {duration} millis\");\r\n    }\r\n\r\n\r\n    private async Task doAction(string actionName)\r\n    {\r\n        Console.WriteLine(actionName);\r\n\r\n        // Doing action in one second...\r\n        await Task.Delay(1000);\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-b03.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass DemoExAsyncB03 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var taskCookingEggs = cookEggs();\r\n        var taskMakingCoffee = makeCoffee();\r\n\r\n        string resultEggs = taskCookingEggs.Result;\r\n        string resultCoffee = taskMakingCoffee.Result;\r\n\r\n        Console.WriteLine(\"Done!\");\r\n        Console.WriteLine(resultEggs);\r\n        Console.WriteLine(resultCoffee);\r\n    }\r\n\r\n\r\n    private async Task<string> cookEggs()\r\n    {\r\n        Console.WriteLine(\"I am cooking eggs\");\r\n\r\n        // Cooking eggs in two seconds...\r\n        await Task.Delay(2000);\r\n\r\n        return \"fried eggs\";\r\n    }\r\n\r\n\r\n    private async Task<string> makeCoffee()\r\n    {\r\n        Console.WriteLine(\"I am making coffee\");\r\n\r\n        // Making coffee in four seconds...\r\n        await Task.Delay(4000);\r\n\r\n        return \"a cup of coffee\";\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-b04.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass DemoExAsyncB04 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var taskValidation = validate(\"John\");\r\n        bool result = taskValidation.Result;\r\n\r\n        if (result)\r\n            Console.WriteLine(\"User can view movies.\");\r\n        else\r\n            Console.WriteLine(\"Age must be >= 18 to view movies.\");\r\n    }\r\n\r\n\r\n    private async Task<bool> validate(string userName)\r\n    {\r\n        int userAge = await queryUserAge(userName);\r\n\r\n        Console.WriteLine(\"Validating...\");\r\n\r\n        // Validating in two seconds...\r\n        await Task.Delay(2000);\r\n\r\n        return userAge >= 18;\r\n    }\r\n\r\n\r\n    private async Task<int> queryUserAge(string userName)\r\n    {\r\n        Console.WriteLine(\"Querying userAge in database...\");\r\n\r\n        // Querying database in two seconds...\r\n        await Task.Delay(2000);\r\n\r\n        if (userName == \"Thanh\")\r\n            return 26;\r\n        else\r\n            return 17;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-c01.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass DemoExAsyncC01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var task = getSquared(7)\r\n            .ContinueWith((previousTask) => getDiv2(previousTask.Result))\r\n            .Unwrap();\r\n\r\n        int result = task.Result;\r\n        Console.WriteLine(result);\r\n    }\r\n\r\n    private async Task<int> getSquared(int x)\r\n    {\r\n        await Task.Delay(100);\r\n        return x * x;\r\n    }\r\n\r\n    private async Task<int> getDiv2(int x)\r\n    {\r\n        await Task.Delay(100);\r\n        return x / 2;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/demoex/demoex-async-future-c02.cs",
    "content": "﻿/*\r\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\r\n */\r\nusing System;\r\nusing System.Threading.Tasks;\r\n\r\n\r\n\r\nclass DemoExAsyncC02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var task = getSquared(7)\r\n            .ContinueWith((previousTask) => getDiv2(previousTask.Result))\r\n            .Unwrap()\r\n            .ContinueWith((previousTask) => Console.WriteLine(previousTask.Result));\r\n\r\n        task.Wait();\r\n    }\r\n\r\n    private async Task<int> getSquared(int x)\r\n    {\r\n        await Task.Delay(100);\r\n        return x * x;\r\n    }\r\n\r\n    private async Task<int> getDiv2(int x)\r\n    {\r\n        await Task.Delay(100);\r\n        return x / 2;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer01a-max-div.cs",
    "content": "﻿/*\r\n * MAXIMUM NUMBER OF DIVISORS\r\n */\r\nusing System;\r\n\r\n\r\n\r\nclass Exer01A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int RANGE_START = 1;\r\n        const int RANGE_END = 100000;\r\n\r\n        int resValue = 0;\r\n        int resNumDiv = 0;  // number of divisors of result\r\n\r\n        var tpStart = DateTime.Now;\r\n\r\n\r\n        for (int i = RANGE_START; i <= RANGE_END; ++i)\r\n        {\r\n            int numDiv = 0;\r\n\r\n            for (int j = i / 2; j > 0; --j)\r\n                if (i % j == 0)\r\n                    ++numDiv;\r\n\r\n            if (resNumDiv < numDiv)\r\n            {\r\n                resNumDiv = numDiv;\r\n                resValue = i;\r\n            }\r\n        }\r\n\r\n\r\n        var timeElapsed = (DateTime.Now - tpStart).TotalSeconds;\r\n\r\n        Console.WriteLine(\"The integer which has largest number of divisors is \" + resValue);\r\n        Console.WriteLine(\"The largest number of divisor is \" + resNumDiv);\r\n        Console.WriteLine(\"Time elapsed = \" + timeElapsed);\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer01b-max-div.cs",
    "content": "﻿/*\r\n * MAXIMUM NUMBER OF DIVISORS\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\nusing System.Linq;\r\n\r\n\r\n\r\nclass Exer01B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int RANGE_START = 1;\r\n        const int RANGE_END = 100000;\r\n        const int NUM_THREADS = 8;\r\n\r\n        var lstWorkerArg = prepareArg(RANGE_START, RANGE_END, NUM_THREADS);\r\n        var lstWorkerRes = new List<WorkerResult>();\r\n        var lstTh = new List<Thread>();\r\n\r\n\r\n        for (int ith = 0; ith < NUM_THREADS; ++ith)\r\n        {\r\n            var arg = lstWorkerArg[ith];\r\n\r\n            lstTh.Add(new Thread(() =>\r\n            {\r\n                int resValue = 0;\r\n                int resNumDiv = 0;\r\n\r\n                for (int i = arg.iStart; i <= arg.iEnd; ++i)\r\n                {\r\n                    int numDiv = 0;\r\n\r\n                    for (int j = i / 2; j > 0; --j)\r\n                        if (i % j == 0)\r\n                            ++numDiv;\r\n\r\n                    if (resNumDiv < numDiv)\r\n                    {\r\n                        resNumDiv = numDiv;\r\n                        resValue = i;\r\n                    }\r\n                }\r\n\r\n                lock (lstWorkerRes)\r\n                {\r\n                    lstWorkerRes.Add(new WorkerResult(resValue, resNumDiv));\r\n                }\r\n            }));\r\n        }\r\n\r\n\r\n        var tpStart = DateTime.Now;\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n        var finalRes = lstWorkerRes.OrderByDescending(res => res.numDiv).First();\r\n\r\n        var timeElapsed = (DateTime.Now - tpStart).TotalSeconds;\r\n\r\n        Console.WriteLine(\"The integer which has largest number of divisors is \" + finalRes.value);\r\n        Console.WriteLine(\"The largest number of divisor is \" + finalRes.numDiv);\r\n        
Console.WriteLine(\"Time elapsed = \" + timeElapsed);\r\n\r\n\r\n        /*\r\n         * BETTER WAY (avoiding synchronization of lstWorkerRes):\r\n         *\r\n         * - Initialize lstWorkerRes with null objects.\r\n         *   Of course, the number of objects is NUM_THREADS.\r\n         *\r\n         * - In thread function:\r\n         *   lstWorkerRes[threadIndex] = new WorkerResult(resValue, resNumDiv);\r\n         */\r\n    }\r\n\r\n\r\n    private List<WorkerArg> prepareArg(int rangeStart, int rangeEnd, int numThreads)\r\n    {\r\n        int rangeA, rangeB, rangeBlock;\r\n\r\n        rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\r\n        rangeA = rangeStart;\r\n\r\n        var lstWorkerArg = new List<WorkerArg>();\r\n\r\n        for (int i = 0; i < numThreads; ++i, rangeA += rangeBlock)\r\n        {\r\n            rangeB = rangeA + rangeBlock - 1;\r\n\r\n            if (i == numThreads - 1)\r\n                rangeB = rangeEnd;\r\n\r\n            lstWorkerArg.Add(new WorkerArg(rangeA, rangeB));\r\n        }\r\n\r\n        return lstWorkerArg;\r\n    }\r\n\r\n\r\n    readonly struct WorkerArg\r\n    {\r\n        public readonly int iStart, iEnd;\r\n        public WorkerArg(int iStart, int iEnd) => (this.iStart, this.iEnd) = (iStart, iEnd);\r\n    }\r\n\r\n\r\n    readonly struct WorkerResult\r\n    {\r\n        public readonly int value, numDiv;\r\n        public WorkerResult(int value, int numDiv) => (this.value, this.numDiv) = (value, numDiv);\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer01c-max-div.cs",
    "content": "﻿/*\r\n * MAXIMUM NUMBER OF DIVISORS\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer01C : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int RANGE_START = 1;\r\n        const int RANGE_END = 100000;\r\n        const int NUM_THREADS = 8;\r\n\r\n        var lstWorkerArg = prepareArg(RANGE_START, RANGE_END, NUM_THREADS);\r\n        var lstTh = new List<Thread>();\r\n        var finalRes = new WorkerResult();\r\n\r\n\r\n        for (int ith = 0; ith < NUM_THREADS; ++ith)\r\n        {\r\n            var arg = lstWorkerArg[ith];\r\n\r\n            lstTh.Add(new Thread(() =>\r\n            {\r\n                int resValue = 0;\r\n                int resNumDiv = 0;\r\n\r\n                for (int i = arg.iStart; i <= arg.iEnd; ++i)\r\n                {\r\n                    int numDiv = 0;\r\n\r\n                    for (int j = i / 2; j > 0; --j)\r\n                        if (i % j == 0)\r\n                            ++numDiv;\r\n\r\n                    if (resNumDiv < numDiv)\r\n                    {\r\n                        resNumDiv = numDiv;\r\n                        resValue = i;\r\n                    }\r\n                }\r\n\r\n                finalRes.update(resValue, resNumDiv);\r\n            }));\r\n        }\r\n\r\n\r\n        var tpStart = DateTime.Now;\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n        var timeElapsed = (DateTime.Now - tpStart).TotalSeconds;\r\n\r\n        Console.WriteLine(\"The integer which has largest number of divisors is \" + finalRes.value);\r\n        Console.WriteLine(\"The largest number of divisor is \" + finalRes.numDiv);\r\n        Console.WriteLine(\"Time elapsed = \" + timeElapsed);\r\n    }\r\n\r\n\r\n    private List<WorkerArg> prepareArg(int rangeStart, int rangeEnd, int numThreads)\r\n    {\r\n        int rangeA, rangeB, rangeBlock;\r\n\r\n        rangeBlock = 
(rangeEnd - rangeStart + 1) / numThreads;\r\n        rangeA = rangeStart;\r\n\r\n        var lstWorkerArg = new List<WorkerArg>();\r\n\r\n        for (int i = 0; i < numThreads; ++i, rangeA += rangeBlock)\r\n        {\r\n            rangeB = rangeA + rangeBlock - 1;\r\n\r\n            if (i == numThreads - 1)\r\n                rangeB = rangeEnd;\r\n\r\n            lstWorkerArg.Add(new WorkerArg(rangeA, rangeB));\r\n        }\r\n\r\n        return lstWorkerArg;\r\n    }\r\n\r\n\r\n    readonly struct WorkerArg\r\n    {\r\n        public readonly int iStart, iEnd;\r\n        public WorkerArg(int iStart, int iEnd) => (this.iStart, this.iEnd) = (iStart, iEnd);\r\n    }\r\n\r\n\r\n    class WorkerResult\r\n    {\r\n        public int value = 0;\r\n        public int numDiv = 0;\r\n\r\n        public void update(int value, int numDiv)\r\n        {\r\n            lock (this)\r\n            {\r\n                if (this.numDiv < numDiv)\r\n                {\r\n                    this.numDiv = numDiv;\r\n                    this.value = value;\r\n                }\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer02a01-producer-consumer.cs",
    "content": "﻿/*\r\n * THE PRODUCER-CONSUMER PROBLEM\r\n *\r\n * SOLUTION TYPE A: USING BLOCKING QUEUES\r\n *      Version A01: 1 slow producer, 1 fast consumer\r\n */\r\nusing System;\r\nusing System.Collections.Concurrent;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer02A01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var queue = new BlockingCollection<int>(1);\r\n        new Thread(() => producer(queue)).Start();\r\n        new Thread(() => consumer(queue)).Start();\r\n    }\r\n\r\n\r\n    private void producer(BlockingCollection<int> queue)\r\n    {\r\n        int i = 1;\r\n\r\n        for (; ; ++i)\r\n        {\r\n            queue.Add(i);\r\n            Thread.Sleep(1000);\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(BlockingCollection<int> queue)\r\n    {\r\n        for (; ; )\r\n        {\r\n            int data = queue.Take();\r\n            Console.WriteLine(\"Consumer \" + data);\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer02a02-producer-consumer.cs",
    "content": "﻿/*\r\n * THE PRODUCER-CONSUMER PROBLEM\r\n *\r\n * SOLUTION TYPE A: USING BLOCKING QUEUES\r\n *      Version A02: 2 slow producers, 1 fast consumer\r\n */\r\nusing System;\r\nusing System.Collections.Concurrent;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer02A02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var queue = new BlockingCollection<int>(1);\r\n\r\n        new Thread(() => producer(queue)).Start();\r\n        new Thread(() => producer(queue)).Start();\r\n\r\n        new Thread(() => consumer(queue)).Start();\r\n    }\r\n\r\n\r\n    private void producer(BlockingCollection<int> queue)\r\n    {\r\n        int i = 1;\r\n\r\n        for (; ; ++i)\r\n        {\r\n            queue.Add(i);\r\n            Thread.Sleep(1000);\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(BlockingCollection<int> queue)\r\n    {\r\n        for (; ; )\r\n        {\r\n            int data = queue.Take();\r\n            Console.WriteLine(\"Consumer \" + data);\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer02a03-producer-consumer.cs",
    "content": "﻿/*\r\n * THE PRODUCER-CONSUMER PROBLEM\r\n *\r\n * SOLUTION TYPE A: USING BLOCKING QUEUES\r\n *      Version A03: 1 slow producer, 2 fast consumers\r\n */\r\nusing System;\r\nusing System.Collections.Concurrent;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer02A03 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var queue = new BlockingCollection<int>(1);\r\n\r\n        new Thread(() => producer(queue)).Start();\r\n\r\n        new Thread(() => consumer(\"foo\", queue)).Start();\r\n        new Thread(() => consumer(\"bar\", queue)).Start();\r\n    }\r\n\r\n\r\n    private void producer(BlockingCollection<int> queue)\r\n    {\r\n        int i = 1;\r\n\r\n        for (; ; ++i)\r\n        {\r\n            queue.Add(i);\r\n            Thread.Sleep(1000);\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(string name, BlockingCollection<int> queue)\r\n    {\r\n        for (; ; )\r\n        {\r\n            int data = queue.Take();\r\n            Console.WriteLine($\"Consumer {name}: {data}\");\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer02a04-producer-consumer.cs",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE A: USING BLOCKING QUEUES\n *      Version A04: Multiple fast producers, multiple slow consumers\n */\nusing System;\nusing System.Collections.Concurrent;\nusing System.Collections.Generic;\nusing System.Threading;\n\n\n\nclass Exer02A04 : IRunnable\n{\n    public void run()\n    {\n        var queue = new BlockingCollection<int>(5);\n\n        const int NUM_PRODUCERS = 3;\n        const int NUM_CONSUMERS = 2;\n\n        var lstThProducer = new List<Thread>();\n        var lstThConsumer = new List<Thread>();\n\n        for (int i = 0; i < NUM_PRODUCERS; ++i)\n        {\n            int t = i;\n            lstThProducer.Add(new Thread(() => producer(queue, t * 1000)));\n        }\n\n        for (int i = 0; i < NUM_CONSUMERS; ++i)\n        {\n            lstThConsumer.Add(new Thread(() => consumer(queue)));\n        }\n\n        lstThProducer.ForEach(th => th.Start());\n        lstThConsumer.ForEach(th => th.Start());\n    }\n\n\n    private void producer(BlockingCollection<int> queue, int startValue)\n    {\n        int i = 1;\n        if (startValue == 2000)\n        {\n            Console.WriteLine(\"FUCK IT\");\n        }\n\n        for (; ; ++i)\n        {\n            queue.Add(i + startValue);\n        }\n    }\n\n\n    private void consumer(BlockingCollection<int> queue)\n    {\n        for (; ; )\n        {\n            int data = queue.Take();\n            Console.WriteLine(\"Consumer \" + data);\n            Thread.Sleep(1000);\n        }\n    }\n}\n"
  },
  {
    "path": "csharp/exercise/exer02b01-producer-consumer.cs",
    "content": "﻿/*\r\n * THE PRODUCER-CONSUMER PROBLEM\r\n *\r\n * SOLUTION TYPE B: USING SEMAPHORES\r\n *      Version B01: 1 slow producer, 1 fast consumer\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer02B01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var semFill = new Semaphore(0, int.MaxValue);     // item produced\r\n        var semEmpty = new Semaphore(1, int.MaxValue);    // remaining space in queue\r\n\r\n        var queue = new Queue<int>();\r\n\r\n        new Thread(() => producer(semFill, semEmpty, queue)).Start();\r\n        new Thread(() => consumer(semFill, semEmpty, queue)).Start();\r\n    }\r\n\r\n\r\n    private void producer(Semaphore semFill, Semaphore semEmpty, Queue<int> queue)\r\n    {\r\n        int i = 1;\r\n\r\n        for (; ; ++i)\r\n        {\r\n            semEmpty.WaitOne();\r\n\r\n            queue.Enqueue(i);\r\n            Thread.Sleep(1000);\r\n\r\n            semFill.Release();\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(Semaphore semFill, Semaphore semEmpty, Queue<int> queue)\r\n    {\r\n        for (; ; )\r\n        {\r\n            semFill.WaitOne();\r\n\r\n            int data = queue.Dequeue();\r\n            Console.WriteLine(\"Consumer \" + data);\r\n\r\n            semEmpty.Release();\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer02b02-producer-consumer.cs",
    "content": "﻿/*\r\n * THE PRODUCER-CONSUMER PROBLEM\r\n *\r\n * SOLUTION TYPE B: USING SEMAPHORES\r\n *      Version B02: 2 slow producers, 1 fast consumer\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer02B02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var semFill = new Semaphore(0, int.MaxValue);     // item produced\r\n        var semEmpty = new Semaphore(1, int.MaxValue);    // remaining space in queue\r\n\r\n        var queue = new Queue<int>();\r\n\r\n        new Thread(() => producer(semFill, semEmpty, queue, 0)).Start();\r\n        new Thread(() => producer(semFill, semEmpty, queue, 1000)).Start();\r\n\r\n        new Thread(() => consumer(semFill, semEmpty, queue)).Start();\r\n    }\r\n\r\n\r\n    private void producer(Semaphore semFill, Semaphore semEmpty,\r\n        Queue<int> queue, int startValue)\r\n    {\r\n        int i = 1;\r\n\r\n        for (; ; ++i)\r\n        {\r\n            semEmpty.WaitOne();\r\n\r\n            queue.Enqueue(i + startValue);\r\n            Thread.Sleep(1000);\r\n\r\n            semFill.Release();\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(Semaphore semFill, Semaphore semEmpty, Queue<int> queue)\r\n    {\r\n        for (; ; )\r\n        {\r\n            semFill.WaitOne();\r\n\r\n            int data = queue.Dequeue();\r\n            Console.WriteLine(\"Consumer \" + data);\r\n\r\n            semEmpty.Release();\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer02b03-producer-consumer.cs",
    "content": "﻿/*\r\n * THE PRODUCER-CONSUMER PROBLEM\r\n *\r\n * SOLUTION TYPE B: USING SEMAPHORES\r\n *      Version B03: 2 fast producers, 1 slow consumer\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer02B03 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var semFill = new Semaphore(0, int.MaxValue);     // item produced\r\n        var semEmpty = new Semaphore(1, int.MaxValue);    // remaining space in queue\r\n\r\n        var queue = new Queue<int>();\r\n\r\n        new Thread(() => producer(semFill, semEmpty, queue, 0)).Start();\r\n        new Thread(() => producer(semFill, semEmpty, queue, 1000)).Start();\r\n\r\n        new Thread(() => consumer(semFill, semEmpty, queue)).Start();\r\n    }\r\n\r\n\r\n    private void producer(Semaphore semFill, Semaphore semEmpty,\r\n        Queue<int> queue, int startValue)\r\n    {\r\n        int i = 1;\r\n\r\n        for (; ; ++i)\r\n        {\r\n            semEmpty.WaitOne();\r\n            queue.Enqueue(i + startValue);\r\n            semFill.Release();\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(Semaphore semFill, Semaphore semEmpty, Queue<int> queue)\r\n    {\r\n        for (; ; )\r\n        {\r\n            semFill.WaitOne();\r\n\r\n            int data = queue.Dequeue();\r\n            Console.WriteLine(\"Consumer \" + data);\r\n            Thread.Sleep(1000);\r\n\r\n            semEmpty.Release();\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer02b04-producer-consumer.cs",
    "content": "﻿/*\r\n * THE PRODUCER-CONSUMER PROBLEM\r\n *\r\n * SOLUTION TYPE B: USING SEMAPHORES\r\n *      Version B04: Multiple fast producers, multiple slow consumers\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer02B04 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var semFill = new Semaphore(0, int.MaxValue);     // item produced\r\n        var semEmpty = new Semaphore(1, int.MaxValue);    // remaining space in queue\r\n\r\n        var queue = new Queue<int>();\r\n\r\n        const int NUM_PRODUCERS = 3;\r\n        const int NUM_CONSUMERS = 2;\r\n\r\n\r\n        var lstThProducer = new List<Thread>();\r\n        var lstThConsumer = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_PRODUCERS; ++i)\r\n        {\r\n            int t = i;\r\n            lstThProducer.Add(new Thread(() => producer(semFill, semEmpty, queue, t * 1000)));\r\n        }\r\n\r\n        for (int i = 0; i < NUM_CONSUMERS; ++i)\r\n        {\r\n            lstThConsumer.Add(new Thread(() => consumer(semFill, semEmpty, queue)));\r\n        }\r\n\r\n\r\n        lstThProducer.ForEach(th => th.Start());\r\n        lstThConsumer.ForEach(th => th.Start());\r\n    }\r\n\r\n\r\n    private void producer(Semaphore semFill, Semaphore semEmpty,\r\n        Queue<int> queue, int startValue)\r\n    {\r\n        int i = 1;\r\n\r\n        for (; ; ++i)\r\n        {\r\n            semEmpty.WaitOne();\r\n            queue.Enqueue(i + startValue);\r\n            semFill.Release();\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(Semaphore semFill, Semaphore semEmpty, Queue<int> queue)\r\n    {\r\n        for (; ; )\r\n        {\r\n            semFill.WaitOne();\r\n\r\n            int data = queue.Dequeue();\r\n            Console.WriteLine(\"Consumer \" + data);\r\n            Thread.Sleep(1000);\r\n\r\n            semEmpty.Release();\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer02c-producer-consumer.cs",
    "content": "﻿/*\r\n * THE PRODUCER-CONSUMER PROBLEM\r\n *\r\n * SOLUTION TYPE C: USING CONDITION VARIABLES & CUSTOM MONITORS\r\n *      Multiple fast producers, multiple slow consumers\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer02C : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var monitor = new ProdConsMonitor<int>();\r\n        var queue = new Queue<int>();\r\n\r\n\r\n        const int MAX_QUEUE_SIZE = 6;\r\n        const int NUM_PRODUCERS = 3;\r\n        const int NUM_CONSUMERS = 2;\r\n\r\n\r\n        monitor.init(MAX_QUEUE_SIZE, queue);\r\n\r\n\r\n        var lstProd = new List<Thread>();\r\n        var lstCons = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_PRODUCERS; ++i)\r\n        {\r\n            int t = i;\r\n            lstProd.Add(new Thread(() => producer(monitor, t * 1000)));\r\n        }\r\n\r\n        for (int i = 0; i < NUM_CONSUMERS; ++i)\r\n        {\r\n            lstCons.Add(new Thread(() => consumer(monitor)));\r\n        }\r\n\r\n\r\n        lstProd.ForEach(th => th.Start());\r\n        lstCons.ForEach(th => th.Start());\r\n    }\r\n\r\n\r\n    private void producer(ProdConsMonitor<int> monitor, int startValue)\r\n    {\r\n        int i = 1;\r\n\r\n        for (; ; ++i)\r\n        {\r\n            monitor.add(i + startValue);\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(ProdConsMonitor<int> monitor)\r\n    {\r\n        for (; ; )\r\n        {\r\n            int data = monitor.remove();\r\n            Console.WriteLine(\"Consumer \" + data);\r\n            Thread.Sleep(1000);\r\n        }\r\n    }\r\n\r\n\r\n    class ProdConsMonitor<T>\r\n    {\r\n        private Queue<T> queue;\r\n        private int maxQueueSize;\r\n\r\n        private object condFull = new object();\r\n        private object condEmpty = new object();\r\n\r\n\r\n        public void init(int maxQueueSize, Queue<T> queue)\r\n        {\r\n            
this.maxQueueSize = maxQueueSize;\r\n            this.queue = queue;\r\n        }\r\n\r\n\r\n        public void add(T item)\r\n        {\r\n            lock (condFull)\r\n            {\r\n                while (queue.Count == maxQueueSize)\r\n                    Monitor.Wait(condFull);\r\n\r\n                lock (queue)\r\n                {\r\n                    queue.Enqueue(item);\r\n                }\r\n            }\r\n\r\n            lock (condEmpty)\r\n            {\r\n                if (queue.Count == 1)\r\n                    Monitor.Pulse(condEmpty);\r\n            }\r\n        }\r\n\r\n\r\n        public T remove()\r\n        {\r\n            T item = default;\r\n\r\n            lock (condEmpty)\r\n            {\r\n                while (queue.Count == 0)\r\n                    Monitor.Wait(condEmpty);\r\n\r\n                lock (queue)\r\n                {\r\n                    item = queue.Dequeue();\r\n                }\r\n            }\r\n\r\n            lock (condFull)\r\n            {\r\n                if (queue.Count == maxQueueSize - 1)\r\n                    Monitor.Pulse(condFull);\r\n            }\r\n\r\n            return item;\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer03a-readers-writers.cs",
    "content": "﻿/*\r\n * THE READERS-WRITERS PROBLEM\r\n * Solution for the first readers-writers problem\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer03A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int NUM_READER = 8;\r\n        const int NUM_WRITER = 6;\r\n\r\n        var rand = new Random();\r\n        var lstThReader = new List<Thread>();\r\n        var lstThWriter = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_READER; ++i)\r\n            lstThReader.Add(new Thread(() => doTaskReader(rand.Next(3))));\r\n\r\n        for (int i = 0; i < NUM_WRITER; ++i)\r\n            lstThWriter.Add(new Thread(() => doTaskWriter(rand.Next(3))));\r\n\r\n        lstThReader.ForEach(th => th.Start());\r\n        lstThWriter.ForEach(th => th.Start());\r\n    }\r\n\r\n\r\n    private static void doTaskWriter(int delayTime)\r\n    {\r\n        var rand = new Random();\r\n        Thread.Sleep(1000 * delayTime);\r\n\r\n        Global.mutResource.WaitOne();\r\n\r\n        Global.resource = rand.Next(100);\r\n        Console.WriteLine(\"Write \" + Global.resource);\r\n\r\n        Global.mutResource.Release();\r\n    }\r\n\r\n\r\n    private void doTaskReader(int delayTime)\r\n    {\r\n        Thread.Sleep(1000 * delayTime);\r\n\r\n        // Increase reader count\r\n        lock (Global.mutReaderCount)\r\n        {\r\n            Global.readerCount += 1;\r\n\r\n            if (1 == Global.readerCount)\r\n                Global.mutResource.WaitOne();\r\n        }\r\n\r\n        // Do the reading\r\n        int data = Global.resource;\r\n        Console.WriteLine(\"Read \" + data);\r\n\r\n        // Decrease reader count\r\n        lock (Global.mutReaderCount)\r\n        {\r\n            Global.readerCount -= 1;\r\n\r\n            if (0 == Global.readerCount)\r\n                Global.mutResource.Release();\r\n        }\r\n    }\r\n\r\n\r\n    class Global\r\n    {\r\n        public 
static Semaphore mutResource = new Semaphore(1, int.MaxValue);\r\n        public static object mutReaderCount = new object();\r\n\r\n        public static volatile int resource = 0;\r\n        public static int readerCount = 0;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer03b-readers-writers.cs",
    "content": "﻿/*\r\n * THE READERS-WRITERS PROBLEM\r\n * Solution for the third readers-writers problem\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer03B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int NUM_READER = 8;\r\n        const int NUM_WRITER = 6;\r\n\r\n        var rand = new Random();\r\n        var lstThReader = new List<Thread>();\r\n        var lstThWriter = new List<Thread>();\r\n\r\n        for (int i = 0; i < NUM_READER; ++i)\r\n            lstThReader.Add(new Thread(() => doTaskReader(rand.Next(3))));\r\n\r\n        for (int i = 0; i < NUM_WRITER; ++i)\r\n            lstThWriter.Add(new Thread(() => doTaskWriter(rand.Next(3))));\r\n\r\n        lstThReader.ForEach(th => th.Start());\r\n        lstThWriter.ForEach(th => th.Start());\r\n    }\r\n\r\n\r\n    private static void doTaskWriter(int delayTime)\r\n    {\r\n        var rand = new Random();\r\n        Thread.Sleep(1000 * delayTime);\r\n\r\n        lock (Global.mutServiceQueue)\r\n        {\r\n            Global.mutResource.WaitOne();\r\n        }\r\n\r\n        Global.resource = rand.Next(100);\r\n        Console.WriteLine(\"Write \" + Global.resource);\r\n\r\n        Global.mutResource.Release();\r\n    }\r\n\r\n\r\n    private void doTaskReader(int delayTime)\r\n    {\r\n        Thread.Sleep(1000 * delayTime);\r\n\r\n        lock (Global.mutServiceQueue)\r\n        {\r\n            // Increase reader count\r\n            lock (Global.mutReaderCount)\r\n            {\r\n                Global.readerCount += 1;\r\n\r\n                if (1 == Global.readerCount)\r\n                    Global.mutResource.WaitOne();\r\n            }\r\n        }\r\n\r\n        // Do the reading\r\n        int data = Global.resource;\r\n        Console.WriteLine(\"Read \" + data);\r\n\r\n        // Decrease reader count\r\n        lock (Global.mutReaderCount)\r\n        {\r\n            Global.readerCount -= 1;\r\n\r\n    
        if (0 == Global.readerCount)\r\n                Global.mutResource.Release();\r\n        }\r\n    }\r\n\r\n\r\n    class Global\r\n    {\r\n        public static object mutServiceQueue = new object();\r\n\r\n        public static Semaphore mutResource = new Semaphore(1, int.MaxValue);\r\n        public static object mutReaderCount = new object();\r\n\r\n        public static volatile int resource = 0;\r\n        public static int readerCount = 0;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer04a-dining-philosophers.cs",
    "content": "﻿/*\r\n * THE DINING PHILOSOPHERS PROBLEM\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer04A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int NUM_PHILOSOPHERS = 5;\r\n\r\n\r\n        var chopstick = new Semaphore[NUM_PHILOSOPHERS];\r\n\r\n        for (int i = 0; i < NUM_PHILOSOPHERS; ++i)\r\n            chopstick[i] = new Semaphore(1, NUM_PHILOSOPHERS + 1);\r\n\r\n\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int ith = 0; ith < NUM_PHILOSOPHERS; ++ith)\r\n        {\r\n            int i = ith;\r\n\r\n            lstTh.Add(new Thread(() =>\r\n            {\r\n                int n = NUM_PHILOSOPHERS;\r\n                Thread.Sleep(1000);\r\n\r\n                chopstick[i].WaitOne();\r\n                chopstick[(i + 1) % n].WaitOne();\r\n\r\n                Console.WriteLine($\"Philosopher #{i} is eating the rice\");\r\n\r\n                chopstick[(i + 1) % n].Release();\r\n                chopstick[i].Release();\r\n            }));\r\n        }\r\n\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer04b-dining-philosophers.cs",
    "content": "﻿/*\r\n * THE DINING PHILOSOPHERS PROBLEM\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer04B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        const int NUM_PHILOSOPHERS = 5;\r\n\r\n\r\n        var chopstick = new object[NUM_PHILOSOPHERS];\r\n\r\n        for (int i = 0; i < NUM_PHILOSOPHERS; ++i)\r\n            chopstick[i] = new object();\r\n\r\n\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int ith = 0; ith < NUM_PHILOSOPHERS; ++ith)\r\n        {\r\n            int i = ith;\r\n\r\n            lstTh.Add(new Thread(() =>\r\n            {\r\n                int n = NUM_PHILOSOPHERS;\r\n                Thread.Sleep(1000);\r\n\r\n                lock (chopstick[i])\r\n                {\r\n                    lock (chopstick[(i + 1) % n])\r\n                    {\r\n                        Console.WriteLine($\"Philosopher #{i} is eating the rice\");\r\n                    }\r\n                }\r\n            }));\r\n        }\r\n\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer05-util.cs",
    "content": "﻿using System;\r\n\r\n\r\n\r\nclass Exer05Util\r\n{\r\n    public static double getScalarProduct(double[] u, double[] v)\r\n    {\r\n        double sum = 0;\r\n        int sizeVector = u.Length;\r\n\r\n        for (int i = sizeVector - 1; i >= 0; --i)\r\n            sum += u[i] * v[i];\r\n\r\n        return sum;\r\n    }\r\n\r\n\r\n    public static double[][] getTransposeMatrix(double[][] input)\r\n    {\r\n        int numRow = input.Length;\r\n        int numCol = input[0].Length;\r\n\r\n        var output = create2dArray<double>(numCol, numRow);\r\n\r\n        for (int i = 0; i < numRow; ++i)\r\n            for (int j = 0; j < numCol; ++j)\r\n                output[j][i] = input[i][j];\r\n\r\n        return output;\r\n    }\r\n\r\n\r\n    public static void printMatrix(double[][] mat)\r\n    {\r\n        foreach (var row in mat)\r\n        {\r\n            foreach (var value in row)\r\n                Console.Write($\"\\t{value:0.0}\");\r\n\r\n            Console.WriteLine();\r\n        }\r\n    }\r\n\r\n\r\n    public static T[][] create2dArray<T>(int height, int width)\r\n    {\r\n        T[][] res = new T[height][];\r\n\r\n        for (int i = 0; i < height; ++i)\r\n            res[i] = new T[width];\r\n\r\n        return res;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer05a-product-matrix-vector.cs",
    "content": "﻿/*\r\n * MATRIX-VECTOR MULTIPLICATION\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer05A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        double[][] A = new double[][] {\r\n            new double[] { 1, 2, 3 },\r\n            new double[] { 4, 5, 6 },\r\n            new double[] { 7, 8, 9 }\r\n        };\r\n\r\n        double[] b = {\r\n            3,\r\n            -1,\r\n            0\r\n        };\r\n\r\n        var result = getProduct(A, b);\r\n\r\n        Array.ForEach(result, Console.WriteLine);\r\n    }\r\n\r\n\r\n    private double[] getProduct(double[][] mat, double[] vec)\r\n    {\r\n        // Assume that size of mat and vec are both correct\r\n        int sizeRowMat = mat.Length;\r\n        // int sizeColMat = mat[0].Length;\r\n        // int sizeVec = vec.Length;\r\n\r\n        var result = new double[sizeRowMat];\r\n        var lstTh = new List<Thread>();\r\n\r\n        for (int ith = 0; ith < sizeRowMat; ++ith)\r\n        {\r\n            int i = ith;\r\n\r\n            lstTh.Add(new Thread(() =>\r\n            {\r\n                var u = mat[i];\r\n                var v = vec;\r\n                result[i] = Exer05Util.getScalarProduct(u, v);\r\n            }));\r\n        }\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n        return result;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer05b-product-matrix-matrix.cs",
    "content": "﻿/*\r\n * MATRIX-MATRIX MULTIPLICATION (DOT PRODUCT)\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer05B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        double[][] A = new double[][] {\r\n            new double[] { 1, 3, 5 },\r\n            new double[] { 2, 4, 6 }\r\n        };\r\n\r\n        double[][] B = new double[][] {\r\n            new double[] { 1, 0, 1, 0 },\r\n            new double[] { 0, 1, 0, 1 },\r\n            new double[] { 1, 0, 0, -2 }\r\n        };\r\n\r\n        double[][] result = getProduct(A, B);\r\n\r\n        Exer05Util.printMatrix(result);\r\n    }\r\n\r\n\r\n    private double[][] getProduct(double[][] matA, double[][] matB)\r\n    {\r\n        // Assume that the size of A and B are both correct\r\n        int sizeRowA = matA.Length;\r\n        int sizeColB = matB[0].Length;\r\n\r\n        double[][] matBT = Exer05Util.getTransposeMatrix(matB);\r\n        var result = Exer05Util.create2dArray<double>(sizeRowA, sizeColB);\r\n        var lstTh = new List<Thread>();\r\n\r\n\r\n        for (int ith = 0; ith < sizeRowA; ++ith)\r\n        {\r\n            int i = ith;\r\n\r\n            for (int jth = 0; jth < sizeColB; ++jth)\r\n            {\r\n                int j = jth;\r\n                var u = matA[i];\r\n                var v = matBT[j];\r\n\r\n                lstTh.Add(new Thread(() =>\r\n                {\r\n                    result[i][j] = Exer05Util.getScalarProduct(u, v);\r\n                }));\r\n            }\r\n        }\r\n\r\n\r\n        lstTh.ForEach(th => th.Start());\r\n        lstTh.ForEach(th => th.Join());\r\n\r\n\r\n        return result;\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer06a-blocking-queue.cs",
    "content": "﻿/*\r\n * BLOCKING QUEUE IMPLEMENTATION\r\n * Version A: Synchronous queues\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer06A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var queue = new MySynchronousQueue<string>();\r\n        new Thread(() => producer(queue)).Start();\r\n        new Thread(() => consumer(queue)).Start();\r\n    }\r\n\r\n\r\n    private void producer(MySynchronousQueue<string> queue)\r\n    {\r\n        string[] arr = { \"lorem\", \"ipsum\", \"dolor\" };\r\n\r\n        foreach (var data in arr)\r\n        {\r\n            Console.WriteLine($\"Producer: {data}\");\r\n            queue.put(data);\r\n            Console.WriteLine($\"Producer: {data}\\t\\t\\t[done]\");\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(MySynchronousQueue<string> queue)\r\n    {\r\n        string data;\r\n        Thread.Sleep(5000);\r\n\r\n        for (int i = 0; i < 3; ++i)\r\n        {\r\n            data = queue.take();\r\n            Console.WriteLine($\"\\tConsumer: {data}\");\r\n        }\r\n    }\r\n\r\n\r\n    class MySynchronousQueue<T>\r\n    {\r\n        private Semaphore semPut = new Semaphore(1, int.MaxValue);\r\n        private Semaphore semTake = new Semaphore(0, int.MaxValue);\r\n        private T element = default;\r\n\r\n\r\n        public void put(T value)\r\n        {\r\n            semPut.WaitOne();\r\n            element = value;\r\n            semTake.Release();\r\n        }\r\n\r\n\r\n        public T take()\r\n        {\r\n            semTake.WaitOne();\r\n\r\n            T result = element;\r\n            element = default;\r\n\r\n            semPut.Release();\r\n\r\n            return result;\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer06b01-blocking-queue.cs",
    "content": "﻿/*\r\n * BLOCKING QUEUE IMPLEMENTATION\r\n * Version B01: General blocking queues\r\n *              Underlying mechanism: Semaphores\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer06B01 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var queue = new MyBlockingQueue<string>(2); // capacity = 2\r\n        new Thread(() => producer(queue)).Start();\r\n        new Thread(() => consumer(queue)).Start();\r\n    }\r\n\r\n\r\n    private void producer(MyBlockingQueue<string> queue)\r\n    {\r\n        string[] arr = { \"nice\", \"to\", \"meet\", \"you\" };\r\n\r\n        foreach (var data in arr)\r\n        {\r\n            Console.WriteLine($\"Producer: {data}\");\r\n            queue.put(data);\r\n            Console.WriteLine($\"Producer: {data}\\t\\t\\t[done]\");\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(MyBlockingQueue<string> queue)\r\n    {\r\n        string data;\r\n        Thread.Sleep(5000);\r\n\r\n        for (int i = 0; i < 4; ++i)\r\n        {\r\n            data = queue.take();\r\n            Console.WriteLine($\"\\tConsumer: {data}\");\r\n\r\n            if (0 == i)\r\n                Thread.Sleep(5000);\r\n        }\r\n    }\r\n\r\n\r\n    //////////////////////////////////////////////\r\n\r\n\r\n    class MyBlockingQueue<T>\r\n    {\r\n        private Semaphore semRemain = null;\r\n        private Semaphore semFill = null;\r\n\r\n        private int capacity = 0;\r\n        private Queue<T> queue = null;\r\n\r\n\r\n        public MyBlockingQueue(int capacity)\r\n        {\r\n            if (capacity <= 0)\r\n                throw new ArgumentException(\"capacity must be a positive integer\");\r\n\r\n            semRemain = new Semaphore(capacity, capacity);\r\n            semFill = new Semaphore(0, capacity);\r\n\r\n            this.capacity = capacity;\r\n            queue = new Queue<T>();\r\n        }\r\n\r\n\r\n        public void put(T 
value)\r\n        {\r\n            semRemain.WaitOne();\r\n\r\n            lock (queue)\r\n            {\r\n                queue.Enqueue(value);\r\n            }\r\n\r\n            semFill.Release();\r\n        }\r\n\r\n\r\n        public T take()\r\n        {\r\n            T result = default;\r\n            semFill.WaitOne();\r\n\r\n            lock (queue)\r\n            {\r\n                result = queue.Dequeue();\r\n            }\r\n\r\n            semRemain.Release();\r\n            return result;\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer06b02-blocking-queue.cs",
    "content": "﻿/*\r\n * BLOCKING QUEUE IMPLEMENTATION\r\n * Version B02: General blocking queues\r\n *              Underlying mechanism: Condition variables\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer06B02 : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var queue = new MyBlockingQueue<string>(2); // capacity = 2\r\n        new Thread(() => producer(queue)).Start();\r\n        new Thread(() => consumer(queue)).Start();\r\n    }\r\n\r\n\r\n    private void producer(MyBlockingQueue<string> queue)\r\n    {\r\n        string[] arr = { \"nice\", \"to\", \"meet\", \"you\" };\r\n\r\n        foreach (var data in arr)\r\n        {\r\n            queue.put(data);\r\n            Console.WriteLine($\"Producer: {data}\\t\\t\\t[done]\");\r\n        }\r\n    }\r\n\r\n\r\n    private void consumer(MyBlockingQueue<string> queue)\r\n    {\r\n        string data;\r\n        Thread.Sleep(5000);\r\n\r\n        for (int i = 0; i < 4; ++i)\r\n        {\r\n            data = queue.take();\r\n            Console.WriteLine($\"\\tConsumer: {data}\");\r\n\r\n            if (0 == i)\r\n                Thread.Sleep(5000);\r\n        }\r\n    }\r\n\r\n\r\n    //////////////////////////////////////////////\r\n\r\n\r\n    class MyBlockingQueue<T>\r\n    {\r\n        private object condEmpty = new object();\r\n        private object condFull = new object();\r\n\r\n        private int capacity = 0;\r\n        private Queue<T> queue = null;\r\n\r\n\r\n        public MyBlockingQueue(int capacity)\r\n        {\r\n            if (capacity <= 0)\r\n                throw new ArgumentException(\"capacity must be a positive integer\");\r\n\r\n            this.capacity = capacity;\r\n            queue = new Queue<T>();\r\n        }\r\n\r\n\r\n        public void put(T value)\r\n        {\r\n            lock (condFull)\r\n            {\r\n                while (queue.Count >= capacity)\r\n                {\r\n                
    // Queue is full, must wait for 'take'\r\n                    Monitor.Wait(condFull);\r\n                }\r\n\r\n                lock (queue)\r\n                {\r\n                    queue.Enqueue(value);\r\n                }\r\n            }\r\n\r\n            lock (condEmpty)\r\n            {\r\n                Monitor.Pulse(condEmpty);\r\n            }\r\n        }\r\n\r\n\r\n        public T take()\r\n        {\r\n            T result = default;\r\n\r\n            lock (condEmpty)\r\n            {\r\n                while (0 == queue.Count)\r\n                {\r\n                    // Queue is empty, must wait for 'put'\r\n                    Monitor.Wait(condEmpty);\r\n                }\r\n\r\n                lock (queue)\r\n                {\r\n                    result = queue.Dequeue();\r\n                }\r\n            }\r\n\r\n            lock (condFull)\r\n            {\r\n                Monitor.Pulse(condFull);\r\n            }\r\n\r\n            return result;\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer07a-data-server.cs",
    "content": "﻿/*\r\n * THE DATA SERVER PROBLEM\r\n * Version A: Solving the problem using a condition variable\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer07A : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var server = new DataServer();\r\n        server.processRequest();\r\n    }\r\n\r\n\r\n    private class DataServer\r\n    {\r\n        private class Counter\r\n        {\r\n            public int value;\r\n            public Counter(int value)\r\n            {\r\n                this.value = value;\r\n            }\r\n        }\r\n\r\n\r\n        public void processRequest()\r\n        {\r\n            var lstFileName = new String[] { \"foo.html\", \"bar.json\" };\r\n            var counter = new Counter(lstFileName.Length);\r\n\r\n            // The server checks auth user while reading files, concurrently\r\n            new Thread(() => processFiles(lstFileName, counter)).Start();\r\n            checkAuthUser();\r\n\r\n            // The server waits for completion of loading files\r\n            lock (counter)\r\n            {\r\n                while (counter.value > 0)\r\n                {\r\n                    Monitor.Wait(counter, 10000); // timeout = 10 seconds\r\n                }\r\n            }\r\n\r\n            Console.WriteLine(\"\\nNow user is authorized and files are loaded\");\r\n            Console.WriteLine(\"Do other tasks...\\n\");\r\n        }\r\n\r\n\r\n        // This task consumes CPU (and network bandwidth, maybe)\r\n        private void checkAuthUser()\r\n        {\r\n            Console.WriteLine(\"[   Auth   ] Start\");\r\n            // Send request to authenticator, check permissions, encrypt, decrypt...\r\n            Thread.Sleep(20000);\r\n            Console.WriteLine(\"[   Auth   ] Done\");\r\n        }\r\n\r\n\r\n        // This task consumes disk\r\n        private void processFiles(String[] lstFileName, Counter counter)\r\n        {\r\n            foreach (var fileName in 
lstFileName)\r\n            {\r\n                // Read file\r\n                Console.WriteLine(\"[ ReadFile ] Start \" + fileName);\r\n                Thread.Sleep(10000);\r\n                Console.WriteLine(\"[ ReadFile ] Done  \" + fileName);\r\n\r\n                lock (counter)\r\n                {\r\n                    --counter.value;\r\n                    Monitor.Pulse(counter);\r\n                }\r\n\r\n                // Write log to disk\r\n                Thread.Sleep(5000);\r\n                Console.WriteLine(\"[ WriteLog ]\");\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer07b-data-server.cs",
    "content": "﻿/*\r\n * THE DATA SERVER PROBLEM\r\n * Version B: Solving the problem using a semaphore\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer07B : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var server = new DataServer();\r\n        server.processRequest();\r\n    }\r\n\r\n\r\n    private class DataServer\r\n    {\r\n        public void processRequest()\r\n        {\r\n            var lstFileName = new String[] { \"foo.html\", \"bar.json\" };\r\n            var sem = new Semaphore(0, int.MaxValue);\r\n\r\n            // The server checks auth user while reading files, concurrently\r\n            new Thread(() => processFiles(lstFileName, sem)).Start();\r\n            checkAuthUser();\r\n\r\n            // The server waits for completion of loading files\r\n            for (int i = lstFileName.Length; i > 0; --i)\r\n            {\r\n                sem.WaitOne();\r\n            }\r\n\r\n            Console.WriteLine(\"\\nNow user is authorized and files are loaded\");\r\n            Console.WriteLine(\"Do other tasks...\\n\");\r\n        }\r\n\r\n\r\n        // This task consumes CPU (and network bandwidth, maybe)\r\n        private void checkAuthUser()\r\n        {\r\n            Console.WriteLine(\"[   Auth   ] Start\");\r\n            // Send request to authenticator, check permissions, encrypt, decrypt...\r\n            Thread.Sleep(20000);\r\n            Console.WriteLine(\"[   Auth   ] Done\");\r\n        }\r\n\r\n\r\n        // This task consumes disk\r\n        private void processFiles(String[] lstFileName, Semaphore sem)\r\n        {\r\n            foreach (var fileName in lstFileName)\r\n            {\r\n                // Read file\r\n                Console.WriteLine(\"[ ReadFile ] Start \" + fileName);\r\n                Thread.Sleep(10000);\r\n                Console.WriteLine(\"[ ReadFile ] Done  \" + fileName);\r\n\r\n                sem.Release();\r\n\r\n                // Write log into 
disk\r\n                Thread.Sleep(5000);\r\n                Console.WriteLine(\"[ WriteLog ]\");\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer07c-data-server.cs",
    "content": "﻿/*\r\n * THE DATA SERVER PROBLEM\r\n * Version C: Solving the problem using a count-down latch\r\n */\r\nusing System;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer07C : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var server = new DataServer();\r\n        server.processRequest();\r\n    }\r\n\r\n\r\n    private class DataServer\r\n    {\r\n        public void processRequest()\r\n        {\r\n            var lstFileName = new String[] { \"foo.html\", \"bar.json\" };\r\n            var readFileLatch = new CountdownEvent(lstFileName.Length);\r\n\r\n            // The server checks auth user while reading files, concurrently\r\n            new Thread(() => processFiles(lstFileName, readFileLatch)).Start();\r\n            checkAuthUser();\r\n\r\n            // The server waits for completion of loading files\r\n            readFileLatch.Wait();\r\n\r\n            Console.WriteLine(\"\\nNow user is authorized and files are loaded\");\r\n            Console.WriteLine(\"Do other tasks...\\n\");\r\n        }\r\n\r\n\r\n        // This task consumes CPU (and network bandwidth, maybe)\r\n        private void checkAuthUser()\r\n        {\r\n            Console.WriteLine(\"[   Auth   ] Start\");\r\n            // Send request to authenticator, check permissions, encrypt, decrypt...\r\n            Thread.Sleep(20000);\r\n            Console.WriteLine(\"[   Auth   ] Done\");\r\n        }\r\n\r\n\r\n        // This task consumes disk\r\n        private void processFiles(String[] lstFileName, CountdownEvent latch)\r\n        {\r\n            foreach (var fileName in lstFileName)\r\n            {\r\n                // Read file\r\n                Console.WriteLine(\"[ ReadFile ] Start \" + fileName);\r\n                Thread.Sleep(10000);\r\n                Console.WriteLine(\"[ ReadFile ] Done  \" + fileName);\r\n\r\n                latch.Signal();\r\n\r\n                // Write log into disk\r\n                Thread.Sleep(5000);\r\n       
         Console.WriteLine(\"[ WriteLog ]\");\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer07d-data-server.cs",
    "content": "﻿/*\r\n * THE DATA SERVER PROBLEM\r\n * Version D: Solving the problem using a blocking queue\r\n */\r\nusing System;\r\nusing System.Collections.Concurrent;\r\nusing System.Threading;\r\n\r\n\r\n\r\nclass Exer07D : IRunnable\r\n{\r\n    public void run()\r\n    {\r\n        var server = new DataServer();\r\n        server.processRequest();\r\n    }\r\n\r\n\r\n    private class DataServer\r\n    {\r\n        public void processRequest()\r\n        {\r\n            var lstFileName = new String[] { \"foo.html\", \"bar.json\" };\r\n            var queue = new BlockingCollection<string>();\r\n\r\n            // The server checks auth user while reading files, concurrently\r\n            new Thread(() => processFiles(lstFileName, queue)).Start();\r\n            checkAuthUser();\r\n\r\n            // The server waits for completion of loading files\r\n            for (int i = lstFileName.Length; i > 0; --i)\r\n            {\r\n                queue.Take();\r\n            }\r\n\r\n            Console.WriteLine(\"\\nNow user is authorized and files are loaded\");\r\n            Console.WriteLine(\"Do other tasks...\\n\");\r\n        }\r\n\r\n\r\n        // This task consumes CPU (and network bandwidth, maybe)\r\n        private void checkAuthUser()\r\n        {\r\n            Console.WriteLine(\"[   Auth   ] Start\");\r\n            // Send request to authenticator, check permissions, encrypt, decrypt...\r\n            Thread.Sleep(20000);\r\n            Console.WriteLine(\"[   Auth   ] Done\");\r\n        }\r\n\r\n\r\n        // This task consumes disk\r\n        private void processFiles(String[] lstFileName, BlockingCollection<string> queue)\r\n        {\r\n            foreach (var fileName in lstFileName)\r\n            {\r\n                // Read file\r\n                Console.WriteLine(\"[ ReadFile ] Start \" + fileName);\r\n                Thread.Sleep(10000);\r\n                Console.WriteLine(\"[ ReadFile ] Done  \" + fileName);\r\n\r\n         
       queue.Add(fileName); // You may put the file data here instead\r\n\r\n                // Write log to disk\r\n                Thread.Sleep(5000);\r\n                Console.WriteLine(\"[ WriteLog ]\");\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer08-exec-service-main.cs",
    "content": "﻿/*\r\n * EXECUTOR SERVICE & THREAD POOL IMPLEMENTATION\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nnamespace Exer08\r\n{\r\n    class MainApp : IRunnable\r\n    {\r\n        public void run()\r\n        {\r\n            const int NUM_THREADS = 2;\r\n            const int NUM_TASKS = 5;\r\n\r\n\r\n            var execService = new MyExecServiceV0A(NUM_THREADS);\r\n\r\n\r\n            var lstTask = new List<MyTask>();\r\n\r\n            for (int i = 0; i < NUM_TASKS; ++i)\r\n                lstTask.Add(new MyTask((char)('A' + i)));\r\n\r\n\r\n            lstTask.ForEach(task => execService.submit(task));\r\n            Console.WriteLine(\"All tasks are submitted\");\r\n\r\n\r\n            execService.waitTaskDone();\r\n            Console.WriteLine(\"All tasks are completed\");\r\n\r\n\r\n            execService.shutdown();\r\n        }\r\n    }\r\n\r\n\r\n\r\n    class MyTask : IRunnable\r\n    {\r\n        public char id;\r\n\r\n        public MyTask(char id) {\r\n            this.id = id;\r\n        }\r\n\r\n        public void run() {\r\n            Console.WriteLine($\"Task {id} is starting\");\r\n            Thread.Sleep(3000);\r\n            Console.WriteLine($\"Task {id} is completed\");\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer08-exec-service-v0a.cs",
    "content": "﻿/*\r\n * MY EXECUTOR SERVICE\r\n *\r\n * Version 0A: The easiest executor service\r\n * - It uses a blocking queue as underlying mechanism.\r\n */\r\nusing System;\r\nusing System.Collections.Concurrent;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nnamespace Exer08\r\n{\r\n    class MyExecServiceV0A\r\n    {\r\n        private int numThreads = 0;\r\n        private List<Thread> lstTh = new List<Thread>();\r\n        private BlockingCollection<IRunnable> taskPending = new BlockingCollection<IRunnable>();\r\n\r\n\r\n\r\n        public MyExecServiceV0A(int numThreads) {\r\n            init(numThreads);\r\n        }\r\n\r\n\r\n\r\n        private void init(int inpNumThreads)\r\n        {\r\n            numThreads = inpNumThreads;\r\n\r\n            for (int i = 0; i < numThreads; ++i)\r\n                lstTh.Add(new Thread(() => threadWorkerFunc(this)));\r\n\r\n            lstTh.ForEach(th => th.Start());\r\n        }\r\n\r\n\r\n\r\n        public void submit(IRunnable task)\r\n        {\r\n            taskPending.Add(task);\r\n        }\r\n\r\n\r\n\r\n        public void waitTaskDone()\r\n        {\r\n            // This ExecService is too simple,\r\n            // so there is no implementation for waitTaskDone()\r\n            Thread.Sleep(11000); // fake behaviour\r\n        }\r\n\r\n\r\n\r\n        public void shutdown()\r\n        {\r\n            // This ExecService is too simple,\r\n            // so there is no implementation for shutdown()\r\n            Console.WriteLine(\"No implementation for shutdown().\");\r\n            Console.WriteLine(\"You need to exit the app manually.\");\r\n        }\r\n\r\n\r\n\r\n        private static void threadWorkerFunc(MyExecServiceV0A thisPtr)\r\n        {\r\n            ref var taskPending = ref thisPtr.taskPending;\r\n            IRunnable task = null;\r\n\r\n            for (; ; )\r\n            {\r\n                // WAIT FOR AN AVAILABLE PENDING TASK\r\n             
   task = taskPending.Take();\r\n\r\n                // DO THE TASK\r\n                task.run();\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer08-exec-service-v0b.cs",
    "content": "﻿/*\r\n * MY EXECUTOR SERVICE\r\n *\r\n * Version 0B: The easiest executor service\r\n * - It uses a blocking queue as underlying mechanism.\r\n * - It supports waitTaskDone() and shutdown().\r\n */\r\nusing System;\r\nusing System.Collections.Concurrent;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nnamespace Exer08\r\n{\r\n    class MyExecServiceV0B\r\n    {\r\n        private int numThreads = 0;\r\n        private List<Thread> lstTh = new List<Thread>();\r\n\r\n        private BlockingCollection<IRunnable> taskPending = new BlockingCollection<IRunnable>();\r\n        private int counterTaskRunning = 0;\r\n\r\n        private volatile bool forceThreadShutdown = false;\r\n\r\n        private class EmptyTask : IRunnable\r\n        {\r\n            public void run() { }\r\n        }\r\n\r\n\r\n\r\n        public MyExecServiceV0B(int numThreads) {\r\n            init(numThreads);\r\n        }\r\n\r\n\r\n\r\n        private void init(int inpNumThreads)\r\n        {\r\n            numThreads = inpNumThreads;\r\n            Interlocked.Exchange(ref counterTaskRunning, 0);\r\n            forceThreadShutdown = false;\r\n\r\n            for (int i = 0; i < numThreads; ++i)\r\n                lstTh.Add(new Thread(() => threadWorkerFunc(this)));\r\n\r\n            lstTh.ForEach(th => th.Start());\r\n        }\r\n\r\n\r\n\r\n        public void submit(IRunnable task)\r\n        {\r\n            taskPending.Add(task);\r\n        }\r\n\r\n\r\n\r\n        public void waitTaskDone()\r\n        {\r\n            // This ExecService is too simple,\r\n            // so there is no good implementation for waitTaskDone()\r\n            while (\r\n                taskPending.Count > 0 ||\r\n                Interlocked.CompareExchange(ref counterTaskRunning, 0, 0) > 0\r\n            ) {\r\n                Thread.Sleep(1000);\r\n                // Thread.Yield();\r\n            }\r\n        }\r\n\r\n\r\n\r\n        public void 
shutdown()\r\n        {\r\n            forceThreadShutdown = true;\r\n            IRunnable temp;\r\n\r\n            while (taskPending.Count > 0)\r\n                taskPending.TryTake(out temp, 1000);\r\n\r\n            // Wake the blocked worker threads by adding \"empty\" tasks\r\n            for (int i = 0; i < numThreads; ++i)\r\n                taskPending.Add(new EmptyTask());\r\n\r\n            lstTh.ForEach(th => th.Join());\r\n\r\n            numThreads = 0;\r\n            lstTh.Clear();\r\n        }\r\n\r\n\r\n\r\n        private static void threadWorkerFunc(MyExecServiceV0B thisPtr)\r\n        {\r\n            ref var taskPending = ref thisPtr.taskPending;\r\n            ref var counterTaskRunning = ref thisPtr.counterTaskRunning;\r\n            IRunnable task = null;\r\n\r\n            for (; ; )\r\n            {\r\n                // WAIT FOR AN AVAILABLE PENDING TASK\r\n                task = taskPending.Take();\r\n\r\n                // If shutdown() was called, then exit the function\r\n                if (thisPtr.forceThreadShutdown)\r\n                {\r\n                    break;\r\n                }\r\n\r\n                // DO THE TASK\r\n                Interlocked.Increment(ref counterTaskRunning);\r\n                task.run();\r\n                Interlocked.Decrement(ref counterTaskRunning);\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer08-exec-service-v1a.cs",
    "content": "﻿/*\r\n * MY EXECUTOR SERVICE\r\n *\r\n * Version 1A: Simple executor service\r\n * - Method \"waitTaskDone\" invokes thread sleeps in loop (which can cause performance problems).\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nnamespace Exer08\r\n{\r\n    class MyExecServiceV1A\r\n    {\r\n        private int numThreads = 0;\r\n        private List<Thread> lstTh = new List<Thread>();\r\n\r\n        private Queue<IRunnable> taskPending = new Queue<IRunnable>();\r\n        private int counterTaskRunning = 0;\r\n\r\n        private volatile bool forceThreadShutdown = false;\r\n\r\n\r\n\r\n        public MyExecServiceV1A(int numThreads) {\r\n            init(numThreads);\r\n        }\r\n\r\n\r\n\r\n        private void init(int inpNumThreads)\r\n        {\r\n            // shutdown();\r\n\r\n            numThreads = inpNumThreads;\r\n            Interlocked.Exchange(ref counterTaskRunning, 0);\r\n            forceThreadShutdown = false;\r\n\r\n            for (int i = 0; i < numThreads; ++i)\r\n                lstTh.Add(new Thread(() => threadWorkerFunc(this)));\r\n\r\n            lstTh.ForEach(th => th.Start());\r\n        }\r\n\r\n\r\n\r\n        public void submit(IRunnable task)\r\n        {\r\n            lock (taskPending)\r\n            {\r\n                taskPending.Enqueue(task);\r\n                Monitor.Pulse(taskPending);\r\n            }\r\n        }\r\n\r\n\r\n\r\n        public void waitTaskDone()\r\n        {\r\n            bool done = false;\r\n\r\n            for (; ; )\r\n            {\r\n                lock (taskPending)\r\n                {\r\n                    if (0 == taskPending.Count && 0 == counterTaskRunning)\r\n                    {\r\n                        done = true;\r\n                    }\r\n                }\r\n\r\n                if (done)\r\n                    break;\r\n\r\n                Thread.Sleep(1000);\r\n                // Thread.Yield();\r\n    
        }\r\n        }\r\n\r\n\r\n\r\n        public void shutdown()\r\n        {\r\n            lock (taskPending)\r\n            {\r\n                forceThreadShutdown = true;\r\n                taskPending.Clear();\r\n                Monitor.PulseAll(taskPending);\r\n            }\r\n\r\n            lstTh.ForEach(th => th.Join());\r\n\r\n            numThreads = 0;\r\n            lstTh.Clear();\r\n        }\r\n\r\n\r\n\r\n        private static void threadWorkerFunc(MyExecServiceV1A thisPtr)\r\n        {\r\n            ref var taskPending = ref thisPtr.taskPending;\r\n            ref var counterTaskRunning = ref thisPtr.counterTaskRunning;\r\n\r\n            IRunnable task = null;\r\n\r\n            for (; ; )\r\n            {\r\n                // WAIT FOR AN AVAILABLE PENDING TASK\r\n                lock (taskPending)\r\n                {\r\n                    while (0 == taskPending.Count && false == thisPtr.forceThreadShutdown)\r\n                    {\r\n                        Monitor.Wait(taskPending);\r\n                    }\r\n\r\n                    if (thisPtr.forceThreadShutdown)\r\n                    {\r\n                        break;\r\n                    }\r\n\r\n                    // GET THE TASK FROM THE PENDING QUEUE\r\n                    task = taskPending.Dequeue();\r\n\r\n                    Interlocked.Increment(ref counterTaskRunning);\r\n                }\r\n\r\n                // DO THE TASK\r\n                task.run();\r\n                Interlocked.Decrement(ref counterTaskRunning);\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer08-exec-service-v1b.cs",
    "content": "﻿/*\r\n * MY EXECUTOR SERVICE\r\n *\r\n * Version 1B: Simple executor service\r\n * - Method \"waitTaskDone\" uses a condition variable to synchronize.\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nnamespace Exer08\r\n{\r\n    class MyExecServiceV1B\r\n    {\r\n        private int numThreads = 0;\r\n        private List<Thread> lstTh = new List<Thread>();\r\n\r\n        private Queue<IRunnable> taskPending = new Queue<IRunnable>();\r\n\r\n        private int counterTaskRunning = 0;\r\n        private object lkTaskRunning = new object();\r\n\r\n        private volatile bool forceThreadShutdown = false;\r\n\r\n\r\n\r\n        public MyExecServiceV1B(int numThreads) {\r\n            init(numThreads);\r\n        }\r\n\r\n\r\n\r\n        private void init(int inpNumThreads)\r\n        {\r\n            // shutdown();\r\n\r\n            numThreads = inpNumThreads;\r\n            counterTaskRunning = 0;\r\n            forceThreadShutdown = false;\r\n\r\n            for (int i = 0; i < numThreads; ++i)\r\n                lstTh.Add(new Thread(() => threadWorkerFunc(this)));\r\n\r\n            lstTh.ForEach(th => th.Start());\r\n        }\r\n\r\n\r\n\r\n        public void submit(IRunnable task)\r\n        {\r\n            lock (taskPending)\r\n            {\r\n                taskPending.Enqueue(task);\r\n                Monitor.Pulse(taskPending);\r\n            }\r\n        }\r\n\r\n\r\n\r\n        public void waitTaskDone()\r\n        {\r\n            for (; ; )\r\n            {\r\n                lock (taskPending)\r\n                {\r\n                    if (0 == taskPending.Count)\r\n                    {\r\n                        lock (lkTaskRunning)\r\n                        {\r\n                            while (counterTaskRunning > 0)\r\n                            {\r\n                                Monitor.Wait(lkTaskRunning);\r\n                            }\r\n\r\n                   
         // no pending task and no running task\r\n                            break;\r\n                        }\r\n                    }\r\n                }\r\n            }\r\n        }\r\n\r\n\r\n\r\n        public void shutdown()\r\n        {\r\n            lock (taskPending)\r\n            {\r\n                forceThreadShutdown = true;\r\n                taskPending.Clear();\r\n                Monitor.PulseAll(taskPending);\r\n            }\r\n\r\n            lstTh.ForEach(th => th.Join());\r\n\r\n            numThreads = 0;\r\n            lstTh.Clear();\r\n        }\r\n\r\n\r\n\r\n        private static void threadWorkerFunc(MyExecServiceV1B thisPtr)\r\n        {\r\n            ref var taskPending = ref thisPtr.taskPending;\r\n            ref var counterTaskRunning = ref thisPtr.counterTaskRunning;\r\n            ref var lkTaskRunning = ref thisPtr.lkTaskRunning;\r\n\r\n            IRunnable task = null;\r\n\r\n            for (; ; )\r\n            {\r\n                // WAIT FOR AN AVAILABLE PENDING TASK\r\n                lock (taskPending)\r\n                {\r\n                    while (0 == taskPending.Count && false == thisPtr.forceThreadShutdown)\r\n                    {\r\n                        Monitor.Wait(taskPending);\r\n                    }\r\n\r\n                    if (thisPtr.forceThreadShutdown)\r\n                    {\r\n                        break;\r\n                    }\r\n\r\n                    // GET THE TASK FROM THE PENDING QUEUE\r\n                    task = taskPending.Dequeue();\r\n\r\n                    ++counterTaskRunning;\r\n                }\r\n\r\n                // DO THE TASK\r\n                task.run();\r\n\r\n                lock (lkTaskRunning)\r\n                {\r\n                    --counterTaskRunning;\r\n\r\n                    if (0 == counterTaskRunning) {\r\n                        Monitor.Pulse(lkTaskRunning);\r\n                    }\r\n                }\r\n            }\r\n        }\r\n    
}\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer08-exec-service-v2a.cs",
    "content": "﻿/*\r\n * MY EXECUTOR SERVICE\r\n *\r\n * Version 2A: The executor service storing running tasks\r\n * - Method \"waitTaskDone\" uses a semaphore to synchronize.\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nnamespace Exer08\r\n{\r\n    class MyExecServiceV2A\r\n    {\r\n        private int numThreads = 0;\r\n        private List<Thread> lstTh = new List<Thread>();\r\n\r\n        private Queue<IRunnable> taskPending = new Queue<IRunnable>();\r\n        private List<IRunnable> taskRunning = new List<IRunnable>();\r\n\r\n        private SemaphoreSlim counterTaskRunning = new SemaphoreSlim(0);\r\n\r\n        private volatile bool forceThreadShutdown = false;\r\n\r\n\r\n\r\n        public MyExecServiceV2A(int numThreads) {\r\n            init(numThreads);\r\n        }\r\n\r\n\r\n\r\n        private void init(int inpNumThreads)\r\n        {\r\n            // shutdown();\r\n\r\n            numThreads = inpNumThreads;\r\n            forceThreadShutdown = false;\r\n\r\n            for (int i = 0; i < numThreads; ++i)\r\n                lstTh.Add(new Thread(() => threadWorkerFunc(this)));\r\n\r\n            lstTh.ForEach(th => th.Start());\r\n        }\r\n\r\n\r\n\r\n        public void submit(IRunnable task)\r\n        {\r\n            lock (taskPending)\r\n            {\r\n                taskPending.Enqueue(task);\r\n                Monitor.Pulse(taskPending);\r\n            }\r\n        }\r\n\r\n\r\n\r\n        public void waitTaskDone()\r\n        {\r\n            for (; ; )\r\n            {\r\n                counterTaskRunning.Wait();\r\n\r\n                lock (taskPending)\r\n                {\r\n                    lock (taskRunning)\r\n                    {\r\n                        if (0 == taskPending.Count && 0 == taskRunning.Count\r\n                            /* && 0 == counterTaskRunning.CurrentCount */\r\n                        )\r\n                            break;\r\n           
         }\r\n                }\r\n            }\r\n        }\r\n\r\n\r\n\r\n        public void shutdown()\r\n        {\r\n            lock (taskPending)\r\n            {\r\n                forceThreadShutdown = true;\r\n                taskPending.Clear();\r\n                Monitor.PulseAll(taskPending);\r\n            }\r\n\r\n            lstTh.ForEach(th => th.Join());\r\n\r\n            numThreads = 0;\r\n            lstTh.Clear();\r\n            taskRunning.Clear();\r\n\r\n            if (counterTaskRunning.CurrentCount > 0)\r\n                counterTaskRunning.Release(counterTaskRunning.CurrentCount);\r\n        }\r\n\r\n\r\n\r\n        private static void threadWorkerFunc(MyExecServiceV2A thisPtr)\r\n        {\r\n            ref var taskPending = ref thisPtr.taskPending;\r\n            ref var taskRunning = ref thisPtr.taskRunning;\r\n            ref var counterTaskRunning = ref thisPtr.counterTaskRunning;\r\n\r\n            IRunnable task = null;\r\n\r\n            for (; ; )\r\n            {\r\n                lock (taskPending)\r\n                {\r\n                    // WAIT FOR AN AVAILABLE PENDING TASK\r\n                    while (0 == taskPending.Count && false == thisPtr.forceThreadShutdown)\r\n                    {\r\n                        Monitor.Wait(taskPending);\r\n                    }\r\n\r\n                    if (thisPtr.forceThreadShutdown)\r\n                    {\r\n                        break;\r\n                    }\r\n\r\n                    // GET THE TASK FROM THE PENDING QUEUE\r\n                    task = taskPending.Dequeue();\r\n\r\n                    // PUSH IT TO THE RUNNING QUEUE\r\n                    lock (taskRunning)\r\n                    {\r\n                        taskRunning.Add(task);\r\n                    }\r\n                }\r\n\r\n                // DO THE TASK\r\n                task.run();\r\n\r\n                // REMOVE IT FROM THE RUNNING QUEUE\r\n                lock (taskRunning)\r\n         
       {\r\n                    taskRunning.Remove(task);\r\n                }\r\n\r\n                counterTaskRunning.Release();\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/exercise/exer08-exec-service-v2b.cs",
    "content": "﻿/*\r\n * MY EXECUTOR SERVICE\r\n *\r\n * Version 2B: The executor service storing running tasks\r\n * - Method \"waitTaskDone\" uses a condition variable to synchronize.\r\n */\r\nusing System;\r\nusing System.Collections.Generic;\r\nusing System.Threading;\r\n\r\n\r\n\r\nnamespace Exer08\r\n{\r\n    class MyExecServiceV2B\r\n    {\r\n        private int numThreads = 0;\r\n        private List<Thread> lstTh = new List<Thread>();\r\n\r\n        private Queue<IRunnable> taskPending = new Queue<IRunnable>();\r\n        private List<IRunnable> taskRunning = new List<IRunnable>();\r\n\r\n        private volatile bool forceThreadShutdown = false;\r\n\r\n\r\n\r\n        public MyExecServiceV2B(int numThreads) {\r\n            init(numThreads);\r\n        }\r\n\r\n\r\n\r\n        private void init(int inpNumThreads)\r\n        {\r\n            // shutdown();\r\n\r\n            numThreads = inpNumThreads;\r\n            forceThreadShutdown = false;\r\n\r\n            for (int i = 0; i < numThreads; ++i)\r\n                lstTh.Add(new Thread(() => threadWorkerFunc(this)));\r\n\r\n            lstTh.ForEach(th => th.Start());\r\n        }\r\n\r\n\r\n\r\n        public void submit(IRunnable task)\r\n        {\r\n            lock (taskPending)\r\n            {\r\n                taskPending.Enqueue(task);\r\n                Monitor.Pulse(taskPending);\r\n            }\r\n        }\r\n\r\n\r\n\r\n        public void waitTaskDone()\r\n        {\r\n            for (; ; )\r\n            {\r\n                lock (taskPending)\r\n                {\r\n                    if (0 == taskPending.Count)\r\n                    {\r\n                        lock (taskRunning)\r\n                        {\r\n                            while (taskRunning.Count > 0)\r\n                                Monitor.Wait(taskRunning);\r\n\r\n                            // no pending task and no running task\r\n                            break;\r\n                        }\r\n        
            }\r\n                }\r\n            }\r\n        }\r\n\r\n\r\n\r\n        public void shutdown()\r\n        {\r\n            lock (taskPending)\r\n            {\r\n                forceThreadShutdown = true;\r\n                taskPending.Clear();\r\n                Monitor.PulseAll(taskPending);\r\n            }\r\n\r\n            lstTh.ForEach(th => th.Join());\r\n\r\n            numThreads = 0;\r\n            lstTh.Clear();\r\n            taskRunning.Clear();\r\n        }\r\n\r\n\r\n\r\n        private static void threadWorkerFunc(MyExecServiceV2B thisPtr)\r\n        {\r\n            ref var taskPending = ref thisPtr.taskPending;\r\n            ref var taskRunning = ref thisPtr.taskRunning;\r\n            IRunnable task = null;\r\n\r\n            for (; ; )\r\n            {\r\n                lock (taskPending)\r\n                {\r\n                    // WAIT FOR AN AVAILABLE PENDING TASK\r\n                    while (0 == taskPending.Count && false == thisPtr.forceThreadShutdown)\r\n                    {\r\n                        Monitor.Wait(taskPending);\r\n                    }\r\n\r\n                    if (thisPtr.forceThreadShutdown)\r\n                    {\r\n                        break;\r\n                    }\r\n\r\n                    // GET THE TASK FROM THE PENDING QUEUE\r\n                    task = taskPending.Dequeue();\r\n\r\n                    // PUSH IT TO THE RUNNING QUEUE\r\n                    lock (taskRunning)\r\n                    {\r\n                        taskRunning.Add(task);\r\n                    }\r\n                }\r\n\r\n                // DO THE TASK\r\n                task.run();\r\n\r\n                // REMOVE IT FROM THE RUNNING QUEUE\r\n                lock (taskRunning)\r\n                {\r\n                    taskRunning.Remove(task);\r\n                    Monitor.PulseAll(taskRunning);\r\n                }\r\n            }\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "csharp/multithreading.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\r\n\r\n  <PropertyGroup>\r\n    <OutputType>Exe</OutputType>\r\n    <TargetFramework>net6.0</TargetFramework>\r\n  </PropertyGroup>\r\n\r\n</Project>\r\n"
  },
  {
    "path": "csharp/multithreading.sln",
    "content": "﻿\r\nMicrosoft Visual Studio Solution File, Format Version 12.00\r\n# Visual Studio Version 16\r\nVisualStudioVersion = 16.0.31729.503\r\nMinimumVisualStudioVersion = 10.0.40219.1\r\nProject(\"{9A19103F-16F7-4668-BE54-9A1E7A4F7556}\") = \"multithreading\", \"multithreading.csproj\", \"{E4A5B6AD-5516-49B7-90D2-08E0CC56D10B}\"\r\nEndProject\r\nGlobal\r\n\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\r\n\t\tDebug|Any CPU = Debug|Any CPU\r\n\t\tRelease|Any CPU = Release|Any CPU\r\n\tEndGlobalSection\r\n\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\r\n\t\t{E4A5B6AD-5516-49B7-90D2-08E0CC56D10B}.Debug|Any CPU.ActiveCfg = Debug|Any CPU\r\n\t\t{E4A5B6AD-5516-49B7-90D2-08E0CC56D10B}.Debug|Any CPU.Build.0 = Debug|Any CPU\r\n\t\t{E4A5B6AD-5516-49B7-90D2-08E0CC56D10B}.Release|Any CPU.ActiveCfg = Release|Any CPU\r\n\t\t{E4A5B6AD-5516-49B7-90D2-08E0CC56D10B}.Release|Any CPU.Build.0 = Release|Any CPU\r\n\tEndGlobalSection\r\n\tGlobalSection(SolutionProperties) = preSolution\r\n\t\tHideSolutionNode = FALSE\r\n\tEndGlobalSection\r\n\tGlobalSection(ExtensibilityGlobals) = postSolution\r\n\t\tSolutionGuid = {5BE3393C-A944-4DC7-AC10-4ABF4213147E}\r\n\tEndGlobalSection\r\nEndGlobal\r\n"
  },
  {
    "path": "java/.gitignore",
    "content": "/bin/\n/build/\n/dist/\n.classpath\n.project\n"
  },
  {
    "path": "java/README.md",
    "content": "# JAVA MULTITHREADING\n\n## DESCRIPTION\n\nMultithreading in Java.\n\n## PROJECT SPECIFICATIONS\n\nWARNING: This project uses Java 17 features.\n"
  },
  {
    "path": "java/src/demo00_intro/App.java",
    "content": "/*\n * INTRODUCTION TO MULTITHREADING\n * You should try running this app several times and see results.\n */\n\npackage demo00_intro;\n\n\n\npublic class App {\n\n    public static void main(String[] args) {\n        Thread th = new ExampleThread();\n\n        th.start();\n\n        for (int i = 0; i < 300; ++i)\n            System.out.print(\"A\");\n    }\n\n}\n\n\n\nclass ExampleThread extends Thread {\n    @Override\n    public void run() {\n        for (int i = 0; i < 300; ++i)\n            System.out.print(\"B\");\n    }\n}\n"
  },
  {
    "path": "java/src/demo01_hello/AppA01.java",
    "content": "/*\n * HELLO WORLD VERSION MULTITHREADING\n * Version A01: Using class Thread (original way)\n */\n\npackage demo01_hello;\n\n\n\npublic class AppA01 {\n\n    public static void main(String[] args) throws InterruptedException {\n        var th = new ExampleThread();\n        th.start();\n        System.out.println(\"Hello from main thread\");\n    }\n\n}\n\n\n\nclass ExampleThread extends Thread {\n    @Override\n    public void run() {\n        System.out.println(\"Hello from example thread\");\n    }\n}\n"
  },
  {
    "path": "java/src/demo01_hello/AppA02.java",
    "content": "/*\n * HELLO WORLD VERSION MULTITHREADING\n * Version A02: Using class Thread (anonymous subclass)\n */\n\npackage demo01_hello;\n\n\n\npublic class AppA02 {\n\n    public static void main(String[] args) throws InterruptedException {\n        var th = new Thread() {\n            @Override\n            public void run() {\n                System.out.println(\"Hello from example thread\");\n            }\n        };\n\n        th.start();\n\n        System.out.println(\"Hello from main thread\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo01_hello/AppB01.java",
    "content": "/*\n * HELLO WORLD VERSION MULTITHREADING\n * Version B01: Using interface Runnable\n */\n\npackage demo01_hello;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) throws InterruptedException {\n        var doTask = new ExampleRunnable();\n\n        var th1 = new Thread(doTask);\n        var th2 = new Thread(doTask);\n\n        th1.start();\n        th2.start();\n\n        System.out.println(\"Hello from main thread\");\n    }\n\n}\n\n\n\nclass ExampleRunnable implements Runnable {\n    @Override\n    public void run() {\n        System.out.println(\"Hello from example thread\");\n    }\n}\n"
  },
  {
    "path": "java/src/demo01_hello/AppB02.java",
    "content": "/*\n * HELLO WORLD VERSION MULTITHREADING\n * Version B02: Using interface Runnable with lambdas\n */\n\npackage demo01_hello;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) throws InterruptedException {\n        Runnable doTask = () -> System.out.println(\"Hello from example thread\");\n\n        var th1 = new Thread(doTask);\n        var th2 = new Thread(doTask);\n\n        th1.start();\n        th2.start();\n\n        System.out.println(\"Hello from main thread\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo01_hello/AppB03.java",
    "content": "/*\n * HELLO WORLD VERSION MULTITHREADING\n * Version B03: Using interface Runnable with an inline lambda\n */\n\npackage demo01_hello;\n\n\n\npublic class AppB03 {\n\n    public static void main(String[] args) throws InterruptedException {\n        var th = new Thread(() -> System.out.println(\"Hello from example thread\"));\n\n        th.start();\n\n        System.out.println(\"Hello from main thread\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo01_hello/AppC01.java",
    "content": "/*\n * HELLO WORLD VERSION MULTITHREADING\n * Version C01: Using functions (via delegation from lambdas)\n */\n\npackage demo01_hello;\n\n\n\npublic class AppC01 {\n\n    public static void main(String[] args) throws InterruptedException {\n        var th = new Thread(() -> doTask());\n\n        th.start();\n\n        System.out.println(\"Hello from main thread\");\n    }\n\n\n    private static void doTask() {\n        System.out.println(\"Hello from example thread\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo01_hello/AppC02.java",
    "content": "/*\n * HELLO WORLD VERSION MULTITHREADING\n * Version C02: Using function references\n */\n\npackage demo01_hello;\n\n\n\npublic class AppC02 {\n\n    public static void main(String[] args) throws InterruptedException {\n        var th = new Thread(AppC02::doTask);\n\n        th.start();\n\n        System.out.println(\"Hello from main thread\");\n    }\n\n\n    private static void doTask() {\n        System.out.println(\"Hello from example thread\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo01_hello/AppExtra.java",
    "content": "/*\n * HELLO WORLD VERSION MULTITHREADING\n * Version extra: Getting the thread's name and a reference to the currently executing Thread instance\n */\n\npackage demo01_hello;\n\n\n\npublic class AppExtra {\n\n    public static void main(String[] args) throws InterruptedException {\n        var th = new Thread(\"Lorem\") {\n            @Override\n            public void run() {\n                Thread myself = Thread.currentThread(); // here, myself refers to th\n\n                System.out.println(\"My name is \" + this.getName()); // or myself.getName()\n                System.out.println(\"My self is \" + myself);\n            }\n        };\n\n        th.start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo02_join/AppA.java",
    "content": "/*\n * THREAD JOINS\n */\n\npackage demo02_join;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) throws InterruptedException {\n        Thread th = new Thread(() -> doHeavyTask());\n\n        th.start();\n        th.join();\n\n        System.out.println(\"Good bye!\");\n    }\n\n\n    @SuppressWarnings(\"unused\")\n    private static void doHeavyTask() {\n        // Do a heavy task, which takes a little time\n        long sum = 0;\n        for (int i = 0; i < 2000000000; ++i)\n            sum += i;\n\n        System.out.println(\"Done!\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo02_join/AppB.java",
    "content": "/*\n * THREAD JOINS\n */\n\npackage demo02_join;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) {\n        Thread thFoo = new Thread(() -> System.out.println(\"foo\"));\n        Thread thBar = new Thread(() -> System.out.println(\"bar\"));\n\n        thFoo.start();\n        thBar.start();\n\n        // thFoo.join();\n        // thBar.join();\n\n        /*\n         * We do not need to call thFoo.join() and thBar.join().\n         * The reason is that the JVM will not exit until all non-daemon threads have completed.\n         */\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo03_pass_arg/AppA.java",
    "content": "/*\n * PASSING ARGUMENTS\n * Version A: Passing arguments to Thread objects\n */\n\npackage demo03_pass_arg;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) throws InterruptedException {\n        var thFoo = new MyThread(1, 2, \"red\");\n        var thBar = new MyThread(3, 4, \"blue\");\n\n        thFoo.start();\n        thBar.start();\n    }\n\n}\n\n\n\nclass MyThread extends Thread {\n    private int a;\n    private double b;\n    private String c;\n\n    public MyThread(int a, double b, String c) {\n        super();\n        this.a = a;\n        this.b = b;\n        this.c = c;\n    }\n\n    @Override\n    public void run() {\n        System.out.printf(\"%d  %.1f  %s %n\", a, b, c);\n    }\n}\n"
  },
  {
    "path": "java/src/demo03_pass_arg/AppB.java",
    "content": "/*\n * PASSING ARGUMENTS\n * Version B: Passing arguments to functions\n */\n\npackage demo03_pass_arg;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        var thFoo = new Thread(() -> doTask(1, 2, \"red\"));\n        var thBar = new Thread(() -> doTask(3, 4, \"blue\"));\n\n        thFoo.start();\n        thBar.start();\n    }\n\n\n    private static void doTask(int a, double b, String c) {\n        System.out.printf(\"%d  %.1f  %s %n\", a, b, c);\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo03_pass_arg/AppC.java",
    "content": "/*\n * PASSING ARGUMENTS\n * Version C: Passing arguments by capturing them\n *\n * Note: The captured variables must be final or effectively final.\n */\n\npackage demo03_pass_arg;\n\n\n\npublic class AppC {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int COUNT = 10;\n\n        new Thread(() -> {\n\n            for (int i = 1; i <= COUNT; ++i)\n                System.out.println(\"value: \" + i);\n\n        }).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo04_sleep/App.java",
    "content": "/*\n * SLEEP\n */\n\npackage demo04_sleep;\n\n\n\npublic class App {\n\n    public static void main(String[] args) throws InterruptedException {\n        var thFoo = new Thread(() -> {\n            System.out.println(\"foo is sleeping\");\n\n            try {\n                Thread.sleep(3000);\n            } catch (InterruptedException e) {\n\n            }\n\n            System.out.println(\"foo wakes up\");\n        });\n\n\n        thFoo.start();\n        thFoo.join();\n\n\n        System.out.println(\"Good bye\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo05_id/App.java",
    "content": "/*\n * GETTING THREAD'S ID\n */\n\npackage demo05_id;\n\n\n\npublic class App {\n\n    public static void main(String[] args) {\n        Runnable doTask = () -> {\n            long id = Thread.currentThread().getId();\n            System.out.println(id);\n        };\n\n        var thFoo = new Thread(doTask);\n        var thBar = new Thread(doTask);\n\n        System.out.println(\"foo's id: \" + thFoo.getId());\n        System.out.println(\"bar's id: \" + thBar.getId());\n\n        thFoo.start();\n        thBar.start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo06_list_threads/AppA.java",
    "content": "/*\n * LIST OF MULTIPLE THREADS\n * Version A: Using java.util.List / java.util.ArrayList\n */\n\npackage demo06_list_threads;\n\nimport java.util.ArrayList;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        final int NUM_THREADS = 5;\n\n        var lstTh = new ArrayList<Thread>();\n\n\n        for (int i = 0; i < NUM_THREADS; ++i) {\n            final int index = i;\n\n            lstTh.add(new Thread(() -> {\n                try { Thread.sleep(500); }\n                catch (InterruptedException e) { }\n\n                System.out.print(index);\n            }));\n        }\n\n\n        for (var th : lstTh)\n            th.start();\n\n        // We can replace the above for loop with this statement:\n        // lstTh.forEach(Thread::start);\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo06_list_threads/AppB01.java",
    "content": "/*\n * LIST OF MULTIPLE THREADS\n * Version B01: Using streams\n */\n\npackage demo06_list_threads;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) {\n        var lstTh = IntStream.range(0, 5).mapToObj(i -> new Thread(() -> {\n\n            try { Thread.sleep(500); }\n            catch (InterruptedException e) { }\n\n            System.out.print(i);\n\n        })).toList();\n\n\n        for (var th : lstTh)\n            th.start();\n\n        // We can replace the above for loop with this statement:\n        // lstTh.forEach(Thread::start);\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo06_list_threads/AppB02.java",
    "content": "/*\n * LIST OF MULTIPLE THREADS\n * Version B02: Using streams (shorter code, no variable storing the list of threads)\n */\n\npackage demo06_list_threads;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) {\n        IntStream.range(0, 5).forEach(i -> new Thread(() -> {\n\n            try { Thread.sleep(500); }\n            catch (InterruptedException e) { }\n\n            System.out.print(i);\n\n        }).start());\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo07_terminate/AppA.java",
    "content": "/*\n * FORCING A THREAD TO TERMINATE (i.e. killing the thread)\n * Version A: Interrupting the thread\n */\n\npackage demo07_terminate;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) throws InterruptedException {\n        var th = new Thread(() -> {\n            while (true) {\n                System.out.println(\"Running...\");\n\n                try { Thread.sleep(1000); }\n                catch (InterruptedException e) {\n                    // Received interrupt signal, now current thread is going to exit\n                    return;\n                }\n            }\n        });\n\n\n        th.start();\n\n        Thread.sleep(3000);\n\n        th.interrupt();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo07_terminate/AppB.java",
    "content": "/*\n * FORCING A THREAD TO TERMINATE (i.e. killing the thread)\n * Version B: Using a flag to notify the thread\n *\n * Besides atomic variables, you can use the \"volatile\" keyword.\n */\n\npackage demo07_terminate;\n\nimport java.util.concurrent.atomic.AtomicBoolean;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        var flagStop = new AtomicBoolean(false);\n\n        var th = new Thread(() -> {\n            while (true) {\n                if (flagStop.get())\n                    break;\n\n                System.out.println(\"Running...\");\n\n                try { Thread.sleep(1000); }\n                catch (InterruptedException e) { }\n            }\n        });\n\n\n        th.start();\n\n        Thread.sleep(3000);\n\n        flagStop.set(true);\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo08_return_value/AppA.java",
    "content": "/*\n * GETTING RETURNED VALUES FROM THREADS\n */\n\npackage demo08_return_value;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) throws InterruptedException {\n        var thFoo = new MyThread(5);\n        var thBar = new MyThread(80);\n\n        thFoo.start();\n        thBar.start();\n\n        thFoo.join();\n        thBar.join();\n\n        System.out.println(thFoo.result);\n        System.out.println(thBar.result);\n    }\n\n}\n\n\n\nclass MyThread extends Thread {\n    public int arg = 0;\n    public int result = 0;\n\n    public MyThread(int arg) {\n        super();\n        this.arg = arg;\n    }\n\n    @Override\n    public void run() {\n        result = arg * 2;\n    }\n}\n"
  },
  {
    "path": "java/src/demo08_return_value/AppB.java",
    "content": "/*\n * GETTING RETURNED VALUES FROM THREADS\n */\n\npackage demo08_return_value;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        int[] result = new int[2];\n\n        var thFoo = new Thread(() -> result[0] = doubleValue(5));\n        var thBar = new Thread(() -> result[1] = doubleValue(80));\n\n        thFoo.start();\n        thBar.start();\n\n        // Wait until thFoo and thBar finish\n        thFoo.join();\n        thBar.join();\n\n        System.out.println(result[0]);\n        System.out.println(result[1]);\n    }\n\n\n    private static int doubleValue(int value) {\n        return value * 2;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo08_return_value/AppC.java",
    "content": "/*\n * GETTING RETURNED VALUES FROM THREADS\n */\n\npackage demo08_return_value;\n\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\n\n\npublic class AppC {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        var cal = new SimpleCalculator();\n        System.out.println(\"Begin calculating\");\n\n        var futResult = cal.calculate(-9);\n\n        var result = futResult.get();\n\n        System.out.println(result);\n\n        cal.shutdown();\n    }\n\n}\n\n\n\nclass SimpleCalculator {\n    private ExecutorService executor = Executors.newSingleThreadExecutor();\n\n    public Future<Integer> calculate(Integer input) {\n        return executor.submit(() -> {\n            Thread.sleep(1000);\n            return input * 2;\n        });\n    }\n\n    public void shutdown() {\n        executor.shutdown();\n    }\n}\n"
  },
  {
    "path": "java/src/demo09_detach/App.java",
    "content": "/*\n * THREAD DETACHING\n */\n\npackage demo09_detach;\n\n\n\npublic class App {\n\n    public static void main(String[] args) throws InterruptedException {\n        var thFoo = new Thread(() -> {\n            System.out.println(\"foo is starting...\");\n\n            try { Thread.sleep(2000); } catch (InterruptedException e) { }\n\n            System.out.println(\"foo is exiting...\");\n        });\n\n\n        thFoo.setDaemon(true);\n        thFoo.start();\n\n\n        // If this statement is commented out,\n        // the daemon thread thFoo will be terminated when the main thread exits\n        Thread.sleep(3000);\n\n\n        System.out.println(\"Main thread is exiting\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo10_yield/App.java",
    "content": "/*\n * THREAD YIELDING\n */\n\npackage demo10_yield;\n\nimport java.time.Duration;\nimport java.time.Instant;\n\n\n\npublic class App {\n\n    public static void main(String[] args) {\n        var tpStartMeasure = Instant.now();\n\n        littleSleep(130000);\n\n        var timeElapsed = Duration.between(tpStartMeasure, Instant.now());\n\n        System.out.println(\"Elapsed time: \" + timeElapsed.toNanos() + \" nanoseconds\");\n    }\n\n\n    private static void littleSleep(int ns) {\n        var tpStart = Instant.now();\n        var tpEnd = tpStart.plusNanos(ns);\n\n        do {\n            Thread.yield();\n        }\n        while (Instant.now().isBefore(tpEnd));\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppA.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Version A: The executor service whose thread pool contains a single thread\n *\n * Note: The single-thread executor is ideal for creating an event loop.\n */\n\npackage demo11_exec_service;\n\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        ExecutorService executor = Executors.newSingleThreadExecutor();\n\n        executor.submit(() -> System.out.println(\"Hello World\"));\n        executor.submit(() -> System.out.println(\"Hello Multithreading\"));\n\n        Runnable rnn = () -> System.out.println(\"Hello the Executor Service\");\n        executor.submit(rnn);\n\n        executor.shutdown();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppB01.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Version B01: The executor service containing multiple threads - FixedThreadPool\n */\n\npackage demo11_exec_service;\n\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 2;\n        final int NUM_TASKS = 5;\n\n\n        ExecutorService executor = Executors.newFixedThreadPool(NUM_THREADS);\n\n\n        IntStream.range(0, NUM_TASKS).forEach(i -> executor.submit(() -> {\n            char nameTask = (char) (i + 'A');\n\n            System.out.println(\"Task %c is starting\".formatted(nameTask));\n\n            try { Thread.sleep(3000); } catch (InterruptedException e) { }\n\n            System.out.println(\"Task %c is completed\".formatted(nameTask));\n        }));\n\n\n        // shutdown() stops the ExecutorService from accepting new tasks;\n        // previously submitted tasks still run to completion, then the worker threads exit\n        executor.shutdown();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppB02.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Version B02: The executor service containing multiple threads - FixedThreadPool\n */\n\npackage demo11_exec_service;\n\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.TimeUnit;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 2;\n        final int NUM_TASKS = 5;\n\n\n        ExecutorService executor = Executors.newFixedThreadPool(NUM_THREADS);\n\n\n        IntStream.range(0, NUM_TASKS).forEach(i -> executor.submit(() -> {\n            char nameTask = (char) (i + 'A');\n\n            System.out.println(\"Task %c is starting\".formatted(nameTask));\n\n            try { Thread.sleep(3000); } catch (InterruptedException e) { }\n\n            System.out.println(\"Task %c is completed\".formatted(nameTask));\n        }));\n\n\n        executor.shutdown();\n\n\n        System.out.println(\"All tasks are submitted\");\n\n        executor.awaitTermination(20, TimeUnit.SECONDS);\n\n        System.out.println(\"All tasks are completed\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppC01.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Version C01: The executor service and Future objects - Getting started\n *\n * Future objects help you to programatically manage tasks, such as:\n * - Wait for a task to finish executing (and get result), with the \"get\" method.\n * - Cancel a task prematurely, with the \"cancel\" method.\n */\n\npackage demo11_exec_service;\n\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\n\n\npublic class AppC01 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        ExecutorService executor = Executors.newSingleThreadExecutor();\n\n        Future<String> task = executor.submit(() -> \"lorem ipsum\");\n\n        executor.shutdown();\n\n        while (false == task.isDone()) {\n            // Waiting...\n        }\n\n        String result = task.get();\n        System.out.println(result);\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppC02.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Version C02: The executor service and Future objects - Getting started\n */\n\npackage demo11_exec_service;\n\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\n\n\npublic class AppC02 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        ExecutorService executor = Executors.newSingleThreadExecutor();\n\n        Future<Integer> task = executor.submit(() -> getSquared(7));\n\n        executor.shutdown();\n\n        while (false == task.isDone()) {\n            // Waiting...\n        }\n\n        Integer result = task.get();\n        System.out.println(result);\n    }\n\n\n    private static int getSquared(int x) {\n        return x * x;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppC03.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Version C03: The executor service and Future objects - Getting started\n */\n\npackage demo11_exec_service;\n\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\n\n\npublic class AppC03 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        ExecutorService executor = Executors.newSingleThreadExecutor();\n\n        Future<Integer> task = executor.submit(() -> getSquared(7));\n        executor.shutdown();\n\n        /*\n         * Method \"Future.get\" should wait if necessary for the computation to complete,\n         * and then retrieves its result.\n         *\n         * So, we can omit the while loop (to wait for task completion).\n         */\n\n        Integer result = task.get();\n        System.out.println(result);\n    }\n\n\n    private static int getSquared(int x) {\n        return x * x;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppC04.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Version C04: The executor service and Future objects - Getting started\n */\n\npackage demo11_exec_service;\n\nimport java.util.concurrent.Callable;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\n\n\npublic class AppC04 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        ExecutorService executor = Executors.newSingleThreadExecutor();\n\n        // Old syntax\n//        Callable<Integer> callable = new Callable<>() {\n//            @Override\n//            public Integer call() throws Exception {\n//                return getSquared(7);\n//            }\n//        };\n\n        Callable<Integer> callable = () -> getSquared(7);\n        Future<Integer> task = executor.submit(callable);\n\n        executor.shutdown();\n\n        System.out.println(\"Calculating...\");\n\n        Integer result = task.get();\n        System.out.println(result);\n    }\n\n\n    private static int getSquared(int x) {\n        // Calculating in three seconds...\n        try {\n            Thread.sleep(3000);\n        } catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n\n        return x * x;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppC05.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Version C05: The executor service and Future objects - List of Future objects\n */\n\npackage demo11_exec_service;\n\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppC05 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        final int NUM_THREADS = 2;\n        final int NUM_TASKS = 5;\n\n        ExecutorService executor = Executors.newFixedThreadPool(NUM_THREADS);\n\n        System.out.println(\"Begin to submit all tasks\");\n\n\n        // lstTask is List< Future<Character> >\n        var lstTask = IntStream.range(0, NUM_TASKS)\n                .mapToObj(i -> executor.submit(() -> (char)(i + 'A')))\n                .toList();\n\n\n        executor.shutdown();\n\n        for (var task : lstTask) {\n            System.out.println(task.get());\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppC06.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Version C06: The executor service and Future objects - List of Future objects\n */\n\npackage demo11_exec_service;\n\nimport java.util.concurrent.Callable;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppC06 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        final int NUM_THREADS = 2;\n        final int NUM_TASKS = 5;\n\n        ExecutorService executor = Executors.newFixedThreadPool(NUM_THREADS);\n\n\n        // List< Callable<String> > todo\n        var todo = IntStream.range(0, NUM_TASKS)\n                .mapToObj(i -> (Callable<String>)() -> doTask(i))\n                .toList();\n\n\n        System.out.println(\"Begin to submit all tasks\");\n\n        /*\n         * invokeAll() will not return until all the tasks are completed\n         * (i.e., all the Futures in your answer collection will report isDone() if asked)\n         */\n\n        // lstTask is List< Future<String> >\n        var lstTask = executor.invokeAll(todo);\n\n\n        System.out.println(\"All tasks are completed\");\n        executor.shutdown();\n\n\n        for (var task : lstTask) {\n            System.out.println(task.get());\n        }\n    }\n\n\n\n    private static String doTask(int number) {\n        try { Thread.sleep(1000); } catch (InterruptedException e) { }\n        System.out.println(\"Finish \" + number);\n        return number + \" ok\";\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo11_exec_service/AppExtra.java",
    "content": "/*\n * EXECUTOR SERVICES AND THREAD POOLS\n * Demo extra: The fork/join pool\n *\n * ForkJoinPool is the central part of the fork/join framework introduced in Java 7.\n * It solves a common problem of spawning multiple tasks in recursive algorithms.\n * Using a simple ThreadPoolExecutor, you will run out of threads quickly,\n * as every task or subtask requires its own thread to run.\n *\n * In a fork/join framework, any task can spawn (fork) a number of subtasks and wait for their completion\n * using the join method. The benefit of the fork/join framework is that it does not create a new thread\n * for each task or subtask, implementing the Work Stealing algorithm instead.\n * https://www.baeldung.com/thread-pool-java-and-guava\n */\n\npackage demo11_exec_service;\n\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.concurrent.ForkJoinPool;\nimport java.util.concurrent.ForkJoinTask;\nimport java.util.concurrent.RecursiveTask;\nimport java.util.stream.Collectors;\n\n\n\npublic class AppExtra {\n\n    public static void main(String[] args) {\n        var tree =\n                new TreeNode(5,\n                        new TreeNode(-1),\n                        new TreeNode(3,\n                                new TreeNode(6),\n                                new TreeNode(-4))\n        );\n\n        var forkJoinPool = ForkJoinPool.commonPool();\n\n        int result = forkJoinPool.invoke(new TreeSumTask(tree));\n\n        System.out.println(result);\n    }\n\n}\n\n\n\nclass TreeNode {\n    int value;\n    List<TreeNode> children;\n\n    TreeNode(int value, TreeNode... 
children) {\n        this.value = value;\n        this.children = Arrays.asList(children);\n    }\n}\n\n\n\nclass TreeSumTask extends RecursiveTask<Integer> {\n    private static final long serialVersionUID = 1L;\n\n    final TreeNode node;\n\n    TreeSumTask(TreeNode node) {\n        this.node = node;\n    }\n\n    @Override\n    protected Integer compute() {\n        return node.value\n                + node.children.stream()\n                .map(child -> new TreeSumTask(child).fork())\n                .collect(Collectors.summingInt(ForkJoinTask::join));\n    }\n}\n"
  },
  {
    "path": "java/src/demo12_race_condition/AppA.java",
    "content": "/*\n * RACE CONDITIONS\n */\n\npackage demo12_race_condition;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        final int NUM_THREADS = 4;\n\n\n        var lstTh = IntStream.range(0, NUM_THREADS)\n                .mapToObj(i -> new Thread(() -> {\n                    try { Thread.sleep(1000); } catch (InterruptedException e) { }\n                    System.out.print(i);\n                }))\n                .toList();\n\n\n        lstTh.forEach(Thread::start);\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo12_race_condition/AppB01.java",
    "content": "/*\n * DATA RACES\n * Version 01: Without multithreading\n */\n\npackage demo12_race_condition;\n\nimport java.util.Arrays;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) {\n        final int N = 8;\n        int result = getResult(N);\n        System.out.println(\"Number of integers that are divisible by 2 or 3 is: \" + result);\n    }\n\n\n    private static int getResult(int N) {\n        var a = new boolean[N + 1];\n        Arrays.fill(a, false);\n\n        for (int i = 1; i <= N; ++i)\n            if (i % 2 == 0 || i % 3 == 0)\n                a[i] = true;\n\n        int res = countTrue(a, N);\n        return res;\n    }\n\n\n    private static int countTrue(boolean[] a, int N) {\n        int count = 0;\n\n        for (int i = 1; i <= N; ++i)\n            if (a[i])\n                ++count;\n\n        return count;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo12_race_condition/AppB02.java",
    "content": "/*\n * DATA RACES\n * Version 02: Multithreading\n */\n\npackage demo12_race_condition;\n\nimport java.util.Arrays;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int N = 8;\n\n\n        var a = new boolean[N + 1];\n        Arrays.fill(a, false);\n\n\n        var thDiv2 = new Thread(() -> {\n            for (int i = 2; i <= N; i += 2)\n                a[i] = true;\n        });\n\n        var thDiv3 = new Thread(() -> {\n            for (int i = 3; i <= N; i += 3)\n                a[i] = true;\n        });\n\n\n        thDiv2.start();\n        thDiv3.start();\n        thDiv2.join();\n        thDiv3.join();\n\n\n        int result = countTrue(a, N);\n        System.out.println(\"Number of integers that are divisible by 2 or 3 is: \" + result);\n    }\n\n\n    private static int countTrue(boolean[] a, int N) {\n        int count = 0;\n\n        for (int i = 1; i <= N; ++i)\n            if (a[i])\n                ++count;\n\n        return count;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo12_race_condition/AppC01.java",
    "content": "/*\n * RACE CONDITIONS AND DATA RACES\n */\n\npackage demo12_race_condition;\n\nimport java.util.stream.Stream;\n\n\n\npublic class AppC01 {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 16;\n\n\n        var lstTh = Stream.generate(() -> new Thread(() -> {\n\n            try { Thread.sleep(1000); } catch (InterruptedException e) { }\n\n            for (int i = 0; i < 1000; ++i) {\n                Global.counter += 1;\n            }\n\n        })).limit(NUM_THREADS).toList();\n\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n\n        System.out.println(\"counter = \" + Global.counter);\n        /*\n         * We are not sure that counter = 16000\n         */\n    }\n\n\n\n    private static class Global {\n        public static int counter = 0;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo12_race_condition/AppC02.java",
    "content": "/*\n * RACE CONDITIONS AND DATA RACES\n */\n\npackage demo12_race_condition;\n\n\n\npublic class AppC02 {\n\n    public static void main(String[] args) throws InterruptedException {\n        var thA = new Thread(() -> {\n            try { Thread.sleep(1000); } catch (InterruptedException e) { }\n\n            while (Global.counter < 10)\n                ++Global.counter;\n\n            System.out.println(\"A won !!!\");\n        });\n\n\n        var thB = new Thread(() -> {\n            try { Thread.sleep(1000); } catch (InterruptedException e) { }\n\n            while (Global.counter > -10)\n                --Global.counter;\n\n            System.out.println(\"B won !!!\");\n        });\n\n\n        thA.start();\n        thB.start();\n    }\n\n\n\n    private static class Global {\n        public static int counter = 0;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo13_mutex/AppA.java",
    "content": "/*\n * MUTEXES\n */\n\npackage demo13_mutex;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        /*\n         * Unfortunately, Java does not support the mutex feature by default.\n         * You can use a binary semaphore as a mutex.\n         */\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo13_mutex/AppB.java",
    "content": "/*\n * MUTEXES\n *\n * A binary semaphore can be used as a mutex.\n * Without synchronization (by a mutex), we are not sure that counter = 16000.\n */\n\npackage demo13_mutex;\n\nimport java.util.concurrent.Semaphore;\nimport java.util.stream.Stream;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 16;\n\n\n        var lstTh = Stream.generate(() -> new Thread(() -> {\n\n            try {\n                Thread.sleep(1000);\n\n                Global.mutex.acquire();\n\n                for (int i = 0; i < 1000; ++i) {\n                    Global.counter += 1;\n                }\n\n                Global.mutex.release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n\n        })).limit(NUM_THREADS).toList();\n\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n\n        System.out.println(\"counter = \" + Global.counter);\n    }\n\n\n\n    private static class Global {\n        public static Semaphore mutex = new Semaphore(1);\n        public static int counter = 0;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo14_synchronized_block/AppA.java",
    "content": "/*\n * SYNCHRONIZED BLOCKS\n * Version A: Synchronized blocks\n */\n\npackage demo14_synchronized_block;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 16;\n\n        var worker = new MyTask();\n\n        var lstTh = IntStream.range(0, NUM_THREADS).mapToObj(i -> new Thread(worker)).toList();\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n        System.out.println(\"counter = \" + MyTask.counter);\n        /*\n         * We are sure that counter = 16000\n         */\n    }\n\n\n\n    private static class MyTask implements Runnable {\n        public static int counter = 0;\n\n        @Override\n        public void run() {\n            try { Thread.sleep(500); } catch (InterruptedException e) { }\n\n            /*\n             * synchronized (this) means that on \"this\" object,\n             *                    only and only one thread can execute the enclosed block at one time.\n             *\n             * \"this\" is the monitor object, the code inside the block gets synchronized on the monitor object.\n             * Simply put, only one thread per monitor object can execute inside that block of code.\n             */\n\n            synchronized (this) {\n                for (int i = 0; i < 1000; ++i) {\n                    ++counter;\n                }\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo14_synchronized_block/AppB01.java",
    "content": "/*\n * SYNCHRONIZED BLOCKS\n * Version B01: Synchronized instance methods\n */\n\npackage demo14_synchronized_block;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 16;\n\n        var myTask = new MyTask();\n\n        var lstTh = IntStream.range(0, NUM_THREADS).mapToObj(i -> new Thread(myTask)).toList();\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n        System.out.println(\"counter = \" + MyTask.counter);\n        /*\n         * We are sure that counter = 16000\n         */\n    }\n\n\n\n    private static class MyTask implements Runnable {\n        public static int counter = 0;\n\n        @Override\n        public synchronized void run() {\n            try { Thread.sleep(500); } catch (InterruptedException e) { }\n\n            for (int i = 0; i < 1000; ++i) {\n                ++counter;\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo14_synchronized_block/AppB02.java",
    "content": "/*\n * SYNCHRONIZED BLOCKS\n * Version B02: Synchronized instance methods\n *\n * Assume there is a synchronized method \"run\" in object X:\n * Multiple threads associating with X should synchronize (block) when they execute method \"run\".\n *\n * If multiple threads associate with multiple objects (each thread ~ each object),\n *      they will NOT SYNCHRONIZE when execute method \"run\".\n *\n *      In demo code below, we create multiple Worker objects, so the problem happens.\n */\n\npackage demo14_synchronized_block;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 16;\n\n        var lstTh = IntStream.range(0, NUM_THREADS).mapToObj(i -> new Worker()).toList();\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n        System.out.println(\"counter = \" + Worker.counter);\n        /*\n         * We are NOT sure that counter = 16000\n         */\n    }\n\n\n\n    private static class Worker extends Thread {\n        public static int counter = 0;\n\n        @Override\n        public synchronized void run() {\n            try { Thread.sleep(500); } catch (InterruptedException e) { }\n\n            for (int i = 0; i < 1000; ++i) {\n                ++counter;\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo14_synchronized_block/AppC.java",
    "content": "/*\n * SYNCHRONIZED BLOCKS\n * Version C: Synchronized static methods\n */\n\npackage demo14_synchronized_block;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppC {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 16;\n\n        var lstTh = IntStream.range(0, NUM_THREADS).mapToObj(i -> new Worker()).toList();\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n        System.out.println(\"counter = \" + Worker.counter);\n        /*\n         * We are sure that counter = 16000\n         */\n    }\n\n\n\n    private static class Worker extends Thread {\n        public static int counter = 0;\n\n        @Override\n        public void run() {\n            incCounter();\n        }\n\n        private static synchronized void incCounter() {\n            try { Thread.sleep(500); } catch (InterruptedException e) { }\n\n            for (int i = 0; i < 1000; ++i) {\n                ++counter;\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo15_deadlock/AppA.java",
    "content": "/*\n * DEADLOCK\n * Version A\n */\n\npackage demo15_deadlock;\n\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) throws InterruptedException {\n        var mutex = new Semaphore(1);\n\n        var thFoo = new Thread(() -> doTask(mutex, \"foo\"));\n        var thBar = new Thread(() -> doTask(mutex, \"bar\"));\n\n        thFoo.start();\n        thBar.start();\n\n        thFoo.join();\n        thBar.join();\n\n        System.out.println(\"You will never see this statement due to deadlock!\");\n    }\n\n\n    private static void doTask(Semaphore mutex, String name) {\n        try {\n            mutex.acquire();\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n\n        System.out.println(name + \" acquired resource\");\n\n        // mutex.release(); // Forget this statement ==> deadlock\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo15_deadlock/AppB.java",
    "content": "/*\n * DEADLOCK\n * Version B\n */\n\npackage demo15_deadlock;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        final Object resourceA = \"resourceA\";\n        final Object resourceB = \"resourceB\";\n\n\n        var thFoo = new Thread(() -> {\n            synchronized (resourceA) {\n                System.out.println(\"foo acquired resource A\");\n\n                try { Thread.sleep(1000); } catch (InterruptedException e) { }\n\n                synchronized (resourceB) {\n                    System.out.println(\"foo acquired resource B\");\n                }\n            }\n        });\n\n\n        var thBar = new Thread(() -> {\n            synchronized (resourceB) {\n                System.out.println(\"bar acquired resource B\");\n\n                try { Thread.sleep(1000); } catch (InterruptedException e) { }\n\n                synchronized (resourceA) {\n                    System.out.println(\"bar acquired resource A\");\n                }\n            }\n        });\n\n\n        thFoo.start();\n        thBar.start();\n        thFoo.join();\n        thBar.join();\n\n\n        System.out.println(\"You will never see this statement due to deadlock!\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo16_monitor/App.java",
    "content": "/*\n * MONITORS\n * Implementation of a monitor for managing a counter\n */\n\npackage demo16_monitor;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class App {\n\n    public static void main(String[] args) throws InterruptedException {\n        var counter = new Counter();\n        var monitor = new MyMonitor();\n        monitor.init(counter);\n\n        final int NUM_THREADS = 16;\n\n\n        var lstTh = IntStream.range(0, NUM_THREADS).mapToObj(t -> new Thread(() -> {\n\n            try { Thread.sleep(1000); } catch (InterruptedException e) { }\n\n            for (int i = 0; i < 1000; ++i)\n                monitor.increaseCounter();\n\n        })).toList();\n\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n\n        System.out.println(\"counter = \" + counter.value);\n    }\n\n}\n\n\n\nclass Counter {\n    public int value = 0;\n}\n\n\n\nclass MyMonitor {\n    private Counter counter = null;\n\n    public void init(Counter counter) {\n        this.counter = counter;\n    }\n\n    public void increaseCounter() {\n        synchronized (counter) {\n            ++counter.value;\n        }\n    }\n}\n"
  },
  {
    "path": "java/src/demo17_reentrant_lock/AppA01.java",
    "content": "/*\n * REENTRANT LOCKS (RECURSIVE MUTEXES)\n * Version A01: A simple example\n */\n\npackage demo17_reentrant_lock;\n\nimport java.util.concurrent.locks.Lock;\nimport java.util.concurrent.locks.ReentrantLock;\n\n\n\npublic class AppA01 {\n\n    public static void main(String[] args) {\n        Lock lk = new ReentrantLock();\n\n\n        new Thread(() -> {\n\n            lk.lock();\n            System.out.println(\"First time acquiring the resource\");\n\n            lk.lock();\n            System.out.println(\"Second time acquiring the resource\");\n\n            lk.unlock();\n            lk.unlock();\n\n        }).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo17_reentrant_lock/AppA02.java",
    "content": "/*\n * REENTRANT LOCKS (RECURSIVE MUTEXES)\n * Version A02: A multithreaded app example\n */\n\npackage demo17_reentrant_lock;\n\nimport java.util.concurrent.locks.Lock;\nimport java.util.concurrent.locks.ReentrantLock;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppA02 {\n\n    public static void main(String[] args) {\n        final int NUM_THREADS = 3;\n\n        IntStream.range(0, NUM_THREADS).forEach(i -> new Worker((char)(i + 'A')).start());\n    }\n\n\n\n    private static class Worker extends Thread {\n        private static Lock lk = new ReentrantLock();\n\n        private char name;\n\n        public Worker(char name) {\n            this.name = name;\n        }\n\n        @Override\n        public void run() {\n            try { Thread.sleep(1000); } catch (InterruptedException e) { }\n\n            lk.lock();\n            System.out.println(\"First time %c acquiring the resource\".formatted(name));\n\n            lk.lock();\n            System.out.println(\"Second time %c acquiring the resource\".formatted(name));\n\n            lk.unlock();\n            lk.unlock();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo17_reentrant_lock/AppB01.java",
    "content": "/*\n * REENTRANT LOCKS (RECURSIVE MUTEXES)\n * Version B01: A simple example\n *\n * Looking into the lock behind the \"synchronized\" keyword.\n * The lock behind the synchronized methods and blocks is reentrant lock.\n * That is, the current thread can acquire the same synchronized lock over and over again while holding it.\n */\n\npackage demo17_reentrant_lock;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) {\n        final Object resource = \"resource\";\n\n\n        new Thread(() -> {\n            synchronized (resource) {\n                System.out.println(\"First time acquiring the resource\");\n\n                synchronized (resource) {\n                    System.out.println(\"Second time acquiring the resource\");\n                }\n            }\n        }).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo17_reentrant_lock/AppB02.java",
    "content": "/*\n * REENTRANT LOCKS (RECURSIVE MUTEXES)\n * Version B02: A multithreaded app example\n */\n\npackage demo17_reentrant_lock;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 3;\n\n        IntStream.range(0, NUM_THREADS).forEach(i -> new Worker((char)(i + 'A')).start());\n    }\n\n\n\n    private static class Worker extends Thread {\n        private static Object lock = new Object();\n\n        private char name;\n\n        public Worker(char name) {\n            this.name = name;\n        }\n\n        @Override\n        public void run() {\n            try { Thread.sleep(1000); } catch (InterruptedException e) { }\n\n            synchronized (lock) {\n                System.out.println(\"First time %c acquiring the lock\".formatted(name));\n\n                synchronized (lock) {\n                    System.out.println(\"Second time %c acquiring the lock\".formatted(name));\n                }\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo18_barrier_latch/AppA01.java",
    "content": "/*\n * BARRIERS AND LATCHES\n * Version A: Cyclic barriers\n */\n\npackage demo18_barrier_latch;\n\nimport java.util.List;\nimport java.util.concurrent.BrokenBarrierException;\nimport java.util.concurrent.CyclicBarrier;\n\n\n\npublic class AppA01 {\n\n    public static void main(String[] args) {\n        var syncPoint = new CyclicBarrier(3); // participant count = 3\n\n\n        var lstArg = List.of(\n                new ThreadArg(\"lorem\", 1),\n                new ThreadArg(\"ipsum\", 2),\n                new ThreadArg(\"dolor\", 3)\n        );\n\n\n        lstArg.forEach(arg -> new Thread(() -> {\n\n            try {\n                Thread.sleep(1000 * arg.waitTime);\n\n                System.out.println(\"Get request from \" + arg.userName);\n                syncPoint.await();\n\n                System.out.println(\"Process request for \" + arg.userName);\n                syncPoint.await();\n\n                System.out.println(\"Done \" + arg.userName);\n            }\n            catch (InterruptedException | BrokenBarrierException e) {\n                e.printStackTrace();\n            }\n\n        }).start());\n    }\n\n\n\n    private record ThreadArg(String userName, int waitTime) { }\n\n}\n"
  },
  {
    "path": "java/src/demo18_barrier_latch/AppA02.java",
    "content": "/*\n * BARRIERS AND LATCHES\n * Version A: Cyclic barriers\n */\n\npackage demo18_barrier_latch;\n\nimport java.util.List;\nimport java.util.concurrent.BrokenBarrierException;\nimport java.util.concurrent.CyclicBarrier;\n\n\n\npublic class AppA02 {\n\n    public static void main(String[] args) {\n        var syncPoint = new CyclicBarrier(2); // participant count = 2\n\n\n        var lstArg = List.of(\n                new ThreadArg(\"lorem\", 1),\n                new ThreadArg(\"ipsum\", 3),\n                new ThreadArg(\"dolor\", 3),\n                new ThreadArg(\"amet\", 10)\n        );\n\n\n        lstArg.forEach(arg -> new Thread(() -> {\n\n            try {\n                Thread.sleep(1000 * arg.waitTime);\n\n                System.out.println(\"Get request from \" + arg.userName);\n                syncPoint.await();\n\n                System.out.println(\"Process request for \" + arg.userName);\n                syncPoint.await();\n\n                System.out.println(\"Done \" + arg.userName);\n            }\n            catch (InterruptedException | BrokenBarrierException e) {\n                e.printStackTrace();\n            }\n\n        }).start());\n\n\n        // Thread with userName = \"amet\" shall be FREEZED\n    }\n\n\n\n    private record ThreadArg(String userName, int waitTime) { }\n\n}\n"
  },
  {
    "path": "java/src/demo18_barrier_latch/AppA03.java",
    "content": "/*\n * BARRIERS AND LATCHES\n * Version A: Cyclic barriers\n */\n\npackage demo18_barrier_latch;\n\nimport java.util.List;\nimport java.util.concurrent.BrokenBarrierException;\nimport java.util.concurrent.CyclicBarrier;\n\n\n\npublic class AppA03 {\n\n    public static void main(String[] args) {\n        var syncPointA = new CyclicBarrier(2);\n        var syncPointB = new CyclicBarrier(2);\n\n\n        var lstArg = List.of(\n                new ThreadArg(\"lorem\", 1),\n                new ThreadArg(\"ipsum\", 3),\n                new ThreadArg(\"dolor\", 3),\n                new ThreadArg(\"amet\", 10)\n        );\n\n\n        lstArg.forEach(arg -> new Thread(() -> {\n\n            try {\n                Thread.sleep(1000 * arg.waitTime);\n\n                System.out.println(\"Get request from \" + arg.userName);\n                syncPointA.await();\n\n                System.out.println(\"Process request for \" + arg.userName);\n                syncPointB.await();\n\n                System.out.println(\"Done \" + arg.userName);\n            }\n            catch (InterruptedException | BrokenBarrierException e) {\n                e.printStackTrace();\n            }\n\n        }).start());\n    }\n\n\n\n    private record ThreadArg(String userName, int waitTime) { }\n\n}\n"
  },
  {
    "path": "java/src/demo18_barrier_latch/AppB01.java",
    "content": "/*\n * BARRIERS AND LATCHES\n * Version B: Count-down latches\n *\n * Notes:\n * - CyclicBarrier maintains a count of threads whereas CountDownLatch maintains a count of tasks.\n * - Java's CountDownLatch differs from its C++ counterpart (std::latch).\n */\n\npackage demo18_barrier_latch;\n\nimport java.util.List;\nimport java.util.concurrent.CountDownLatch;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) {\n        var syncPoint = new CountDownLatch(3); // participant count = 3\n\n\n        var lstArg = List.of(\n                new ThreadArg(\"lorem\", 1),\n                new ThreadArg(\"ipsum\", 2),\n                new ThreadArg(\"dolor\", 3)\n        );\n\n\n        lstArg.forEach(arg -> new Thread(() -> {\n\n            try {\n                Thread.sleep(1000 * arg.waitTime);\n\n                System.out.println(\"Get request from \" + arg.userName);\n\n                syncPoint.countDown();\n                syncPoint.await();\n\n                System.out.println(\"Done \" + arg.userName);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n\n        }).start());\n    }\n\n\n\n    private record ThreadArg(String userName, int waitTime) { }\n\n}\n"
  },
  {
    "path": "java/src/demo18_barrier_latch/AppB02.java",
    "content": "/*\n * BARRIERS AND LATCHES\n * Version B: Count-down latches\n *\n * Main thread waits for 3 child threads to get enough data to progress.\n */\n\npackage demo18_barrier_latch;\n\nimport java.util.List;\nimport java.util.concurrent.CountDownLatch;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) throws InterruptedException {\n        var lstArg = List.of(\n                new ThreadArg(\"Send request to egg.net to get data\", 6),\n                new ThreadArg(\"Send request to foo.org to get data\", 2),\n                new ThreadArg(\"Send request to bar.com to get data\", 4)\n        );\n\n\n        var syncPoint = new CountDownLatch(lstArg.size());\n\n\n        lstArg.forEach(arg -> new Thread(() -> {\n            try {\n                Thread.sleep(1000 * arg.waitTime);\n\n                System.out.println(arg.message);\n                syncPoint.countDown();\n\n                Thread.sleep(8000);\n                System.out.println(\"Cleanup\");\n            }\n            catch (InterruptedException e) {\n            }\n        }).start());\n\n\n        syncPoint.await();\n\n        System.out.println(\"\\nNow we have enough data to progress to next step\\n\");\n    }\n\n\n\n    private record ThreadArg(String message, int waitTime) { }\n\n}\n"
  },
  {
    "path": "java/src/demo19_read_write_lock/App.java",
    "content": "/*\n * READ-WRITE LOCKS\n */\n\npackage demo19_read_write_lock;\n\nimport java.util.Random;\nimport java.util.concurrent.locks.ReadWriteLock;\nimport java.util.concurrent.locks.ReentrantReadWriteLock;\nimport java.util.stream.IntStream;\nimport java.util.stream.Stream;\n\n\n\npublic class App {\n\n    public static void main(String[] args) {\n        ReadWriteLock rwlock = new ReentrantReadWriteLock();\n\n\n        final int NUM_THREADS_READ = 10;\n        final int NUM_THREADS_WRITE = 4;\n        final int NUM_ARGS = 3;\n\n\n        var lstArg = IntStream.range(0, NUM_ARGS).toArray();\n        var rand = new Random();\n\n\n        var lstThRead = Stream.generate(() -> new Thread(() -> {\n\n            int waitTime = lstArg[ rand.nextInt(lstArg.length) ];\n            try { Thread.sleep(1000 * waitTime); } catch (InterruptedException e) { }\n\n            rwlock.readLock().lock();\n\n            try {\n                System.out.println(\"read: \" + Resource.value);\n            }\n            finally {\n                rwlock.readLock().unlock(); // unlock in \"finally\" so the lock cannot leak\n            }\n\n        })).limit(NUM_THREADS_READ).toList();\n\n\n        var lstThWrite = Stream.generate(() -> new Thread(() -> {\n\n            int waitTime = lstArg[ rand.nextInt(lstArg.length) ];\n            try { Thread.sleep(1000 * waitTime); } catch (InterruptedException e) { }\n\n            rwlock.writeLock().lock();\n\n            try {\n                Resource.value = rand.nextInt(100);\n                System.out.println(\"write: \" + Resource.value);\n            }\n            finally {\n                rwlock.writeLock().unlock(); // unlock in \"finally\" so the lock cannot leak\n            }\n\n        })).limit(NUM_THREADS_WRITE).toList();\n\n\n        lstThRead.forEach(Thread::start);\n        lstThWrite.forEach(Thread::start);\n    }\n\n}\n\n\n\nclass Resource {\n    public static volatile int value;\n}\n"
  },
  {
    "path": "java/src/demo20_semaphore/AppA01.java",
    "content": "/*\n * SEMAPHORES\n * Version A: Paper sheets and packages\n */\n\npackage demo20_semaphore;\n\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppA01 {\n\n    public static void main(String[] args) {\n        var semPackage = new Semaphore(0);\n\n\n        Runnable makeOneSheet = () -> {\n            for (int i = 0; i < 4; ++i) {\n                try {\n                    System.out.println(\"Make 1 sheet\");\n                    Thread.sleep(1000);\n                    semPackage.release();\n                }\n                catch (InterruptedException e) {\n                    e.printStackTrace();\n                }\n            }\n        };\n\n\n        Runnable combineOnePackage = () -> {\n            for (int i = 0; i < 4; ++i) {\n                try {\n                    semPackage.acquire(2);\n                    System.out.println(\"Combine 2 sheets into 1 package\");\n                }\n                catch (InterruptedException e) {\n                    e.printStackTrace();\n                }\n            }\n        };\n\n\n        new Thread(makeOneSheet).start();\n        new Thread(makeOneSheet).start();\n        new Thread(combineOnePackage).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo20_semaphore/AppA02.java",
    "content": "/*\n * SEMAPHORES\n * Version A: Paper sheets and packages\n */\n\npackage demo20_semaphore;\n\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppA02 {\n\n    public static void main(String[] args) {\n        var semPackage = new Semaphore(0);\n        var semSheet = new Semaphore(2);\n\n\n        Runnable makeOneSheet = () -> {\n            for (int i = 0; i < 4; ++i) {\n                try {\n                    semSheet.acquire();\n                    System.out.println(\"Make 1 sheet\");\n                    semPackage.release();\n                }\n                catch (InterruptedException e) {\n                    e.printStackTrace();\n                }\n\n            }\n        };\n\n\n        Runnable combineOnePackage = () -> {\n            for (int i = 0; i < 4; ++i) {\n                try {\n                    semPackage.acquire(2);\n                    System.out.println(\"Combine 2 sheets into 1 package\");\n                    Thread.sleep(1000);\n                    semSheet.release(2);\n                }\n                catch (InterruptedException e) {\n                    e.printStackTrace();\n                }\n            }\n        };\n\n\n        new Thread(makeOneSheet).start();\n        new Thread(makeOneSheet).start();\n        new Thread(combineOnePackage).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo20_semaphore/AppA03.java",
    "content": "/*\n * SEMAPHORES\n * Version A: Paper sheets and packages\n */\n\npackage demo20_semaphore;\n\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppA03 {\n\n    public static void main(String[] args) {\n        var semPackage = new Semaphore(0);\n        var semSheet = new Semaphore(2);\n\n\n        Runnable makeOneSheet = () -> {\n            for (int i = 0; i < 4; ++i) {\n                try {\n                    semSheet.acquire();\n                    System.out.println(\"Make 1 sheet\");\n                    semPackage.release();\n                }\n                catch (InterruptedException e) {\n                    e.printStackTrace();\n                }\n\n            }\n        };\n\n\n        Runnable combineOnePackage = () -> {\n            for (int i = 0; i < 4; ++i) {\n                try {\n                    semPackage.acquire(2);\n\n                    System.out.println(\"Combine 2 sheets into 1 package\");\n                    Thread.sleep(1000);\n\n                    semSheet.release(1);\n                    // DEADLOCK: one release is missing here.\n                    // The call should be semSheet.release(2);\n                }\n                catch (InterruptedException e) {\n                    e.printStackTrace();\n                }\n            }\n        };\n\n\n        new Thread(makeOneSheet).start();\n        new Thread(makeOneSheet).start();\n        new Thread(combineOnePackage).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo20_semaphore/AppB.java",
    "content": "/*\n * SEMAPHORES\n * Version B: Tires and chassis\n */\n\npackage demo20_semaphore;\n\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) {\n        var semTire = new Semaphore(4);\n        var semChassis = new Semaphore(0);\n\n\n        Runnable makeTire = () -> {\n            for (int i = 0; i < 8; ++i) {\n                try {\n                    semTire.acquire();\n\n                    System.out.println(\"Make 1 tire\");\n                    Thread.sleep(1000);\n\n                    semChassis.release();\n                }\n                catch (InterruptedException e) {\n                    e.printStackTrace();\n                }\n\n            }\n        };\n\n\n        Runnable makeChassis = () -> {\n            for (int i = 0; i < 4; ++i) {\n                try {\n                    semChassis.acquire(4);\n\n                    System.out.println(\"Make 1 chassis\");\n                    Thread.sleep(3000);\n\n                    semTire.release(4);\n                }\n                catch (InterruptedException e) {\n                    e.printStackTrace();\n                }\n            }\n        };\n\n\n        new Thread(makeTire).start();\n        new Thread(makeTire).start();\n        new Thread(makeChassis).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo21_condition_variable/AppA01.java",
    "content": "/*\n * CONDITION VARIABLES\n */\n\npackage demo21_condition_variable;\n\n\n\npublic class AppA01 {\n\n    public static void main(String[] args) {\n        var conditionVar = new Object();\n\n\n        Runnable foo = () -> {\n            try {\n                System.out.println(\"foo is waiting...\");\n\n                synchronized (conditionVar) {\n                    // foo must hold conditionVar's monitor lock before calling wait()\n                    conditionVar.wait();\n                }\n\n                System.out.println(\"foo resumed\");\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        };\n\n\n        Runnable bar = () -> {\n            try { Thread.sleep(3000); } catch (InterruptedException e) { }\n\n            synchronized (conditionVar) {\n                // bar must hold conditionVar's monitor lock before calling notify()\n                conditionVar.notify();\n            }\n        };\n\n\n        new Thread(foo).start();\n        new Thread(bar).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo21_condition_variable/AppA02.java",
    "content": "/*\n * CONDITION VARIABLES\n */\n\npackage demo21_condition_variable;\n\n\n\npublic class AppA02 {\n\n    public static void main(String[] args) {\n        var conditionVar = new Object();\n\n\n        Runnable foo = () -> {\n            try {\n                System.out.println(\"foo is waiting...\");\n\n                synchronized (conditionVar) {\n                    conditionVar.wait();\n                }\n\n                System.out.println(\"foo resumed\");\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        };\n\n\n        Runnable bar = () -> {\n            for (int i = 0; i < 3; ++i) {\n                try { Thread.sleep(2000); } catch (InterruptedException e) { }\n\n                synchronized (conditionVar) {\n                    conditionVar.notify();\n                }\n            }\n        };\n\n\n        for (int i = 0; i < 3; ++i)\n            new Thread(foo).start();\n\n        new Thread(bar).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo21_condition_variable/AppA03.java",
    "content": "/*\n * CONDITION VARIABLES\n */\n\npackage demo21_condition_variable;\n\n\n\npublic class AppA03 {\n\n    public static void main(String[] args) {\n        var conditionVar = new Object();\n\n\n        Runnable foo = () -> {\n            try {\n                System.out.println(\"foo is waiting...\");\n\n                synchronized (conditionVar) {\n                    conditionVar.wait();\n                }\n\n                System.out.println(\"foo resumed\");\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        };\n\n\n        Runnable bar = () -> {\n            try { Thread.sleep(3000); } catch (InterruptedException e) { }\n\n            synchronized (conditionVar) {\n                // Notify all waiting threads\n                conditionVar.notifyAll();\n            }\n        };\n\n\n        for (int i = 0; i < 3; ++i)\n            new Thread(foo).start();\n\n        new Thread(bar).start();\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo21_condition_variable/AppB.java",
    "content": "/*\n * CONDITION VARIABLES\n */\n\npackage demo21_condition_variable;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) {\n        new FooThread().start();\n        new EggThread().start();\n    }\n\n}\n\n\n\nclass Global {\n    public static final Object conditionVar = new Object();\n\n    public static int counter = 0;\n\n    public static final int COUNT_HALT_01 = 3;\n    public static final int COUNT_HALT_02 = 6;\n    public static final int COUNT_DONE = 10;\n}\n\n\n\n// Writes numbers 1-3 and 8-10, as permitted by EggThread\nclass FooThread extends Thread {\n    @Override\n    public void run() {\n        for (;;) {\n            synchronized (Global.conditionVar) {\n                try {\n                    Global.conditionVar.wait();\n\n                    Global.counter += 1;\n                    System.out.println(\"foo counter = \" + Global.counter);\n\n                    if (Global.counter >= Global.COUNT_DONE) {\n                        return;\n                    }\n                }\n                catch (InterruptedException e) {\n                    e.printStackTrace();\n                }\n            }\n        }\n    }\n}\n\n\n\n// Writes numbers 4-7\nclass EggThread extends Thread {\n    @Override\n    public void run() {\n        for (;;) {\n            synchronized (Global.conditionVar) {\n                if (Global.counter < Global.COUNT_HALT_01 || Global.counter > Global.COUNT_HALT_02) {\n                    Global.conditionVar.notify();\n                }\n                else {\n                    Global.counter += 1;\n                    System.out.println(\"egg counter = \" + Global.counter);\n                }\n\n                if (Global.counter >= Global.COUNT_DONE) {\n                    return;\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "java/src/demo22_blocking_queue/AppA.java",
    "content": "/*\n * BLOCKING QUEUES\n * Version A: A slow producer and a fast consumer\n */\n\npackage demo22_blocking_queue;\n\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.LinkedBlockingQueue;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        BlockingQueue<String> queue;\n\n        queue = new LinkedBlockingQueue<>();\n        // queue = new ArrayBlockingQueue<>(2); // blocking queue with capacity = 2\n\n        new Thread(() -> producer(queue)).start();\n        new Thread(() -> consumer(queue)).start();\n    }\n\n\n    private static void producer(BlockingQueue<String> queue) {\n        try {\n            Thread.sleep(2000);\n            queue.put(\"Alice\");\n\n            Thread.sleep(2000);\n            queue.put(\"likes\");\n\n            Thread.sleep(2000);\n            queue.put(\"singing\");\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n    private static void consumer(BlockingQueue<String> queue) {\n        String data;\n\n        try {\n            for (int i = 0; i < 3; ++i) {\n                System.out.println(\"\\nWaiting for data...\");\n\n                data = queue.take();\n\n                System.out.println(\"    \" + data);\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo22_blocking_queue/AppB.java",
    "content": "/*\n * BLOCKING QUEUES\n * Version B: A fast producer and a slow consumer\n */\n\npackage demo22_blocking_queue;\n\nimport java.util.concurrent.ArrayBlockingQueue;\nimport java.util.concurrent.BlockingQueue;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) {\n        BlockingQueue<String> queue;\n        queue = new ArrayBlockingQueue<>(2); // blocking queue with capacity = 2\n\n        new Thread(() -> producer(queue)).start();\n        new Thread(() -> consumer(queue)).start();\n    }\n\n\n    private static void producer(BlockingQueue<String> queue) {\n        try {\n            queue.put(\"Alice\");\n            queue.put(\"likes\");\n\n            /*\n             * Because the queue has reached its maximum capacity (2), queue.put(\"singing\")\n             * blocks this thread until the consumer takes an element.\n             */\n\n            queue.put(\"singing\");\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n    private static void consumer(BlockingQueue<String> queue) {\n        String data;\n\n        try {\n            Thread.sleep(2000);\n\n            for (int i = 0; i < 3; ++i) {\n                System.out.println(\"\\nWaiting for data...\");\n\n                data = queue.take();\n\n                System.out.println(\"    \" + data);\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo22_blocking_queue/AppExtra.java",
    "content": "/*\n * BLOCKING QUEUES\n * Version C: Introduction to SynchronousQueue\n *\n * A SynchronousQueue is simply a BlockingQueue with zero capacity.\n * Therefore, each insert operation must wait for a corresponding remove operation by another thread,\n *            and vice versa.\n */\n\npackage demo22_blocking_queue;\n\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.SynchronousQueue;\n\n\n\npublic class AppExtra {\n\n    public static void main(String[] args) {\n        final BlockingQueue<String> queue = new SynchronousQueue<>();\n        new Thread(() -> producer(queue)).start();\n        new Thread(() -> consumer(queue)).start();\n    }\n\n\n    private static void producer(BlockingQueue<String> queue) {\n        try {\n            queue.put(\"Alice\");\n            queue.put(\"likes\");\n            queue.put(\"singing\");\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n    private static void consumer(BlockingQueue<String> queue) {\n        String data;\n\n        try {\n            Thread.sleep(2000);\n\n            for (int i = 0; i < 3; ++i) {\n                System.out.println(\"\\nWaiting for data...\");\n\n                data = queue.take();\n\n                System.out.println(\"    \" + data);\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo23_thread_local/AppA01.java",
    "content": "/*\n * THREAD-LOCAL STORAGE\n * Introduction\n */\n\npackage demo23_thread_local;\n\n\n\npublic class AppA01 {\n\n    public static void main(String[] args) {\n        // Main thread sets value = \"APPLE\"\n        MyTask.set(\"APPLE\");\n        System.out.println(MyTask.get());\n\n\n        // Child thread gets value\n        // Expected output: \"NOT SET\"\n        new Thread(() -> {\n            System.out.println(MyTask.get());\n        }).start();\n    }\n\n\n\n    private static class MyTask {\n        private static final ThreadLocal<String> data = new ThreadLocal<>();\n\n        public static String get() {\n            // If this is the first time getting the data\n            if (null == data.get()) {\n                data.set(\"NOT SET\");\n            }\n\n            return data.get();\n        }\n\n        public static void set(String value) {\n            data.set(value);\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo23_thread_local/AppA02.java",
    "content": "/*\n * THREAD-LOCAL STORAGE\n * Introduction\n *\n * Use ThreadLocal.withInitial for better initialization.\n */\n\npackage demo23_thread_local;\n\n\n\npublic class AppA02 {\n\n    public static void main(String[] args) {\n        // Main thread sets value = \"APPLE\"\n        MyTask.set(\"APPLE\");\n        System.out.println(MyTask.get());\n\n\n        // Child thread gets value\n        // Expected output: \"NOT SET\"\n        new Thread(() -> {\n            System.out.println(MyTask.get());\n        }).start();\n    }\n\n\n\n    private static class MyTask {\n        private static final ThreadLocal<String> data\n                = ThreadLocal.withInitial(() -> \"NOT SET\");\n\n        public static String get() {\n            return data.get();\n        }\n\n        public static void set(String value) {\n            data.set(value);\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo23_thread_local/AppB.java",
    "content": "/*\n * THREAD-LOCAL STORAGE\n * Avoiding synchronization using thread-local storage\n */\n\npackage demo23_thread_local;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 3;\n\n\n        var lstTh = IntStream.range(0, NUM_THREADS).mapToObj(t -> new Thread(() -> {\n            try { Thread.sleep(1000); } catch (InterruptedException e) { }\n\n            for (int i = 0; i < 1000; ++i)\n                MyTask.increaseCounter();\n\n            System.out.println(\"Thread \" + t + \" gives counter = \" + MyTask.getCounter());\n        })).toList();\n\n\n        lstTh.forEach(Thread::start);\n\n\n        /*\n         * By using thread-local storage, each thread has its own counter.\n         * So each thread's counter is completely independent of the others.\n         *\n         * Thread-local storage helps us to AVOID SYNCHRONIZATION.\n         */\n    }\n\n\n\n    private static class Counter {\n        public int value = 0;\n    }\n\n\n\n    private static class MyTask {\n        private static final ThreadLocal<Counter> thlCounter\n                = ThreadLocal.withInitial(() -> new Counter());\n\n        public static int getCounter() {\n            return thlCounter.get().value;\n        }\n\n        public static void increaseCounter() {\n            var counter = thlCounter.get();\n            ++counter.value;\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo24_volatile/App.java",
    "content": "/*\n * THE VOLATILE KEYWORD\n */\n\npackage demo24_volatile;\n\n\n\npublic class App {\n\n    public static void main(String[] args) throws InterruptedException {\n        Global.isRunning = true;\n        new Thread(() -> doTask()).start();\n\n        Thread.sleep(6000);\n        Global.isRunning = false;\n    }\n\n\n    private static void doTask() {\n        while (Global.isRunning) {\n            System.out.println(\"Running...\");\n            try { Thread.sleep(2000); } catch (InterruptedException e) { }\n        }\n    }\n\n\n\n    private static class Global {\n        public static volatile boolean isRunning;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo25_atomic/AppA.java",
    "content": "/*\n * ATOMIC ACCESS\n */\n\npackage demo25_atomic;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) throws InterruptedException {\n        Runnable doTask = () -> {\n            try { Thread.sleep(1000); } catch (InterruptedException e) { }\n            Global.counter += 1;\n        };\n\n\n        var lstTh = IntStream.range(0, 1000).mapToObj(i -> new Thread(doTask)).toList();\n\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n\n        // Unpredictable result\n        System.out.println(\"counter = \" + Global.counter);\n    }\n\n\n\n    private static class Global {\n        public static volatile int counter = 0;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demo25_atomic/AppB.java",
    "content": "/*\n * ATOMIC ACCESS\n */\n\npackage demo25_atomic;\n\nimport java.util.concurrent.atomic.AtomicInteger;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        Runnable doTask = () -> {\n            try { Thread.sleep(1000); } catch (InterruptedException e) { }\n            Global.counter.incrementAndGet();\n            // Global.counter.addAndGet(1);\n        };\n\n\n        var lstTh = IntStream.range(0, 1000).mapToObj(i -> new Thread(doTask)).toList();\n\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n\n        // counter = 1000\n        System.out.println(\"counter = \" + Global.counter);\n    }\n\n\n\n    private static class Global {\n        public static AtomicInteger counter = new AtomicInteger(0);\n    }\n\n}\n"
  },
  {
    "path": "java/src/demoex/async/AppA.java",
    "content": "/*\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\n */\n\npackage demoex.async;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        System.out.println(\"Please review demo: Thread pool, version C\");\n    }\n\n}\n"
  },
  {
    "path": "java/src/demoex/async/AppB01.java",
    "content": "/*\n * ASYNCHRONOUS PROGRAMMING WITH FUTURE/TASK\n *\n * The app takes about 1000 milliseconds to run\n * because all 3 tasks run simultaneously.\n */\n\npackage demoex.async;\n\nimport java.time.Duration;\nimport java.time.Instant;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        // A pool of 5 threads, which lets us run tasks asynchronously\n        ExecutorService executor = Executors.newFixedThreadPool(5);\n\n        var timePointStart = Instant.now();\n\n        Future<Void> taskA = executor.submit(() -> doAction(\"cooking eggs\"));\n        Future<Void> taskB = executor.submit(() -> doAction(\"making coffee\"));\n        Future<Void> taskC = executor.submit(() -> doAction(\"watching movies\"));\n\n        taskA.get();\n        taskB.get();\n        taskC.get();\n\n        var timePointEnd = Instant.now();\n        var duration = Duration.between(timePointStart, timePointEnd).toMillis();\n\n        executor.shutdown();\n\n        System.out.println(\"Total time: \" + duration + \" millis\");\n    }\n\n\n    private static Void doAction(String actionName) {\n        System.out.println(actionName);\n\n        // Doing action in one second...\n        try {\n            Thread.sleep(1000);\n        } catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n\n        return null;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demoex/async/AppB02.java",
    "content": "/*\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\n *\n * The app takes about 3000 milliseconds to run\n * because each call to \"Future.get\" blocks the app until that task finishes.\n */\n\npackage demoex.async;\n\nimport java.time.Duration;\nimport java.time.Instant;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        ExecutorService executor = Executors.newFixedThreadPool(5);\n\n        var timePointStart = Instant.now();\n\n        Future<Void> taskA = executor.submit(() -> doAction(\"cooking eggs\"));\n        taskA.get();\n\n        Future<Void> taskB = executor.submit(() -> doAction(\"making coffee\"));\n        taskB.get();\n\n        Future<Void> taskC = executor.submit(() -> doAction(\"watching movies\"));\n        taskC.get();\n\n        var timePointEnd = Instant.now();\n        var duration = Duration.between(timePointStart, timePointEnd).toMillis();\n\n        executor.shutdown();\n\n        System.out.println(\"Total time: \" + duration + \" millis\");\n    }\n\n\n    private static Void doAction(String actionName) {\n        System.out.println(actionName);\n\n        // Doing action in one second...\n        try {\n            Thread.sleep(1000);\n        } catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n\n        return null;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demoex/async/AppB03.java",
    "content": "/*\n * ASYNCHRONOUS PROGRAMMING WITH FUTURE/TASK\n */\n\npackage demoex.async;\n\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\n\n\npublic class AppB03 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        ExecutorService executor = Executors.newFixedThreadPool(5);\n\n        Future<String> taskCookingEggs = executor.submit(() -> cookEggs());\n        Future<String> taskMakingCoffee = executor.submit(() -> makeCoffee());\n\n        executor.shutdown();\n\n        String resultEggs = taskCookingEggs.get();\n        String resultCoffee = taskMakingCoffee.get();\n\n        System.out.println(\"Done!\");\n        System.out.println(resultEggs);\n        System.out.println(resultCoffee);\n    }\n\n\n    private static String cookEggs() {\n        System.out.println(\"I am cooking eggs\");\n\n        // Cooking eggs in two seconds...\n        try {\n            Thread.sleep(2000);\n        } catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n\n        return \"fried eggs\";\n    }\n\n\n    private static String makeCoffee() {\n        System.out.println(\"I am making coffee\");\n\n        // Making coffee in four seconds...\n        try {\n            Thread.sleep(4000);\n        } catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n\n        return \"a cup of coffee\";\n    }\n\n}\n"
  },
  {
    "path": "java/src/demoex/async/AppB04.java",
    "content": "/*\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\n */\n\npackage demoex.async;\n\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\n\n\npublic class AppB04 {\n\n    private static final ExecutorService executor = Executors.newFixedThreadPool(5);\n\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        Future<Boolean> taskValidation = executor.submit(() -> validate(\"John\"));\n\n        boolean result = taskValidation.get();\n\n        if (result)\n            System.out.println(\"User can view movies.\");\n        else\n            System.out.println(\"Age must be >= 18 to view movies.\");\n\n        executor.shutdown();\n    }\n\n\n    private static boolean validate(String userName) throws InterruptedException, ExecutionException {\n        var taskGettingAge = executor.submit(() -> queryUserAge(userName));\n        int userAge = taskGettingAge.get();\n\n        System.out.println(\"Validating...\");\n\n        // Validating in two seconds...\n        try {\n            Thread.sleep(2000);\n        } catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n\n        return userAge >= 18;\n    }\n\n\n    private static int queryUserAge(String userName) {\n        System.out.println(\"Querying userAge in database...\");\n\n        // Querying database in two seconds...\n        try {\n            Thread.sleep(2000);\n        } catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n\n        if (userName.equals(\"Thanh\"))\n            return 26;\n        else\n            return 17;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demoex/async/AppC01.java",
    "content": "/*\n * ASYNCHRONOUS PROGRAMMING WITH FUTURE/TASK\n *\n * From Javadoc: CompletableFuture is a Future that may be explicitly completed\n * (setting its value and status), and may be used as a CompletionStage,\n * supporting dependent functions and actions that trigger upon its completion.\n *\n * From that definition, CompletableFuture can be called as \"Promise\".\n */\n\npackage demoex.async;\n\nimport java.util.concurrent.CompletableFuture;\nimport java.util.concurrent.ExecutionException;\n\n\n\npublic class AppC01 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        CompletableFuture<Integer> task;\n\n        task = CompletableFuture\n                .supplyAsync(() -> getSquared(7))\n                .thenApply(AppC01::getDiv2);\n\n        Integer result = task.get();\n        System.out.println(result);\n    }\n\n\n    private static int getSquared(int x) {\n        return x * x;\n    }\n\n\n    private static int getDiv2(int x) {\n        return x / 2;\n    }\n\n}\n"
  },
  {
    "path": "java/src/demoex/async/AppC02.java",
    "content": "/*\n * ASYNCHRONOUS PROGRAMMING WITH THE FUTURE/TASK\n */\n\npackage demoex.async;\n\nimport java.util.concurrent.CompletableFuture;\nimport java.util.concurrent.ExecutionException;\n\n\n\npublic class AppC02 {\n\n    public static void main(String[] args) throws InterruptedException, ExecutionException {\n        CompletableFuture<Void> task;\n\n        task = CompletableFuture\n                .supplyAsync(() -> getSquared(7))\n                .thenApply(AppC02::getDiv2)\n                .thenAccept(System.out::println);\n\n        task.get();\n    }\n\n\n    private static int getSquared(int x) {\n        return x * x;\n    }\n\n\n    private static int getDiv2(int x) {\n        return x / 2;\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer01_max_div/AppA.java",
    "content": "/*\n * MAXIMUM NUMBER OF DIVISORS\n */\n\npackage exer01_max_div;\n\nimport java.time.Duration;\nimport java.time.Instant;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        final int RANGE_START = 1;\n        final int RANGE_END = 100000;\n\n        int resValue = 0;\n        int resNumDiv = 0;  // number of divisors of result\n\n        var tpStart = Instant.now();\n\n\n        for (int i = RANGE_START; i <= RANGE_END; ++i) {\n            int numDiv = 0;\n\n            for (int j = i / 2; j > 0; --j)\n                if (i % j == 0)\n                    ++numDiv;\n\n            if (resNumDiv < numDiv) {\n                resNumDiv = numDiv;\n                resValue = i;\n            }\n        }\n\n\n        var timeElapsed = Duration.between(tpStart, Instant.now());\n\n        System.out.println(\"The integer which has largest number of divisors is \" + resValue);\n        System.out.println(\"The largest number of divisor is \" + resNumDiv);\n        System.out.println(\"Time elapsed = \" + timeElapsed.toNanos() / 1e9);\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer01_max_div/AppB.java",
    "content": "/*\n * MAXIMUM NUMBER OF DIVISORS\n */\n\npackage exer01_max_div;\n\nimport java.time.Duration;\nimport java.time.Instant;\nimport java.util.ArrayList;\nimport java.util.List;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int RANGE_START = 1;\n        final int RANGE_END = 100000;\n        final int NUM_THREADS = 8;\n\n\n        var lstWorkerArg = prepareArg(RANGE_START, RANGE_END, NUM_THREADS);\n        final var lstWorkerRes = new ArrayList<WorkerResult>();\n\n\n        var lstWorker = lstWorkerArg.stream().map(arg -> new Thread(() -> {\n\n            int resValue = 0;\n            int resNumDiv = 0;\n\n            for (int i = arg.iStart; i <= arg.iEnd; ++i) {\n                int numDiv = 0;\n\n                for (int j = i / 2; j > 0; --j)\n                    if (i % j == 0)\n                        ++numDiv;\n\n                if (resNumDiv < numDiv) {\n                    resNumDiv = numDiv;\n                    resValue = i;\n                }\n            }\n\n            synchronized (lstWorkerRes) {\n                lstWorkerRes.add(new WorkerResult(resValue, resNumDiv));\n            }\n\n        })).toList();\n\n\n        var tpStart = Instant.now();\n\n        for (var worker : lstWorker)\n            worker.start();\n\n        for (var worker : lstWorker)\n            worker.join();\n\n        var finalRes = lstWorkerRes.stream().max((lhs, rhs) -> lhs.numDiv - rhs.numDiv).get();\n\n        var timeElapsed = Duration.between(tpStart, Instant.now());\n\n\n        System.out.println(\"The integer which has largest number of divisors is \" + finalRes.value);\n        System.out.println(\"The largest number of divisor is \" + finalRes.numDiv);\n        System.out.println(\"Time elapsed = \" + timeElapsed.toNanos() / 1e9);\n\n\n        /*\n         * BETTER WAY (avoiding synchronization of lstWorkerRes):\n         *\n         * - Initialize lstWorkerRes with null 
objects.\n         *   Of course, the number of objects is NUM_THREADS.\n         *\n         * - In thread function:\n         *   lstWorkerRes.set(threadIndex, new WorkerResult(resValue, resNumDiv));\n         */\n    }\n\n\n    private static List<WorkerArg> prepareArg(int rangeStart, int rangeEnd, int numThreads) {\n        int rangeA, rangeB, rangeBlock;\n\n        rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\n        rangeA = rangeStart;\n\n        var lstWorkerArg = new ArrayList<WorkerArg>();\n\n        for (int i = 0; i < numThreads; ++i, rangeA += rangeBlock) {\n            rangeB = rangeA + rangeBlock - 1;\n\n            if (i == numThreads - 1)\n                rangeB = rangeEnd;\n\n            lstWorkerArg.add(new WorkerArg(rangeA, rangeB));\n        }\n\n        return lstWorkerArg;\n    }\n\n\n\n    private record WorkerArg(int iStart, int iEnd) { }\n\n\n\n    private record WorkerResult(int value, int numDiv) { }\n\n}\n"
  },
  {
    "path": "java/src/exer01_max_div/AppC.java",
    "content": "/*\n * MAXIMUM NUMBER OF DIVISORS\n */\n\npackage exer01_max_div;\n\nimport java.time.Duration;\nimport java.time.Instant;\nimport java.util.ArrayList;\nimport java.util.List;\n\n\n\npublic class AppC {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int RANGE_START = 1;\n        final int RANGE_END = 100000;\n        final int NUM_THREADS = 8;\n\n\n        var lstWorkerArg = prepareArg(RANGE_START, RANGE_END, NUM_THREADS);\n        var finalRes = new WorkerResult();\n\n\n        var lstWorker = lstWorkerArg.stream().map(arg -> new Thread(() -> {\n\n            int resValue = 0;\n            int resNumDiv = 0;\n\n            for (int i = arg.iStart; i <= arg.iEnd; ++i) {\n                int numDiv = 0;\n\n                for (int j = i / 2; j > 0; --j)\n                    if (i % j == 0)\n                        ++numDiv;\n\n                if (resNumDiv < numDiv) {\n                    resNumDiv = numDiv;\n                    resValue = i;\n                }\n            }\n\n            finalRes.update(resValue, resNumDiv);\n\n        })).toList();\n\n\n        var tpStart = Instant.now();\n\n\n        for (var worker : lstWorker)\n            worker.start();\n\n        for (var worker : lstWorker)\n            worker.join();\n\n\n        var timeElapsed = Duration.between(tpStart, Instant.now());\n\n\n        System.out.println(\"The integer which has largest number of divisors is \" + finalRes.value);\n        System.out.println(\"The largest number of divisor is \" + finalRes.numDiv);\n        System.out.println(\"Time elapsed = \" + timeElapsed.toNanos() / 1e9);\n    }\n\n\n    private static List<WorkerArg> prepareArg(int rangeStart, int rangeEnd, int numThreads) {\n        int rangeA, rangeB, rangeBlock;\n\n        rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\n        rangeA = rangeStart;\n\n        var lstWorkerArg = new ArrayList<WorkerArg>();\n\n        for (int i = 0; i < numThreads; 
++i, rangeA += rangeBlock) {\n            rangeB = rangeA + rangeBlock - 1;\n\n            if (i == numThreads - 1)\n                rangeB = rangeEnd;\n\n            lstWorkerArg.add(new WorkerArg(rangeA, rangeB));\n        }\n\n        return lstWorkerArg;\n    }\n\n\n\n    private record WorkerArg(int iStart, int iEnd) { }\n\n\n\n    private static class WorkerResult {\n        public int value = 0;\n        public int numDiv = 0;\n\n        public void update(int value, int numDiv) {\n            synchronized (this) {\n                if (this.numDiv < numDiv) {\n                    this.numDiv = numDiv;\n                    this.value = value;\n                }\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer02_producer_consumer/AppA01.java",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE A: USING BLOCKING QUEUES\n *      Version A01: 1 slow producer, 1 fast consumer\n */\n\npackage exer02_producer_consumer;\n\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.LinkedBlockingQueue;\n\n\n\npublic class AppA01 {\n\n    public static void main(String[] args) {\n        var queue = new LinkedBlockingQueue<Integer>();\n        new Thread(() -> producer(queue)).start();\n        new Thread(() -> consumer(queue)).start();\n    }\n\n\n    private static void producer(BlockingQueue<Integer> queue) {\n        int i = 1;\n\n        for (;; ++i) {\n            try {\n                queue.put(i);\n                Thread.sleep(1000);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n\n    private static void consumer(BlockingQueue<Integer> queue) {\n        for (;;) {\n            try {\n                int data = queue.take();\n                System.out.println(\"Consumer \" + data);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer02_producer_consumer/AppA02.java",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE A: USING BLOCKING QUEUES\n *      Version A02: 2 slow producers, 1 fast consumer\n */\n\npackage exer02_producer_consumer;\n\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.LinkedBlockingQueue;\n\n\n\npublic class AppA02 {\n\n    public static void main(String[] args) {\n        var queue = new LinkedBlockingQueue<Integer>();\n\n        new Thread(() -> producer(queue)).start();\n        new Thread(() -> producer(queue)).start();\n\n        new Thread(() -> consumer(queue)).start();\n    }\n\n\n    private static void producer(BlockingQueue<Integer> queue) {\n        int i = 1;\n\n        for (;; ++i) {\n            try {\n                queue.put(i);\n                Thread.sleep(1000);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n\n    private static void consumer(BlockingQueue<Integer> queue) {\n        for (;;) {\n            try {\n                int data = queue.take();\n                System.out.println(\"Consumer \" + data);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer02_producer_consumer/AppA03.java",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE A: USING BLOCKING QUEUES\n *      Version A03: 1 slow producer, 2 fast consumers\n */\n\npackage exer02_producer_consumer;\n\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.LinkedBlockingQueue;\n\n\n\npublic class AppA03 {\n\n    public static void main(String[] args) {\n        var queue = new LinkedBlockingQueue<Integer>();\n\n        new Thread(() -> producer(queue)).start();\n\n        new Thread(() -> consumer(\"foo\", queue)).start();\n        new Thread(() -> consumer(\"bar\", queue)).start();\n    }\n\n\n    private static void producer(BlockingQueue<Integer> queue) {\n        int i = 1;\n\n        for (;; ++i) {\n            try {\n                queue.put(i);\n                Thread.sleep(1000);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n\n    private static void consumer(String name, BlockingQueue<Integer> queue) {\n        for (;;) {\n            try {\n                int data = queue.take();\n                System.out.println(\"Consumer %s: %d\".formatted(name, data));\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer02_producer_consumer/AppA04.java",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE A: USING BLOCKING QUEUES\n *      Version A04: Multiple fast producers, multiple slow consumers\n */\n\npackage exer02_producer_consumer;\n\nimport java.util.concurrent.ArrayBlockingQueue;\nimport java.util.concurrent.BlockingQueue;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppA04 {\n\n    public static void main(String[] args) {\n        var queue = new ArrayBlockingQueue<Integer>(5);\n\n\n        final int NUM_PRODUCERS = 3;\n        final int NUM_CONSUMERS = 2;\n\n\n        IntStream.range(0, NUM_PRODUCERS).forEach(\n                i -> new Thread(() -> producer(queue, i * 1000)).start()\n        );\n\n        IntStream.range(0, NUM_CONSUMERS).forEach(\n                i -> new Thread(() -> consumer(queue)).start()\n        );\n    }\n\n\n    private static void producer(BlockingQueue<Integer> queue, int startValue) {\n        int i = 1;\n\n        for (;; ++i) {\n            try {\n                queue.put(i + startValue);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n\n    private static void consumer(BlockingQueue<Integer> queue) {\n        for (;;) {\n            try {\n                int data = queue.take();\n                System.out.println(\"Consumer \" + data);\n                Thread.sleep(1000);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer02_producer_consumer/AppB01.java",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE B: USING SEMAPHORES\n *      Version B01: 1 slow producer, 1 fast consumer\n */\n\npackage exer02_producer_consumer;\n\nimport java.util.LinkedList;\nimport java.util.Queue;\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) {\n        var semFill = new Semaphore(0);     // item produced\n        var semEmpty = new Semaphore(1);    // remaining space in queue\n\n        Queue<Integer> queue = new LinkedList<>();\n\n        new Thread(() -> producer(semFill, semEmpty, queue)).start();\n        new Thread(() -> consumer(semFill, semEmpty, queue)).start();\n    }\n\n\n    private static void producer(Semaphore semFill, Semaphore semEmpty, Queue<Integer> queue) {\n        int i = 1;\n\n        for (;; ++i) {\n            try {\n                semEmpty.acquire();\n\n                queue.add(i);\n                Thread.sleep(1000);\n\n                semFill.release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n\n    private static void consumer(Semaphore semFill, Semaphore semEmpty, Queue<Integer> queue) {\n        for (;;) {\n            try {\n                semFill.acquire();\n\n                int data = queue.remove();\n                System.out.println(\"Consumer \" + data);\n\n                semEmpty.release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer02_producer_consumer/AppB02.java",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE B: USING SEMAPHORES\n *      Version B02: 2 slow producers, 1 fast consumer\n */\n\npackage exer02_producer_consumer;\n\nimport java.util.LinkedList;\nimport java.util.Queue;\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) {\n        var semFill = new Semaphore(0);     // item produced\n        var semEmpty = new Semaphore(1);    // remaining space in queue\n\n        Queue<Integer> queue = new LinkedList<>();\n\n        new Thread(() -> producer(semFill, semEmpty, queue, 0)).start();\n        new Thread(() -> producer(semFill, semEmpty, queue, 1000)).start();\n\n        new Thread(() -> consumer(semFill, semEmpty, queue)).start();\n    }\n\n\n    private static void producer(Semaphore semFill, Semaphore semEmpty,\n                                 Queue<Integer> queue, int startValue) {\n        int i = 1;\n\n        for (;; ++i) {\n            try {\n                semEmpty.acquire();\n\n                queue.add(i + startValue);\n                Thread.sleep(1000);\n\n                semFill.release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n\n    private static void consumer(Semaphore semFill, Semaphore semEmpty, Queue<Integer> queue) {\n        for (;;) {\n            try {\n                semFill.acquire();\n\n                int data = queue.remove();\n                System.out.println(\"Consumer \" + data);\n\n                semEmpty.release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer02_producer_consumer/AppB03.java",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE B: USING SEMAPHORES\n *      Version B03: 2 fast producers, 1 slow consumer\n */\n\npackage exer02_producer_consumer;\n\nimport java.util.LinkedList;\nimport java.util.Queue;\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppB03 {\n\n    public static void main(String[] args) {\n        var semFill = new Semaphore(0);     // item produced\n        var semEmpty = new Semaphore(1);    // remaining space in queue\n\n        Queue<Integer> queue = new LinkedList<>();\n\n        new Thread(() -> producer(semFill, semEmpty, queue, 0)).start();\n        new Thread(() -> producer(semFill, semEmpty, queue, 1000)).start();\n\n        new Thread(() -> consumer(semFill, semEmpty, queue)).start();\n    }\n\n\n    private static void producer(Semaphore semFill, Semaphore semEmpty,\n                                 Queue<Integer> queue, int startValue) {\n        int i = 1;\n\n        for (;; ++i) {\n            try {\n                semEmpty.acquire();\n                queue.add(i + startValue);\n                semFill.release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n\n    private static void consumer(Semaphore semFill, Semaphore semEmpty, Queue<Integer> queue) {\n        for (;;) {\n            try {\n                semFill.acquire();\n\n                int data = queue.remove();\n                System.out.println(\"Consumer \" + data);\n                Thread.sleep(1000);\n\n                semEmpty.release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer02_producer_consumer/AppB04.java",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE B: USING SEMAPHORES\n *      Version B04: Multiple fast producers, multiple slow consumers\n */\n\npackage exer02_producer_consumer;\n\nimport java.util.LinkedList;\nimport java.util.Queue;\nimport java.util.concurrent.Semaphore;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB04 {\n\n    public static void main(String[] args) {\n        var semFill = new Semaphore(0);     // item produced\n        var semEmpty = new Semaphore(1);    // remaining space in queue\n\n        Queue<Integer> queue = new LinkedList<>();\n\n\n        final int NUM_PRODUCERS = 3;\n        final int NUM_CONSUMERS = 2;\n\n\n        IntStream.range(0, NUM_PRODUCERS).forEach(\n                i -> new Thread(() -> producer(semFill, semEmpty, queue, i * 1000)).start()\n        );\n\n        IntStream.range(0, NUM_CONSUMERS).forEach(\n                i -> new Thread(() -> consumer(semFill, semEmpty, queue)).start()\n        );\n    }\n\n\n    private static void producer(Semaphore semFill, Semaphore semEmpty,\n                                 Queue<Integer> queue, int startValue) {\n        int i = 1;\n\n        for (;; ++i) {\n            try {\n                semEmpty.acquire();\n                queue.add(i + startValue);\n                semFill.release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n\n    private static void consumer(Semaphore semFill, Semaphore semEmpty, Queue<Integer> queue) {\n        for (;;) {\n            try {\n                semFill.acquire();\n\n                int data = queue.remove();\n                System.out.println(\"Consumer \" + data);\n                Thread.sleep(1000);\n\n                semEmpty.release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer02_producer_consumer/AppC.java",
    "content": "/*\n * THE PRODUCER-CONSUMER PROBLEM\n *\n * SOLUTION TYPE C: USING CONDITION VARIABLES & MONITORS\n *      Multiple fast producers, multiple slow consumers\n */\n\npackage exer02_producer_consumer;\n\nimport java.util.LinkedList;\nimport java.util.Queue;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppC {\n\n    public static void main(String[] args) {\n        var monitor = new ProdConsMonitor<Integer>();\n        Queue<Integer> queue = new LinkedList<>();\n\n\n        final int MAX_QUEUE_SIZE = 6;\n        final int NUM_PRODUCERS = 3;\n        final int NUM_CONSUMERS = 2;\n\n\n        monitor.init(MAX_QUEUE_SIZE, queue);\n\n\n        IntStream.range(0, NUM_PRODUCERS).forEach(\n                i -> new Thread(() -> producer(monitor, i * 1000)).start()\n        );\n\n        IntStream.range(0, NUM_CONSUMERS).forEach(\n                i -> new Thread(() -> consumer(monitor)).start()\n        );\n    }\n\n\n    private static void producer(ProdConsMonitor<Integer> monitor, int startValue) {\n        int i = 1;\n\n        for (;; ++i) {\n            try {\n                monitor.add(i + startValue);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n\n    private static void consumer(ProdConsMonitor<Integer> monitor) {\n        for (;;) {\n            try {\n                int data = monitor.remove();\n                System.out.println(\"Consumer \" + data);\n                Thread.sleep(1000);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n}\n\n\n\nclass ProdConsMonitor<T> {\n    private Queue<T> queue;\n    private int maxQueueSize;\n\n    private Object condFull = new Object();\n    private Object condEmpty = new Object();\n\n\n    public void init(int maxQueueSize, Queue<T> queue) {\n        this.maxQueueSize = maxQueueSize;\n        this.queue = queue;\n    
}\n\n\n    public void add(T item) throws InterruptedException {\n        synchronized (condFull) {\n            while (queue.size() == maxQueueSize) {\n                condFull.wait();\n            }\n\n            synchronized (queue) {\n                queue.add(item);\n            }\n        }\n\n        synchronized (condEmpty) {\n            if (queue.size() == 1) {\n                condEmpty.notify();\n            }\n        }\n    }\n\n\n    public T remove() throws InterruptedException {\n        T item = null;\n\n        synchronized (condEmpty) {\n            while (queue.size() == 0) {\n                condEmpty.wait();\n            }\n\n            synchronized (queue) {\n                item = queue.remove();\n            }\n        }\n\n        synchronized (condFull) {\n            if (queue.size() == maxQueueSize - 1) {\n                condFull.notify();\n            }\n        }\n\n        return item;\n    }\n}\n"
  },
  {
    "path": "java/src/exer03_readers_writers/AppA.java",
    "content": "/*\n * THE READERS-WRITERS PROBLEM\n * Solution for the first readers-writers problem\n */\n\npackage exer03_readers_writers;\n\nimport java.util.Random;\nimport java.util.concurrent.Semaphore;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        final int NUM_READERS = 8;\n        final int NUM_WRITERS = 6;\n\n\n        var rand = new Random();\n\n\n        var lstThReader = IntStream.range(0, NUM_READERS).mapToObj(i -> new Thread(() -> {\n            try {\n                doTaskReader(rand.nextInt(3));\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }));\n\n\n        var lstThWriter = IntStream.range(0, NUM_WRITERS).mapToObj(i -> new Thread(() -> {\n            try {\n                doTaskWriter(rand.nextInt(3));\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }));\n\n\n        lstThReader.forEach(Thread::start);\n        lstThWriter.forEach(Thread::start);\n    }\n\n\n    private static void doTaskWriter(int delayTime) throws InterruptedException {\n        var rand = new Random();\n        Thread.sleep(1000 * delayTime);\n\n        Global.mutResource.acquire();\n\n        Global.resource = rand.nextInt(100);\n        System.out.println(\"Write \" + Global.resource);\n\n        Global.mutResource.release();\n    }\n\n\n    private static void doTaskReader(int delayTime) throws InterruptedException {\n        Thread.sleep(1000 * delayTime);\n\n        // Increase reader count\n        synchronized (Global.mutReaderCount) {\n            Global.readerCount += 1;\n\n            if (1 == Global.readerCount)\n                Global.mutResource.acquire();\n        }\n\n        // Do the reading\n        int data = Global.resource;\n        System.out.println(\"Read \" + data);\n\n        // Decrease reader count\n        synchronized 
(Global.mutReaderCount) {\n            Global.readerCount -= 1;\n\n            if (0 == Global.readerCount)\n                Global.mutResource.release();\n        }\n    }\n\n\n\n    private static class Global {\n        public static final Semaphore mutResource = new Semaphore(1);\n        public static final Object mutReaderCount = new Object();\n\n        public static volatile int resource = 0;\n        public static int readerCount = 0;\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer03_readers_writers/AppB.java",
    "content": "/*\n * THE READERS-WRITERS PROBLEM\n * Solution for the third readers-writers problem\n */\n\npackage exer03_readers_writers;\n\nimport java.util.Random;\nimport java.util.concurrent.Semaphore;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) {\n        final int NUM_READERS = 8;\n        final int NUM_WRITERS = 6;\n\n\n        var rand = new Random();\n\n\n        var lstThReader = IntStream.range(0, NUM_READERS).mapToObj(i -> new Thread(() -> {\n            try {\n                doTaskReader(rand.nextInt(3));\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }));\n\n\n        var lstThWriter = IntStream.range(0, NUM_WRITERS).mapToObj(i -> new Thread(() -> {\n            try {\n                doTaskWriter(rand.nextInt(3));\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }));\n\n\n        lstThReader.forEach(Thread::start);\n        lstThWriter.forEach(Thread::start);\n    }\n\n\n    private static void doTaskWriter(int delayTime) throws InterruptedException {\n        var rand = new Random();\n        Thread.sleep(1000 * delayTime);\n\n        synchronized (Global.mutServiceQueue) {\n            Global.mutResource.acquire();\n        }\n\n        Global.resource = rand.nextInt(100);\n        System.out.println(\"Write \" + Global.resource);\n\n        Global.mutResource.release();\n    }\n\n\n    private static void doTaskReader(int delayTime) throws InterruptedException {\n        Thread.sleep(1000 * delayTime);\n\n        synchronized (Global.mutServiceQueue) {\n            // Increase reader count\n            synchronized (Global.mutReaderCount) {\n                Global.readerCount += 1;\n\n                if (1 == Global.readerCount)\n                    Global.mutResource.acquire();\n            }\n        }\n\n        // Do the 
reading\n        int data = Global.resource;\n        System.out.println(\"Read \" + data);\n\n        // Decrease reader count\n        synchronized (Global.mutReaderCount) {\n            Global.readerCount -= 1;\n\n            if (0 == Global.readerCount)\n                Global.mutResource.release();\n        }\n    }\n\n\n\n    private static class Global {\n        public static final Object mutServiceQueue = new Object();\n\n        public static final Semaphore mutResource = new Semaphore(1);\n        public static final Object mutReaderCount = new Object();\n\n        public static volatile int resource = 0;\n        public static int readerCount = 0;\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer04_dining_philosophers/AppA.java",
    "content": "/*\n * THE DINING PHILOSOPHERS PROBLEM\n */\n\npackage exer04_dining_philosophers;\n\nimport java.util.concurrent.Semaphore;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        final int NUM_PHILOSOPHERS = 5;\n\n\n        var chopstick = new Semaphore[NUM_PHILOSOPHERS];\n\n        for (int i = 0; i < NUM_PHILOSOPHERS; ++i)\n            chopstick[i] = new Semaphore(1);\n\n\n        var lstTh = IntStream.range(0, NUM_PHILOSOPHERS).mapToObj(i -> new Thread(() -> {\n\n            int n = NUM_PHILOSOPHERS;\n\n            try {\n                Thread.sleep(1000);\n\n                chopstick[i].acquire();\n                chopstick[(i + 1) % n].acquire();\n\n                System.out.println(\"Philosopher #\" + i + \" is eating the rice\");\n\n                chopstick[(i + 1) % n].release();\n                chopstick[i].release();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n\n        }));\n\n\n        lstTh.forEach(Thread::start);\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer04_dining_philosophers/AppB.java",
    "content": "/*\n * THE DINING PHILOSOPHERS PROBLEM\n */\n\npackage exer04_dining_philosophers;\n\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) {\n        final int NUM_PHILOSOPHERS = 5;\n\n\n        var chopstick = new Object[NUM_PHILOSOPHERS];\n\n        for (int i = 0; i < NUM_PHILOSOPHERS; ++i)\n            chopstick[i] = new Object();\n\n\n        var lstTh = IntStream.range(0, NUM_PHILOSOPHERS).mapToObj(i -> new Thread(() -> {\n\n            int n = NUM_PHILOSOPHERS;\n\n            try {\n                Thread.sleep(1000);\n\n                synchronized (chopstick[i]) {\n                    synchronized (chopstick[(i + 1) % n]) {\n                        System.out.println(\"Philosopher #\" + i + \" is eating the rice\");\n                    }\n                }\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n\n        }));\n\n\n        lstTh.forEach(Thread::start);\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer05_product_matrix/AppA.java",
    "content": "/*\n * MATRIX-VECTOR MULTIPLICATION\n */\n\npackage exer05_product_matrix;\n\nimport java.util.Arrays;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) throws InterruptedException {\n        double[][] A = {\n            { 1, 2, 3 },\n            { 4, 5, 6 },\n            { 7, 8, 9 }\n        };\n\n        double[] b = {\n            3,\n            -1,\n            0\n        };\n\n        var result = getProduct(A, b);\n\n        System.out.println(Arrays.toString(result));\n    }\n\n\n    private static double[] getProduct(double[][] mat, double[] vec) throws InterruptedException {\n        // Assume that size of mat and vec are both eligible\n        int sizeRowMat = mat.length;\n        // int sizeColMat = mat[0].length;\n        // int sizeVec = vec.length;\n\n        var result = new double[sizeRowMat];\n\n        var lstTh = IntStream.range(0, sizeRowMat).mapToObj(i -> new Thread(() -> {\n            var u = mat[i];\n            var v = vec;\n            result[i] = MyUtil.getScalarProduct(u, v);\n        })).toList();\n\n        for (Thread th : lstTh)\n            th.start();\n\n        for (Thread th : lstTh)\n            th.join();\n\n        return result;\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer05_product_matrix/AppB.java",
    "content": "/*\n * MATRIX-MATRIX MULTIPLICATION (DOT PRODUCT)\n */\n\npackage exer05_product_matrix;\n\nimport java.util.ArrayList;\nimport java.util.stream.IntStream;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        double[][] A = {\n            { 1, 3, 5 },\n            { 2, 4, 6 },\n        };\n\n        double[][] B = {\n            { 1, 0, 1, 0 },\n            { 0, 1, 0, 1 },\n            { 1, 0, 0, -2 }\n        };\n\n        double[][] result = getProduct(A, B);\n\n        MyUtil.printMatrix(result);\n    }\n\n\n    private static double[][] getProduct(double[][] matA, double[][] matB) throws InterruptedException {\n        // Assume that size of matA and matB are both eligible\n        int sizeRowA = matA.length;\n        int sizeColB = matB[0].length;\n\n\n        double[][] matBT = MyUtil.getTransposeMatrix(matB);\n        var result = new double[sizeRowA][sizeColB];\n        var lstTh = new ArrayList<Thread>();\n\n\n        IntStream.range(0, sizeRowA).forEach(i ->\n            IntStream.range(0, sizeColB).forEach(j -> {\n                var u = matA[i];\n                var v = matBT[j];\n\n                lstTh.add(new Thread(() -> {\n                    result[i][j] = MyUtil.getScalarProduct(u, v);\n                }));\n            })\n        );\n\n\n        for (var th : lstTh)\n            th.start();\n\n        for (var th : lstTh)\n            th.join();\n\n\n        return result;\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer05_product_matrix/MyUtil.java",
    "content": "package exer05_product_matrix;\n\nimport java.text.NumberFormat;\n\n\n\npublic class MyUtil {\n\n    private static final NumberFormat nf;\n\n\n    static {\n        nf = NumberFormat.getInstance();\n        nf.setMaximumFractionDigits(1);\n    }\n\n\n    public static double getScalarProduct(double[] u, double[] v) {\n        double sum = 0;\n        int sizeVector = u.length;\n\n        for (int i = sizeVector - 1; i >= 0; --i) {\n            sum += u[i] * v[i];\n        }\n\n        return sum;\n    }\n\n\n    public static double[][] getTransposeMatrix(double[][] input) {\n        double[][] output;\n\n        int numRow = input.length;\n        int numCol = input[0].length;\n\n        output = new double[numCol][numRow];\n\n        for (int i = 0; i < numRow; ++i)\n            for (int j = 0; j < numCol; ++j)\n                output[j][i] = input[i][j];\n\n        return output;\n    }\n\n\n    public static void printMatrix(double[][] mat) {\n        for (var row : mat) {\n            for (var value : row)\n                System.out.print(\"\\t\" + nf.format(value));\n\n            System.out.println();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer06_blocking_queue/AppA.java",
    "content": "/*\n * BLOCKING QUEUE IMPLEMENTATION\n * Version A: Synchronous queues\n */\n\npackage exer06_blocking_queue;\n\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) {\n        final var queue = new MySynchronousQueue<String>();\n        new Thread(() -> producer(queue)).start();\n        new Thread(() -> consumer(queue)).start();\n    }\n\n\n    private static void producer(MySynchronousQueue<String> queue) {\n        String[] arr = { \"lorem\", \"ipsum\", \"dolor\" };\n\n        try {\n            for (var data : arr) {\n                System.out.println(\"Producer: \" + data);\n                queue.put(data);\n                System.out.println(\"Producer: \" + data + \"\\t\\t\\t[done]\");\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n    private static void consumer(MySynchronousQueue<String> queue) {\n        String data;\n\n        try {\n            Thread.sleep(5000);\n\n            for (int i = 0; i < 3; ++i) {\n                data = queue.take();\n                System.out.println(\"\\tConsumer: \" + data);\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n\n    private static class MySynchronousQueue<T> {\n        private final Semaphore semPut = new Semaphore(1);\n        private final Semaphore semTake = new Semaphore(0);\n        private T element = null;\n\n\n        public void put(T value) throws InterruptedException {\n            semPut.acquire();\n            element = value;\n            semTake.release();\n        }\n\n\n        public T take() throws InterruptedException {\n            semTake.acquire();\n\n            T result = element;\n            element = null;\n\n            semPut.release();\n\n            return result;\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer06_blocking_queue/AppB01.java",
    "content": "/*\n * BLOCKING QUEUE IMPLEMENTATION\n * Version B01: General blocking queues\n *              Underlying mechanism: Semaphores\n */\n\npackage exer06_blocking_queue;\n\nimport java.util.LinkedList;\nimport java.util.Queue;\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppB01 {\n\n    public static void main(String[] args) {\n        final var queue = new MyBlockingQueue<String>(2); // capacity = 2\n        new Thread(() -> producer(queue)).start();\n        new Thread(() -> consumer(queue)).start();\n    }\n\n\n    private static void producer(MyBlockingQueue<String> queue) {\n        String[] arr = { \"nice\", \"to\", \"meet\", \"you\" };\n\n        try {\n            for (var data : arr) {\n                System.out.println(\"Producer: \" + data);\n                queue.put(data);\n                System.out.println(\"Producer: \" + data + \"\\t\\t\\t[done]\");\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n    private static void consumer(MyBlockingQueue<String> queue) {\n        String data;\n\n        try {\n            Thread.sleep(5000);\n\n            for (int i = 0; i < 4; ++i) {\n                data = queue.take();\n                System.out.println(\"\\tConsumer: \" + data);\n\n                if (0 == i)\n                    Thread.sleep(5000);\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n    //////////////////////////////////////////////\n\n\n    private static class MyBlockingQueue<T> {\n        private final Semaphore semRemain;\n        private final Semaphore semFill;\n\n        private int capacity = 0;\n        private final Queue<T> queue;\n\n\n        public MyBlockingQueue(int capacity) {\n            if (capacity <= 0)\n                throw new IllegalArgumentException(\"capacity must be a positive integer\");\n\n            this.capacity = 
capacity;\n\n            semRemain = new Semaphore(this.capacity);\n            semFill = new Semaphore(0);\n\n            queue = new LinkedList<T>();\n        }\n\n\n        public void put(T value) throws InterruptedException {\n            semRemain.acquire();\n\n            synchronized (queue) {\n                queue.add(value);\n            }\n\n            semFill.release();\n        }\n\n\n        public T take() throws InterruptedException {\n            T result;\n            semFill.acquire();\n\n            synchronized (queue) {\n                result = queue.remove();\n            }\n\n            semRemain.release();\n            return result;\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer06_blocking_queue/AppB02.java",
    "content": "/*\n * BLOCKING QUEUE IMPLEMENTATION\n * Version B02: General blocking queues\n *              Underlying mechanism: Condition variables\n */\n\npackage exer06_blocking_queue;\n\nimport java.util.LinkedList;\nimport java.util.Queue;\n\n\n\npublic class AppB02 {\n\n    public static void main(String[] args) {\n        final var queue = new MyBlockingQueue<String>(2); // capacity = 2\n        new Thread(() -> producer(queue)).start();\n        new Thread(() -> consumer(queue)).start();\n    }\n\n\n    private static void producer(MyBlockingQueue<String> queue) {\n        String[] arr = { \"nice\", \"to\", \"meet\", \"you\" };\n\n        try {\n            for (var data : arr) {\n                System.out.println(\"Producer: \" + data);\n                queue.put(data);\n                System.out.println(\"Producer: \" + data + \"\\t\\t\\t[done]\");\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n    private static void consumer(MyBlockingQueue<String> queue) {\n        String data;\n\n        try {\n            Thread.sleep(5000);\n\n            for (int i = 0; i < 4; ++i) {\n                data = queue.take();\n                System.out.println(\"\\tConsumer: \" + data);\n\n                if (0 == i)\n                    Thread.sleep(5000);\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n    //////////////////////////////////////////////\n\n\n    private static class MyBlockingQueue<T> {\n        private final Object condEmpty = new Object();\n        private final Object condFull = new Object();\n\n        private int capacity = 0;\n        private final Queue<T> queue;\n\n\n        public MyBlockingQueue(int capacity) {\n            if (capacity <= 0)\n                throw new IllegalArgumentException(\"capacity must be a positive integer\");\n\n            this.capacity = capacity;\n        
    queue = new LinkedList<T>();\n        }\n\n\n        // Read the queue size under the queue lock, so that modifications\n        // made by the other side are guaranteed to be visible here\n        // (an unsynchronized read of the LinkedList would be a data race)\n        private int getSize() {\n            synchronized (queue) {\n                return queue.size();\n            }\n        }\n\n\n        public void put(T value) throws InterruptedException {\n            synchronized (condFull) {\n                while (getSize() >= capacity) {\n                    // Queue is full, must wait for 'take'\n                    condFull.wait();\n                }\n\n                synchronized (queue) {\n                    queue.add(value);\n                }\n            }\n\n            synchronized (condEmpty) {\n                condEmpty.notify();\n            }\n        }\n\n\n        public T take() throws InterruptedException {\n            T result;\n\n            synchronized (condEmpty) {\n                while (0 == getSize()) {\n                    // Queue is empty, must wait for 'put'\n                    condEmpty.wait();\n                }\n\n                synchronized (queue) {\n                    result = queue.remove();\n                }\n            }\n\n            synchronized (condFull) {\n                condFull.notify();\n            }\n\n            return result;\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer07_data_server/AppA.java",
    "content": "/*\n * THE DATA SERVER PROBLEM\n * Version A: Solving the problem using a condition variable\n */\n\npackage exer07_data_server;\n\n\n\npublic class AppA {\n\n    public static void main(String[] args) throws InterruptedException {\n        var server = new DataServer();\n        server.processRequest();\n    }\n\n\n    private static class DataServer {\n        private class Counter {\n            public int value;\n            public Counter(int value) {\n                this.value = value;\n            }\n        }\n\n\n        public void processRequest() throws InterruptedException {\n            final var lstFileName = new String[] { \"foo.html\", \"bar.json\" };\n            final var counter = new Counter(lstFileName.length);\n\n            // The server checks auth user while reading files, concurrently\n            new Thread(() -> processFiles(lstFileName, counter)).start();\n            checkAuthUser();\n\n            // The server waits for completion of loading files\n            synchronized (counter) {\n                while (counter.value > 0) {\n                    counter.wait(10000); // timeout = 10 seconds\n                }\n            }\n\n            System.out.println(\"\\nNow user is authorized and files are loaded\");\n            System.out.println(\"Do other tasks...\\n\");\n        }\n\n\n        // This task consumes CPU (and network bandwidth, maybe)\n        private void checkAuthUser() {\n            System.out.println(\"[   Auth   ] Start\");\n            // Send request to authenticator, check permissions, encrypt, decrypt...\n            sleepNoEx(20);\n            System.out.println(\"[   Auth   ] Done\");\n        }\n\n\n        // This task consumes disk\n        private void processFiles(String[] lstFileName, Counter counter) {\n            for (var fileName : lstFileName) {\n                // Read file\n                System.out.println(\"[ ReadFile ] Start \" + fileName);\n                
sleepNoEx(10);\n                System.out.println(\"[ ReadFile ] Done  \" + fileName);\n\n                synchronized (counter) {\n                    --counter.value;\n                    counter.notify();\n                }\n\n                // Write log into disk\n                sleepNoEx(5);\n                System.out.println(\"[ WriteLog ]\");\n            }\n        }\n\n\n        private static void sleepNoEx(long seconds) {\n            try { Thread.sleep(1000 * seconds); }\n            catch (InterruptedException e) { }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer07_data_server/AppB.java",
    "content": "/*\n * THE DATA SERVER PROBLEM\n * Version B: Solving the problem using a semaphore\n */\n\npackage exer07_data_server;\n\nimport java.util.concurrent.Semaphore;\n\n\n\npublic class AppB {\n\n    public static void main(String[] args) throws InterruptedException {\n        var server = new DataServer();\n        server.processRequest();\n    }\n\n\n    private static class DataServer {\n        public void processRequest() throws InterruptedException {\n            final var lstFileName = new String[] { \"foo.html\", \"bar.json\" };\n            final var sem = new Semaphore(0);\n\n            // The server checks auth user while reading files, concurrently\n            new Thread(() -> processFiles(lstFileName, sem)).start();\n            checkAuthUser();\n\n            // The server waits for completion of loading files\n            for (int i = lstFileName.length; i > 0; --i) {\n                sem.acquire();\n            }\n\n            System.out.println(\"\\nNow user is authorized and files are loaded\");\n            System.out.println(\"Do other tasks...\\n\");\n        }\n\n\n        // This task consumes CPU (and network bandwidth, maybe)\n        private void checkAuthUser() {\n            System.out.println(\"[   Auth   ] Start\");\n            // Send request to authenticator, check permissions, encrypt, decrypt...\n            sleepNoEx(20);\n            System.out.println(\"[   Auth   ] Done\");\n        }\n\n\n        // This task consumes disk\n        private void processFiles(String[] lstFileName, Semaphore sem) {\n            for (var fileName : lstFileName) {\n                // Read file\n                System.out.println(\"[ ReadFile ] Start \" + fileName);\n                sleepNoEx(10);\n                System.out.println(\"[ ReadFile ] Done  \" + fileName);\n\n                sem.release();\n\n                // Write log into disk\n                sleepNoEx(5);\n                System.out.println(\"[ WriteLog ]\");\n     
       }\n        }\n\n\n        private static void sleepNoEx(long seconds) {\n            try { Thread.sleep(1000 * seconds); }\n            catch (InterruptedException e) { }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer07_data_server/AppC.java",
    "content": "/*\n * THE DATA SERVER PROBLEM\n * Version C: Solving the problem using a count-down latch\n */\n\npackage exer07_data_server;\n\nimport java.util.concurrent.CountDownLatch;\n\n\n\npublic class AppC {\n\n    public static void main(String[] args) throws InterruptedException {\n        var server = new DataServer();\n        server.processRequest();\n    }\n\n\n    private static class DataServer {\n        public void processRequest() throws InterruptedException {\n            final var lstFileName = new String[] { \"foo.html\", \"bar.json\" };\n            // Count down for 2 files\n            final var readFileLatch = new CountDownLatch(lstFileName.length);\n\n            // The server checks auth user while reading files, concurrently\n            new Thread(() -> processFiles(lstFileName, readFileLatch)).start();\n            checkAuthUser();\n\n            // The server waits for completion of loading files\n            readFileLatch.await();\n\n            System.out.println(\"\\nNow user is authorized and files are loaded\");\n            System.out.println(\"Do other tasks...\\n\");\n        }\n\n\n        // This task consumes CPU (and network bandwidth, maybe)\n        private void checkAuthUser() {\n            System.out.println(\"[   Auth   ] Start\");\n            // Send request to authenticator, check permissions, encrypt, decrypt...\n            sleepNoEx(20);\n            System.out.println(\"[   Auth   ] Done\");\n        }\n\n\n        // This task consumes disk\n        private void processFiles(String[] lstFileName, CountDownLatch latch) {\n            for (var fileName : lstFileName) {\n                // Read file\n                System.out.println(\"[ ReadFile ] Start \" + fileName);\n                sleepNoEx(10);\n                System.out.println(\"[ ReadFile ] Done  \" + fileName);\n\n                latch.countDown();\n\n                // Write log into disk\n                sleepNoEx(5);\n                
System.out.println(\"[ WriteLog ]\");\n            }\n        }\n\n\n        private static void sleepNoEx(long seconds) {\n            try { Thread.sleep(1000 * seconds); }\n            catch (InterruptedException e) { }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer07_data_server/AppD.java",
    "content": "/*\n * THE DATA SERVER PROBLEM\n * Version D: Solving the problem using a blocking queue\n */\n\npackage exer07_data_server;\n\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.LinkedBlockingQueue;\n\n\n\npublic class AppD {\n\n    public static void main(String[] args) throws InterruptedException {\n        var server = new DataServer();\n        server.processRequest();\n    }\n\n\n    private static class DataServer {\n        public void processRequest() throws InterruptedException {\n            final var lstFileName = new String[] { \"foo.html\", \"bar.json\" };\n            final var queue = new LinkedBlockingQueue<String>();\n\n            // The server checks auth user while reading files, concurrently\n            new Thread(() -> processFiles(lstFileName, queue)).start();\n            checkAuthUser();\n\n            // The server waits for completion of loading files\n            for (int i = lstFileName.length; i > 0; --i) {\n                queue.take();\n            }\n\n            System.out.println(\"\\nNow user is authorized and files are loaded\");\n            System.out.println(\"Do other tasks...\\n\");\n        }\n\n\n        // This task consumes CPU (and network bandwidth, maybe)\n        private void checkAuthUser() {\n            System.out.println(\"[   Auth   ] Start\");\n            // Send request to authenticator, check permissions, encrypt, decrypt...\n            sleepNoEx(20);\n            System.out.println(\"[   Auth   ] Done\");\n        }\n\n\n        // This task consumes disk\n        private void processFiles(String[] lstFileName, BlockingQueue<String> queue) {\n            for (var fileName : lstFileName) {\n                // Read file\n                System.out.println(\"[ ReadFile ] Start \" + fileName);\n                sleepNoEx(10);\n                System.out.println(\"[ ReadFile ] Done  \" + fileName);\n\n                try {\n                    queue.put(fileName); // You 
may put file data here\n                }\n                catch (InterruptedException e) {\n                }\n\n                // Write log into disk\n                sleepNoEx(5);\n                System.out.println(\"[ WriteLog ]\");\n            }\n        }\n\n\n        private static void sleepNoEx(long seconds) {\n            try { Thread.sleep(1000 * seconds); }\n            catch (InterruptedException e) { }\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer08_exec_service/App.java",
    "content": "/*\n * EXECUTOR SERVICE & THREAD POOL IMPLEMENTATION\n */\n\npackage exer08_exec_service;\n\nimport java.util.ArrayList;\n\n\n\npublic class App {\n\n    public static void main(String[] args) throws InterruptedException {\n        final int NUM_THREADS = 2;\n        final int NUM_TASKS = 5;\n\n\n        var execService = new MyExecServiceV0B(NUM_THREADS);\n\n\n        var lstTask = new ArrayList<MyTask>();\n\n        for (int i = 0; i < NUM_TASKS; ++i)\n            lstTask.add(new MyTask((char)('A' + i)));\n\n\n        lstTask.forEach(task -> execService.submit(task));\n        System.out.println(\"All tasks are submitted\");\n\n\n        execService.waitTaskDone();\n        System.out.println(\"All tasks are completed\");\n\n\n        execService.shutdown();\n    }\n\n\n\n    private static class MyTask implements Runnable {\n        public char id;\n\n        public MyTask(char id) {\n            this.id = id;\n        }\n\n        @Override\n        public void run() {\n            System.out.println(\"Task \" + id + \" is starting\");\n\n            try {\n                Thread.sleep(3000);\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n\n            System.out.println(\"Task \" + id + \" is completed\");\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer08_exec_service/MyExecServiceV0A.java",
    "content": "/*\n * MY EXECUTOR SERVICE\n *\n * Version 0A: The easiest executor service\n * - It uses a blocking queue as underlying mechanism.\n */\n\npackage exer08_exec_service;\n\nimport java.util.LinkedList;\nimport java.util.List;\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.LinkedBlockingQueue;\nimport java.util.stream.IntStream;\n\n\n\npublic final class MyExecServiceV0A {\n\n    private int numThreads = 0;\n    private List<Thread> lstTh = new LinkedList<>();\n    private final BlockingQueue<Runnable> taskPending = new LinkedBlockingQueue<>();\n\n\n\n    public MyExecServiceV0A(int numThreads) {\n        init(numThreads);\n    }\n\n\n\n    private void init(int inpNumThreads) {\n        numThreads = inpNumThreads;\n\n        lstTh = IntStream.range(0, numThreads)\n                .mapToObj(i -> new Thread(() -> threadWorkerFunc(this)))\n                .toList();\n\n        lstTh.forEach(Thread::start);\n    }\n\n\n\n    public void submit(Runnable task) {\n        taskPending.add(task);\n    }\n\n\n\n    public void waitTaskDone() {\n        // This ExecService is too simple,\n        // so there is no implementation for waitTaskDone()\n        try {\n            Thread.sleep(11000); // fake behaviour\n        } catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n\n    public void shutdown() {\n        // This ExecService is too simple,\n        // so there is no implementation for shutdown()\n        System.out.println(\"No implementation for shutdown().\");\n        System.out.println(\"You need to exit the app manually.\");\n    }\n\n\n\n    private static void threadWorkerFunc(MyExecServiceV0A thisPtr) {\n        Runnable task;\n\n        try {\n            for (;;) {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                task = thisPtr.taskPending.take();\n\n                // DO THE TASK\n                task.run();\n            }\n        }\n        catch 
(InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer08_exec_service/MyExecServiceV0B.java",
    "content": "/*\n * MY EXECUTOR SERVICE\n *\n * Version 0B: The easiest executor service\n * - It uses a blocking queue as underlying mechanism.\n * - It supports waitTaskDone() and shutdown().\n */\n\npackage exer08_exec_service;\n\nimport java.util.LinkedList;\nimport java.util.List;\nimport java.util.concurrent.BlockingQueue;\nimport java.util.concurrent.LinkedBlockingQueue;\nimport java.util.concurrent.atomic.AtomicInteger;\nimport java.util.stream.IntStream;\n\n\n\npublic final class MyExecServiceV0B {\n\n    private int numThreads = 0;\n    private List<Thread> lstTh = new LinkedList<>();\n\n    private final BlockingQueue<Runnable> taskPending = new LinkedBlockingQueue<>();\n    private final AtomicInteger counterTaskRunning = new AtomicInteger();\n\n    private volatile boolean forceThreadShutdown = false;\n\n    private static final Runnable emptyTask = () -> { };\n\n\n\n    public MyExecServiceV0B(int numThreads) {\n        init(numThreads);\n    }\n\n\n\n    private void init(int inpNumThreads) {\n        numThreads = inpNumThreads;\n        counterTaskRunning.set(0);\n        forceThreadShutdown = false;\n\n        lstTh = IntStream.range(0, numThreads)\n                .mapToObj(i -> new Thread(() -> threadWorkerFunc(this)))\n                .toList();\n\n        lstTh.forEach(Thread::start);\n    }\n\n\n\n    public void submit(Runnable task) {\n        taskPending.add(task);\n    }\n\n\n\n    public void waitTaskDone() {\n        // This ExecService is too simple,\n        // so there is no good implementation for waitTaskDone()\n        try {\n            while (!taskPending.isEmpty() || counterTaskRunning.get() > 0) {\n                Thread.sleep(1000);\n                // Thread.yield();\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n\n    public void shutdown() {\n        forceThreadShutdown = true;\n        taskPending.clear();\n\n        // Invoke blocked threads 
by adding \"empty\" tasks\n        for (int i = 0; i < numThreads; ++i) {\n            taskPending.add(emptyTask);\n        }\n\n        for (var th : lstTh) {\n            try {\n                th.join();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n\n        numThreads = 0;\n//      lstTh.clear();\n    }\n\n\n\n    private static void threadWorkerFunc(MyExecServiceV0B thisPtr) {\n        Runnable task;\n\n        try {\n            for (;;) {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                task = thisPtr.taskPending.take();\n\n                // If shutdown() was called, then exit the function\n                if (thisPtr.forceThreadShutdown) {\n                    break;\n                }\n\n                // DO THE TASK\n                thisPtr.counterTaskRunning.incrementAndGet();\n                task.run();\n                thisPtr.counterTaskRunning.decrementAndGet();\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer08_exec_service/MyExecServiceV1A.java",
    "content": "/*\n * MY EXECUTOR SERVICE\n *\n * Version 1A: Simple executor service\n * - Method \"waitTaskDone\" invokes thread sleeps in loop (which can cause performance problems).\n */\n\npackage exer08_exec_service;\n\nimport java.util.LinkedList;\nimport java.util.List;\nimport java.util.Queue;\nimport java.util.concurrent.atomic.AtomicInteger;\nimport java.util.stream.IntStream;\n\n\n\npublic final class MyExecServiceV1A {\n\n    private int numThreads = 0;\n    private List<Thread> lstTh = new LinkedList<>();\n\n    private final Queue<Runnable> taskPending = new LinkedList<>();\n    private final AtomicInteger counterTaskRunning = new AtomicInteger();\n\n    private volatile boolean forceThreadShutdown = false;\n\n\n\n    public MyExecServiceV1A(int numThreads) {\n        init(numThreads);\n    }\n\n\n\n    private void init(int inpNumThreads) {\n//      shutdown();\n\n        numThreads = inpNumThreads;\n        counterTaskRunning.set(0);\n        forceThreadShutdown = false;\n\n        lstTh = IntStream.range(0, numThreads)\n                .mapToObj(i -> new Thread(() -> threadWorkerFunc(this)))\n                .toList();\n\n        lstTh.forEach(Thread::start);\n    }\n\n\n\n    public void submit(Runnable task) {\n        synchronized (taskPending) {\n            taskPending.add(task);\n            taskPending.notify();\n        }\n    }\n\n\n\n    public void waitTaskDone() {\n        boolean done = false;\n\n        try {\n            for (;;) {\n                synchronized (taskPending) {\n                    if (taskPending.isEmpty() && 0 == counterTaskRunning.get()) {\n                        done = true;\n                    }\n                }\n\n                if (done)\n                    break;\n\n                Thread.sleep(1000);\n                // Thread.yield();\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n\n    public void shutdown() {\n        
synchronized (taskPending) {\n            forceThreadShutdown = true;\n            taskPending.clear();\n            taskPending.notifyAll();\n        }\n\n        for (var th : lstTh) {\n            try {\n                th.join();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n\n        numThreads = 0;\n//      lstTh.clear();\n    }\n\n\n\n    private static void threadWorkerFunc(MyExecServiceV1A thisPtr) {\n        var taskPending = thisPtr.taskPending;\n        var counterTaskRunning = thisPtr.counterTaskRunning;\n\n        Runnable task;\n\n        try {\n            for (;;) {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                synchronized (taskPending) {\n                    while (taskPending.isEmpty() && false == thisPtr.forceThreadShutdown) {\n                        taskPending.wait();\n                    }\n\n                    if (thisPtr.forceThreadShutdown) {\n                        break;\n                    }\n\n                    // GET THE TASK FROM THE PENDING QUEUE\n                    task = taskPending.remove();\n\n                    counterTaskRunning.getAndIncrement();\n                }\n\n                // DO THE TASK\n                task.run();\n                counterTaskRunning.getAndDecrement();\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer08_exec_service/MyExecServiceV1B.java",
    "content": "/*\n * MY EXECUTOR SERVICE\n *\n * Version 1B: Simple executor service\n * - Method \"waitTaskDone\" uses a condition variable to synchronize.\n */\n\npackage exer08_exec_service;\n\nimport java.util.LinkedList;\nimport java.util.List;\nimport java.util.Queue;\nimport java.util.stream.IntStream;\n\n\n\npublic final class MyExecServiceV1B {\n\n    private int numThreads = 0;\n    private List<Thread> lstTh = new LinkedList<>();\n\n    private final Queue<Runnable> taskPending = new LinkedList<>();\n\n    private int counterTaskRunning;\n    private final Object lkTaskRunning = new Object();\n\n    private volatile boolean forceThreadShutdown = false;\n\n\n\n    public MyExecServiceV1B(int numThreads) {\n        init(numThreads);\n    }\n\n\n\n    private void init(int inpNumThreads) {\n//      shutdown();\n\n        numThreads = inpNumThreads;\n        counterTaskRunning = 0;\n        forceThreadShutdown = false;\n\n        lstTh = IntStream.range(0, numThreads)\n                .mapToObj(i -> new Thread(() -> threadWorkerFunc(this)))\n                .toList();\n\n        lstTh.forEach(Thread::start);\n    }\n\n\n\n    public void submit(Runnable task) {\n        synchronized (taskPending) {\n            taskPending.add(task);\n            taskPending.notify();\n        }\n    }\n\n\n\n    public void waitTaskDone() {\n        for (;;) {\n            synchronized (taskPending) {\n                if (taskPending.isEmpty()) {\n                    synchronized (lkTaskRunning) {\n                        try {\n                            while (counterTaskRunning > 0) {\n                                lkTaskRunning.wait();\n                            }\n\n                            // no pending task and no running task\n                            break;\n                        }\n                        catch (InterruptedException e) {\n                            e.printStackTrace();\n                        }\n                    }\n             
   }\n            }\n        }\n    }\n\n\n\n    public void shutdown() {\n        synchronized (taskPending) {\n            forceThreadShutdown = true;\n            taskPending.clear();\n            taskPending.notifyAll();\n        }\n\n        for (var th : lstTh) {\n            try {\n                th.join();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n\n        numThreads = 0;\n//      lstTh.clear();\n    }\n\n\n\n    private static void threadWorkerFunc(MyExecServiceV1B thisPtr) {\n        var taskPending = thisPtr.taskPending;\n        var lkTaskRunning = thisPtr.lkTaskRunning;\n\n        Runnable task;\n\n        try {\n            for (;;) {\n                // WAIT FOR AN AVAILABLE PENDING TASK\n                synchronized (taskPending) {\n                    while (taskPending.isEmpty() && false == thisPtr.forceThreadShutdown) {\n                        taskPending.wait();\n                    }\n\n                    if (thisPtr.forceThreadShutdown) {\n                        break;\n                    }\n\n                    // GET THE TASK FROM THE PENDING QUEUE\n                    task = taskPending.remove();\n\n                    ++thisPtr.counterTaskRunning;\n                }\n\n                // DO THE TASK\n                task.run();\n\n                synchronized (lkTaskRunning) {\n                    --thisPtr.counterTaskRunning;\n\n                    if (0 == thisPtr.counterTaskRunning) {\n                        lkTaskRunning.notify();\n                    }\n                }\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer08_exec_service/MyExecServiceV2A.java",
    "content": "/*\n * MY EXECUTOR SERVICE\n *\n * Version 2A: The executor service storing running tasks\n * - Method \"waitTaskDone\" uses a semaphore to synchronize.\n */\n\npackage exer08_exec_service;\n\nimport java.util.LinkedList;\nimport java.util.List;\nimport java.util.Queue;\nimport java.util.concurrent.Semaphore;\nimport java.util.stream.IntStream;\n\n\n\npublic final class MyExecServiceV2A {\n\n    private int numThreads = 0;\n    private List<Thread> lstTh = new LinkedList<>();\n\n    private final Queue<Runnable> taskPending = new LinkedList<>();\n    private final Queue<Runnable> taskRunning = new LinkedList<>();\n\n    private final Semaphore counterTaskRunning = new Semaphore(0);\n\n    private volatile boolean forceThreadShutdown = false;\n\n\n\n    public MyExecServiceV2A(int numThreads) {\n        init(numThreads);\n    }\n\n\n\n    private void init(int inpNumThreads) {\n//      shutdown();\n\n        numThreads = inpNumThreads;\n        forceThreadShutdown = false;\n\n        lstTh = IntStream.range(0, numThreads)\n                .mapToObj(i -> new Thread(() -> threadWorkerFunc(this)))\n                .toList();\n\n        lstTh.forEach(Thread::start);\n    }\n\n\n\n    public void submit(Runnable task) {\n        synchronized (taskPending) {\n            taskPending.add(task);\n            taskPending.notify();\n        }\n    }\n\n\n\n    public void waitTaskDone() {\n        for (;;) {\n            try {\n                counterTaskRunning.acquire();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n\n            synchronized (taskPending) {\n                synchronized (taskRunning) {\n                    if (taskPending.isEmpty() && taskRunning.isEmpty()\n                        /* && 0 == counterTaskRunning.availablePermits() */\n                    )\n                        break;\n                }\n            }\n        }\n    }\n\n\n\n    public void shutdown() 
{\n        synchronized (taskPending) {\n            forceThreadShutdown = true;\n            taskPending.clear();\n            taskPending.notifyAll();\n        }\n\n        for (var th : lstTh) {\n            try {\n                th.join();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n\n        numThreads = 0;\n//      lstTh.clear();\n        taskRunning.clear();\n\n        counterTaskRunning.release(counterTaskRunning.availablePermits());\n    }\n\n\n\n    private static void threadWorkerFunc(MyExecServiceV2A thisPtr) {\n        var taskPending = thisPtr.taskPending;\n        var taskRunning = thisPtr.taskRunning;\n        var counterTaskRunning = thisPtr.counterTaskRunning;\n\n        Runnable task;\n\n        try {\n            for (;;) {\n                synchronized (taskPending) {\n                    // WAIT FOR AN AVAILABLE PENDING TASK\n                    while (taskPending.isEmpty() && false == thisPtr.forceThreadShutdown) {\n                        taskPending.wait();\n                    }\n\n                    if (thisPtr.forceThreadShutdown) {\n                        break;\n                    }\n\n                    // GET THE TASK FROM THE PENDING QUEUE\n                    task = taskPending.remove();\n\n                    // PUSH IT TO THE RUNNING QUEUE\n                    synchronized (taskRunning) {\n                        taskRunning.add(task);\n                    }\n                }\n\n                // DO THE TASK\n                task.run();\n\n                // REMOVE IT FROM THE RUNNING QUEUE\n                synchronized (taskRunning) {\n                    taskRunning.remove(task);\n                }\n\n                counterTaskRunning.release();\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n}\n"
  },
  {
    "path": "java/src/exer08_exec_service/MyExecServiceV2B.java",
    "content": "/*\n * MY EXECUTOR SERVICE\n *\n * Version 2B: The executor service storing running tasks\n * - Method \"waitTaskDone\" uses a condition variable to synchronize.\n */\n\npackage exer08_exec_service;\n\nimport java.util.LinkedList;\nimport java.util.List;\nimport java.util.Queue;\nimport java.util.stream.IntStream;\n\n\n\npublic final class MyExecServiceV2B {\n\n    private int numThreads = 0;\n    private List<Thread> lstTh = new LinkedList<>();\n\n    private final Queue<Runnable> taskPending = new LinkedList<>();\n    private final Queue<Runnable> taskRunning = new LinkedList<>();\n\n    private volatile boolean forceThreadShutdown = false;\n\n\n\n    public MyExecServiceV2B(int numThreads) {\n        init(numThreads);\n    }\n\n\n\n    private void init(int inpNumThreads) {\n//      shutdown();\n\n        numThreads = inpNumThreads;\n        forceThreadShutdown = false;\n\n        lstTh = IntStream.range(0, numThreads)\n                .mapToObj(i -> new Thread(() -> threadWorkerFunc(this)))\n                .toList();\n\n        lstTh.forEach(Thread::start);\n    }\n\n\n\n    public void submit(Runnable task) {\n        synchronized (taskPending) {\n            taskPending.add(task);\n            taskPending.notify();\n        }\n    }\n\n\n\n//    public void waitTaskDoneBad() {\n//        try {\n//            for (;;) {\n//                synchronized (taskRunning) {\n//                    while (!taskRunning.isEmpty())\n//                        taskRunning.wait();\n//\n//                    synchronized (taskPending) {\n//                        if (taskPending.isEmpty())\n//                            break;\n//                    }\n//                }\n//            }\n//        }\n//        catch (InterruptedException e) {\n//            e.printStackTrace();\n//        }\n//    }\n\n\n\n    public void waitTaskDone() {\n        try {\n            for (;;) {\n                synchronized (taskPending) {\n                    if 
(taskPending.isEmpty()) {\n                        synchronized (taskRunning) {\n                            while (!taskRunning.isEmpty())\n                                taskRunning.wait();\n\n                            // no pending task and no running task\n                            break;\n                        }\n                    }\n                }\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n\n\n    public void shutdown() {\n        synchronized (taskPending) {\n            forceThreadShutdown = true;\n            taskPending.clear();\n            taskPending.notifyAll();\n        }\n\n        for (var th : lstTh) {\n            try {\n                th.join();\n            }\n            catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        }\n\n        numThreads = 0;\n//      lstTh.clear();\n        taskRunning.clear();\n    }\n\n\n\n    private static void threadWorkerFunc(MyExecServiceV2B thisPtr) {\n        var taskPending = thisPtr.taskPending;\n        var taskRunning = thisPtr.taskRunning;\n        Runnable task;\n\n        try {\n            for (;;) {\n                synchronized (taskPending) {\n                    // WAIT FOR AN AVAILABLE PENDING TASK\n                    while (taskPending.isEmpty() && false == thisPtr.forceThreadShutdown) {\n                        taskPending.wait();\n                    }\n\n                    if (thisPtr.forceThreadShutdown) {\n                        break;\n                    }\n\n                    // GET THE TASK FROM THE PENDING QUEUE\n                    task = taskPending.remove();\n\n                    // PUSH IT TO THE RUNNING QUEUE\n                    synchronized (taskRunning) {\n                        taskRunning.add(task);\n                    }\n                }\n\n                // DO THE TASK\n                task.run();\n\n                // REMOVE IT 
FROM THE RUNNING QUEUE\n                synchronized (taskRunning) {\n                    taskRunning.remove(task);\n                    taskRunning.notify();\n                }\n            }\n        }\n        catch (InterruptedException e) {\n            e.printStackTrace();\n        }\n    }\n\n}\n"
  },
  {
    "path": "js-nodejs/.gitignore",
    "content": "/package-lock.json\n/node_modules/"
  },
  {
    "path": "js-nodejs/demo00.js",
    "content": "/*\nINTRODUCTION TO MULTITHREADING\nYou should try running this app several times and see results.\n*/\n\nimport * as mylib from './mylib.js';\nimport { Worker, isMainThread } from 'worker_threads';\n\n\nconst workerFunc = async () => {\n  for (let i = 0; i < 300; ++i) {\n    await mylib.sleep(1);\n    process.stdout.write('B');\n  }\n};\n\n\nconst mainFunc = async () => {\n  const worker = new Worker(new URL(import.meta.url));\n\n  for (let i = 0; i < 300; ++i) {\n    await mylib.sleep(1);\n    process.stdout.write('A');\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo01a-hello.js",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion A: Using the same source code file for both main thread and worker thread\n*/\n\nimport { Worker, isMainThread } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\nconst workerFunc = () => {\n  console.log('Hello from example thread');\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\nconst mainFunc = () => {\n  const worker = new Worker(new URL(import.meta.url));\n  console.log('Hello from main thread');\n};\n\n\nif (isMainThread) {\n  // If this is main thread then execute mainFunc()\n  mainFunc();\n} else {\n  // If this is worker thread then execute workerFunc()\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo01b-hello-worker.js",
    "content": "console.log('Hello from example thread');\n"
  },
  {
    "path": "js-nodejs/demo01b-hello.js",
    "content": "/*\nHELLO WORLD VERSION MULTITHREADING\nVersion B: Using the individual source code files:\n- Main thread: demo01b-hello.js\n- Worker thread: demo01b-hello-worker.js\n*/\n\nimport * as path from 'path';\nimport { Worker, isMainThread } from 'worker_threads';\n\nconst workerFileName = `./${path.parse(import.meta.url).name}-worker.js`;\nconst worker = new Worker(workerFileName);\n\nconsole.log('Hello from main thread');\n"
  },
  {
    "path": "js-nodejs/demo02-join.js",
    "content": "/*\nTHREAD JOINS\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doHeavyTask = async () => {\n  // Do a heavy task, which takes a little time\n  for (let i = 0; i < 2000000000; ++i);\n  console.log('Done!');\n};\n\nconst workerFunc = async () => {\n  try {\n    await doHeavyTask();\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const [worker, prom] = mylib.createThread(new URL(import.meta.url));\n    console.log('Begin creating worker thread...');\n    await prom; // join worker thread (i.e. waiting for the thread completion)\n    console.log('Good bye!');\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo03a-pass-arg.js",
    "content": "/*\nPASSING ARGUMENTS\n*/\n\nimport { isMainThread, Worker, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doTask = input => {\n  console.log('Input value is', input);\n};\n\nconst workerFunc = () => {\n  try {\n    doTask(workerData);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = () => {\n  try {\n\n    const worker = new Worker(\n      new URL(import.meta.url),\n      {\n        workerData: 12345\n      }\n    );\n\n    const worker2 = new Worker(\n      new URL(import.meta.url),\n      {\n        workerData: {\n          name: 'foo',\n          types: [9, 0, 55]\n        }\n      }\n    );\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  mainFunc();\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo03b-pass-arg.js",
    "content": "/*\nPASSING ARGUMENTS\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doTask = input => {\n  console.log('Input value is', input);\n};\n\nconst workerFunc = () => {\n  try {\n    doTask(workerData);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const [worker, prom] = mylib.createThread(\n      new URL(import.meta.url), 12345\n    );\n\n    const [worker2, prom2] = mylib.createThread(\n      new URL(import.meta.url),\n      {\n          name: 'foo',\n          types: [9, 0, 55]\n      }\n    );\n\n    console.log('Created 2 worker threads');\n    await Promise.all([prom, prom2]);\n    console.log('Good bye');\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo04-sleep.js",
    "content": "/*\nSLEEP\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doTask = async (name, timems) => {\n  console.log(name, 'is sleeping');\n  await mylib.sleep(timems);\n  console.log(name, 'wakes up');\n};\n\nconst workerFunc = async () => {\n  try {\n    await doTask(workerData.name, workerData.timems);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const [workerFoo, promFoo] = mylib.createThread(\n      new URL(import.meta.url),\n      { name: 'foo', timems: 3000 }\n    );\n\n    const [workerBar, promBar] = mylib.createThread(\n      new URL(import.meta.url),\n      { name: 'bar', timems: 2900 }\n    );\n\n    console.log('Created 2 worker threads');\n    await Promise.all([promFoo, promBar]);\n    console.log('Good bye');\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo05-id.js",
    "content": "/*\nGETTING THREAD'S ID\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doTask = async () => {\n  await mylib.sleep(2000);\n  console.log('Done');\n};\n\nconst workerFunc = async () => {\n  try {\n    await doTask();\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const [worker, prom] = mylib.createThread(new URL(import.meta.url));\n    const [worker2, prom2] = mylib.createThread(new URL(import.meta.url));\n    console.log(worker.threadId, worker2.threadId);\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo06a-list-threads.js",
    "content": "/*\nLIST OF MULTIPLE THREADS\nVERSION A: Using standard 'Worker' class\n*/\n\nimport { Worker, isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doTask = index => {\n  console.log(index);\n};\n\nconst workerFunc = () => {\n  try {\n    doTask(workerData);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = () => {\n  try {\n\n    const NUM_THREADS = 5;\n    const lstTh = new Array(NUM_THREADS);\n\n    for (let i = 0; i < NUM_THREADS; ++i) {\n      lstTh[i] = new Worker(new URL(import.meta.url), { workerData: i });\n    }\n\n    console.log('Good bye');\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  mainFunc();\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo06b-list-threads.js",
    "content": "/*\nLIST OF MULTIPLE THREADS\nVERSION B: Using mylib.createThread\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doTask = index => {\n  console.log(index);\n};\n\nconst workerFunc = () => {\n  try {\n    doTask(workerData);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const NUM_THREADS = 5;\n    const lstTh = new Array(NUM_THREADS);\n\n    for (let i = 0; i < NUM_THREADS; ++i) {\n      lstTh[i] = mylib.createThread(new URL(import.meta.url), i);\n    }\n\n    await Promise.all(lstTh.map(([_,pr]) => pr));\n    console.log('Good bye');\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo07-terminate.js",
    "content": "/*\nFORCING A THREAD TO TERMINATE (i.e. killing the thread)\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, parentPort } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nlet workerThreadIsRunning = true;\n\nconst doTask = async () => {\n  while (workerThreadIsRunning) {\n    console.log('Running...');\n    await mylib.sleep(2000);\n  }\n};\n\nconst onRecvMessage = msg => {\n  if (msg === 'term') {\n    workerThreadIsRunning = false;\n  }\n};\n\nconst workerFunc = async () => {\n  try {\n    parentPort.once('message', onRecvMessage);\n    await doTask();\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const [worker, prom] = mylib.createThread(new URL(import.meta.url));\n    await mylib.sleep(6000);\n    worker.postMessage('term');\n    await prom;\n    console.log('Worker thread is terminated');\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo08a-return-value.js",
    "content": "/*\nGETTING RETURNED VALUES FROM THREADS\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData, parentPort } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doubleValue = x => {\n  return x * 2;\n};\n\nconst squareValue = x => {\n  return x * x;\n};\n\nconst workerFunc = () => {\n  try {\n    const [command, inp] = workerData;\n    let result;\n    switch (command) {\n      case 'double':\n        result = doubleValue(inp);\n        console.log({command, inp, result});\n        break;\n      case 'square':\n        result = squareValue(inp);\n        console.log({command, inp, result});\n        break;\n    }\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = () => {\n  try {\n\n    mylib.createThread(new URL(import.meta.url), ['double', 5]);\n    mylib.createThread(new URL(import.meta.url), ['double', 80]);\n    mylib.createThread(new URL(import.meta.url), ['square', 7]);\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  mainFunc();\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo08b-return-value.js",
    "content": "/*\nGETTING RETURNED VALUES FROM THREADS\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, parentPort } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doubleValue = x => {\n  return x * 2;\n};\n\nconst squareValue = x => {\n  return x * x;\n};\n\nconst onRecvMessage = message => {\n  const [command, inp] = message;\n  let result;\n  switch (command) {\n    case 'double':\n      result = doubleValue(inp);\n      parentPort.postMessage({command, inp, result});\n      break;\n    case 'square':\n      result = squareValue(inp);\n      parentPort.postMessage({command, inp, result});\n      break;\n    case 'exit':\n      parentPort.unref();\n      // parentPort.close();\n  }\n};\n\nconst workerFunc = () => {\n  try {\n    parentPort.on('message', onRecvMessage);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const [worker, prom] = mylib.createThread(new URL(import.meta.url));\n    worker.on('message', console.log);\n    worker.postMessage(['double', 5]);\n    worker.postMessage(['double', 80]);\n    worker.postMessage(['square', 7]);\n    worker.postMessage(['exit']);\n    await prom;\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo11a-exec-service.js",
    "content": "/*\nEXECUTOR SERVICES AND THREAD POOLS\n*/\n\nimport { isMainThread } from 'worker_threads';\nimport { Piscina } from 'piscina';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nexport const printMsg = () => {\n  console.log('Hello Nodejs');\n};\n\nexport const addTwoNumbers = ({a, b}) => {\n  return a + b;\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n    const execService = new Piscina({filename: new URL(import.meta.url).href});\n\n    await execService.run(undefined, { name: 'printMsg'});\n\n    const result = await execService.run({a: 1000, b: -400}, { name: 'addTwoNumbers'});\n    console.log('Result is', result);\n\n    await execService.destroy();\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo11b-exec-service.js",
    "content": "/*\nEXECUTOR SERVICES AND THREAD POOLS\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread } from 'worker_threads';\nimport { Piscina } from 'piscina';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nexport const doTask = async ({index}) => {\n  const id = String.fromCharCode(65 + index);\n  console.log(`Task ${id} is starting`);\n  await mylib.sleep(3000);\n  console.log(`Task ${id} is completed`);\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n    const NUM_THREADS = 2;\n    const NUM_TASKS = 5;\n    const lstPromRes = [];\n\n    const execService = new Piscina({\n      filename: new URL(import.meta.url).href,\n      minThreads: NUM_THREADS,\n      maxThreads: NUM_THREADS\n    });\n\n    for (let i = 0; i < NUM_TASKS; ++i) {\n      lstPromRes.push(execService.run({ index: i }, { name: 'doTask'}));\n    }\n    console.log('All tasks are submitted');\n\n    await Promise.all(lstPromRes);\n    console.log('All tasks are completed');\n\n    await execService.destroy();\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo12-race-condition.js",
    "content": "/*\nRACE CONDITIONS\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doTask = async index => {\n  await mylib.sleep(1000);\n  console.log(index);\n};\n\nconst workerFunc = async () => {\n  try {\n    await doTask(workerData);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = () => {\n  try {\n\n    const NUM_THREADS = 4;\n    const lstTh = new Array(NUM_THREADS);\n\n    for (let i = 0; i < NUM_THREADS; ++i) {\n      lstTh[i] = mylib.createThread(new URL(import.meta.url), i);\n    }\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo12b01-data-race-single.js",
    "content": "/*\nDATA RACES\nVersion 01: Without multithreading\n*/\n\nconst getResult = N => {\n  const a = new Int8Array(N + 1).fill(0);\n  for (let i = 1; i <= N; ++i) {\n    if (i % 2 === 0 || i % 3 === 0)\n      a[i] = 1;\n  }\n  const result = a.reduce((x, y) => x + y, 0);\n  return result;\n};\n\n\nconst mainFunc = () => {\n  const N = 1024;\n  const result = getResult(N);\n  console.log('Number of integers that are divisible by 2 or 3 is:', result);\n};\n\n\nmainFunc();\n"
  },
  {
    "path": "js-nodejs/demo12b02-data-race-multi.js",
    "content": "/*\nDATA RACES\nVersion 02: Multithreading\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst markDiv2 = (a, N) => {\n  for (let i = 2; i <= N; i += 2) {\n    a[i] = 1;\n  }\n};\n\nconst markDiv3 = (a, N) => {\n  for (let i = 3; i <= N; i += 3) {\n    a[i] = 1;\n  }\n};\n\nconst workerFunc = () => {\n  try {\n    const [action, a, N] = workerData;\n    switch (action) {\n      case 'markDiv2':\n        markDiv2(a, N);\n        break;\n      case 'markDiv3':\n        markDiv3(a, N);\n        break;\n    }\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const N = 1024;\n    const a = new Int8Array(new SharedArrayBuffer((N + 1) * Int8Array.BYTES_PER_ELEMENT));\n\n    const [wkr2, prom2] = mylib.createThread(new URL(import.meta.url), [\n      'markDiv2', a, N\n    ]);\n    const [wkr3, prom3] = mylib.createThread(new URL(import.meta.url), [\n      'markDiv3', a, N\n    ]);\n\n    await Promise.all([prom2, prom3]);\n\n    const result = a.reduce((x, y) => x + y, 0);\n    console.log('Number of integers that are divisible by 2 or 3 is:', result);\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo12c-race-cond-data-race.js",
    "content": "/*\nRACE CONDITIONS AND DATA RACES\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst increaseCounter = async bufArr => {\n  await mylib.sleep(1000);\n  for (let i = 0; i < 1000; ++i) {\n    // increase counter by one\n    ++bufArr[0];\n  }\n};\n\nconst workerFunc = async () => {\n  try {\n    await increaseCounter(workerData);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const bufArr = new Int32Array(new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT));\n    // counter is the first element of bufArr\n\n    const NUM_THREADS = 16;\n    const lstTh = new Array(NUM_THREADS);\n\n    for (let i = 0; i < NUM_THREADS; ++i) {\n      lstTh[i] = mylib.createThread(new URL(import.meta.url), bufArr);\n    }\n\n    await Promise.all(lstTh.map(([_,pr]) => pr));\n\n    console.log('counter =', bufArr[0]);\n    // We are NOT sure that counter = 16000\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo13-mutex.js",
    "content": "/*\nMUTEXES\n*/\n\nimport * as mylib from './mylib.js';\nimport { Mutex } from './mylib_mutex.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst increaseCounter = async (bufArr, mutex) => {\n  await mylib.sleep(1000);\n\n  mutex.acquire();\n\n  try {\n    for (let i = 0; i < 1000; ++i) {\n      // increase counter by one\n      ++bufArr[0];\n    }\n  } finally {\n\n    mutex.release();\n\n  }\n\n  // or...\n  // mutex.runExclusive(() => {\n  //   for (let i = 0; i < 1000; ++i) {\n  //     // increase counter by one\n  //     ++bufArr[0];\n  //   }\n  // });\n};\n\nconst workerFunc = async () => {\n  try {\n    const [bufArr, mutexsab] = workerData;\n    const mutex = new Mutex(mutexsab);\n    await increaseCounter(bufArr, mutex);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const mutex = new Mutex();\n    const bufArr = new Int32Array(new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT));\n    // counter is the first element of bufArr\n\n    const NUM_THREADS = 16;\n    const lstTh = new Array(NUM_THREADS);\n\n    for (let i = 0; i < NUM_THREADS; ++i) {\n      lstTh[i] = mylib.createThread(new URL(import.meta.url), [\n        bufArr, mutex.getSAB()\n      ]);\n    }\n\n    await Promise.all(lstTh.map(([_,pr]) => pr));\n\n    console.log('counter =', bufArr[0]);\n    // We are sure that counter = 16000\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo13ex-mutex-problem.js",
    "content": "/*\nMUTEXES\nVersion EX\n\nIn this demo, we will face a race condition.\n                                                          -------------------------------------> Time\n                                                          |\nSend request '/register?id=Teo&name=Tran Teo'             |  ooooo\nSend request '/register?id=Teo&name=Le Teo'               |     xxxxx\nCheck database and create user(id='Teo', name='Tran Teo') |       oooooooooo\nCheck database and create user(id='Teo', name='Le Teo')   |          xxxxxxxxxx\n\nThe final result is that the database contains user(id='Teo', name='Le Teo').\nBecause the server takes time to process each request, user 'Tran Teo' is overwritten by user 'Le Teo'.\n\n\nWe do not want this result. What we want is:\n- Request '/register?id=Teo&name=Tran Teo' comes first, so user(id='Teo', name='Tran Teo') is created first.\n- Request '/register?id=Teo&name=Le Teo'   comes later, and this results in a failure because id 'Teo' already exists.\n\n                                                          -------------------------------------> Time\n                                                          |\nSend request '/register?id=Teo&name=Tran Teo'             |  ooooo\nSend request '/register?id=Teo&name=Le Teo'               |     (wait)......xxxxx\nCheck database and create user(id='Teo', name='Tran Teo') |       oooooooooo\nCheck database and create user(id='Teo', name='Le Teo')   |                      xxxxxxxxxx\n\n*/\n\nimport * as mylib from './mylib.js';\nimport express from 'express';\n\n\nconst createServer = () => {\n  const users = new Map();\n  const app = express();\n\n  const createUser = async (userid, username) => {\n    if (!userid || !username) {\n      return false;\n    }\n    if (users.has(userid)) {\n      return false;\n    }\n    // Assume that creating user takes a little time\n    await mylib.sleep(1000);\n    users.set(userid, username);\n    return true;\n  };\n\n  app.get('/register', async (req, res) => {\n    const ret = await createUser(req.query.id, req.query.name);\n    res.status(200).send(ret).end();\n  });\n\n  app.get('/', (req, res) => {\n    const resStr = JSON.stringify(\n      [...users].map(([k,v]) => ({ id: k, name: v }))\n    , null, 2);\n    res.status(200).setHeader('Content-Type', 'application/json');\n    res.send(resStr).end();\n  });\n\n  const server = app.listen(8081);\n  return server;\n};\n\n\nconst runClient = async () => {\n  const registerUrl1 = new URL('/register?id=teo&name=Tran Teo', 'http://localhost:8081');\n  const registerUrl2 = new URL('/register?id=teo&name=Le Teo', 'http://localhost:8081');\n  const infoUrl = new URL('/', 'http://localhost:8081');\n\n  const prom1 = mylib.makeHttpGet(registerUrl1);\n  await mylib.sleep(200);\n\n  const prom2 = mylib.makeHttpGet(registerUrl2);\n  await mylib.sleep(200);\n\n  await Promise.all([prom1, prom2]);\n\n  const result = await mylib.makeHttpGet(infoUrl);\n  console.log(result);\n};\n\n\nconst server = createServer();\nawait runClient();\nserver.close();\n"
  },
  {
    "path": "js-nodejs/demo13ex-mutex-solve.js",
    "content": "/*\nMUTEXES\nVersion EX\n\nUsing a mutex to solve the problem.\n*/\n\nimport * as mylib from './mylib.js';\nimport express from 'express';\nimport { Mutex } from 'async-mutex';\n\n\nconst createServer = () => {\n  const users = new Map();\n  const app = express();\n  const mutex = new Mutex();\n\n  const createUser = async (userid, username) => {\n    if (!userid || !username) {\n      return false;\n    }\n    if (users.has(userid)) {\n      return false;\n    }\n    // Assume that creating user takes a little time\n    await mylib.sleep(1000);\n    users.set(userid, username);\n    return true;\n  };\n\n  app.get('/register', async (req, res) => {\n    const ret = await mutex.runExclusive(() => createUser(req.query.id, req.query.name));\n    // const ret = await createUser(req.query.id, req.query.name);\n    res.status(200).send(ret).end();\n  });\n\n  app.get('/', (req, res) => {\n    const resStr = JSON.stringify(\n      [...users].map(([k,v]) => ({ id: k, name: v }))\n    , null, 2);\n    res.status(200).setHeader('Content-Type', 'application/json');\n    res.send(resStr).end();\n  });\n\n  const server = app.listen(8081);\n  return server;\n};\n\n\nconst runClient = async () => {\n  const registerUrl1 = new URL('/register?id=teo&name=Tran Teo', 'http://localhost:8081');\n  const registerUrl2 = new URL('/register?id=teo&name=Le Teo', 'http://localhost:8081');\n  const infoUrl = new URL('/', 'http://localhost:8081');\n\n  const prom1 = mylib.makeHttpGet(registerUrl1);\n  await mylib.sleep(200);\n\n  const prom2 = mylib.makeHttpGet(registerUrl2);\n  await mylib.sleep(200);\n\n  await Promise.all([prom1, prom2]);\n\n  const result = await mylib.makeHttpGet(infoUrl);\n  console.log(result);\n};\n\n\nconst server = createServer();\nawait runClient();\nserver.close();\n"
  },
  {
    "path": "js-nodejs/demo15a-deadlock.js",
    "content": "/*\nDEADLOCK\nVersion A\n*/\n\nimport * as mylib from './mylib.js';\nimport { Mutex } from './mylib_mutex.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doTask = async (name, mutex) => {\n  mutex.acquire();\n\n  console.log(name, 'acquired resource');\n\n  // mutex.release(); // Forget this statement ==> deadlock\n};\n\nconst workerFunc = async () => {\n  try {\n    const [name, mutexsab] = workerData;\n    const mutex = new Mutex(mutexsab);\n    doTask(name, mutex);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const mutex = new Mutex();\n\n    const [wkFoo, promFoo] = mylib.createThread(new URL(import.meta.url), [\n      'foo', mutex.getSAB()\n    ]);\n    const [wkBar, promBar] = mylib.createThread(new URL(import.meta.url), [\n      'bar', mutex.getSAB()\n    ]);\n\n    await Promise.all([promFoo, promBar]);\n\n    console.log('You will never see this statement due to deadlock!');\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo15b-deadlock.js",
    "content": "/*\nDEADLOCK\nVersion B\n*/\n\nimport * as mylib from './mylib.js';\nimport { Mutex } from './mylib_mutex.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst foo = async (mutResourceA, mutResourceB) => {\n  mutResourceA.acquire();\n\n  console.log('foo acquired resource A');\n  await mylib.sleep(2000);\n\n  mutResourceB.acquire();\n  console.log('foo acquired resource B');\n  mutResourceB.release();\n\n  mutResourceA.release();\n};\n\nconst bar = async (mutResourceA, mutResourceB) => {\n  mutResourceB.acquire();\n\n  console.log('bar acquired resource B');\n  await mylib.sleep(2000);\n\n  mutResourceA.acquire();\n  console.log('bar acquired resource A');\n  mutResourceA.release();\n\n  mutResourceB.release();\n};\n\nconst workerFunc = async () => {\n  try {\n    const [action, mutexsabA, mutexsabB] = workerData;\n    const mutResourceA = new Mutex(mutexsabA);\n    const mutResourceB = new Mutex(mutexsabB);\n\n    switch (action) {\n      case 'foo':\n        await foo(mutResourceA, mutResourceB);\n        break;\n      case 'bar':\n        await bar(mutResourceA, mutResourceB);\n        break;\n    }\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    const mutResourceA = new Mutex();\n    const mutResourceB = new Mutex();\n\n    const [wkFoo, promFoo] = mylib.createThread(new URL(import.meta.url), [\n      'foo', mutResourceA.getSAB(), mutResourceB.getSAB()\n    ]);\n    const [wkBar, promBar] = mylib.createThread(new URL(import.meta.url), [\n      'bar', mutResourceA.getSAB(), mutResourceB.getSAB()\n    ]);\n\n    await Promise.all([promFoo, promBar]);\n\n    console.log('You will never see this 
statement due to deadlock!');\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/demo15ex-deadlock.js",
    "content": "/*\nDEADLOCK\nVersion EX\n*/\n\nimport * as mylib from './mylib.js';\nimport express from 'express';\nimport { Mutex } from 'async-mutex';\n\n\nconst createServer = () => {\n  const app = express();\n  const mutex = new Mutex();\n\n  const doTask = async name => {\n    const release = await mutex.acquire();\n    try {\n      console.log(`Server: ${name} acquired resource`);\n    } finally {\n      // release(); // Forget this statement ==> deadlock\n    }\n  };\n\n  app.get('/', async (req, res) => {\n    const name = req.query.name;\n    await doTask(name);\n    res.status(200).setHeader('Content-Type', 'text/html');\n    res.send(name).end();\n  });\n\n  const server = app.listen(8081);\n  return server;\n};\n\n\nconst runClient = async () => {\n  const respFoo = await mylib.makeHttpGet(new URL('http://localhost:8081/?name=foo'));\n  console.log(`Client: response: ${respFoo}`);\n\n  const respBar = await mylib.makeHttpGet(new URL('http://localhost:8081/?name=bar'));\n\n  console.log('You will never see this statement due to deadlock!');\n\n  console.log(`Client: response: ${respBar}`);\n};\n\n\nconst server = createServer();\nawait runClient();\nserver.close();\n"
  },
  {
    "path": "js-nodejs/demo25-atomic.js",
    "content": "/*\nATOMIC ACCESS\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst updateArr = async (arr) => {\n  await mylib.sleep(1000);\n\n  for (let i = 0; i < 1000; ++i) {\n    // increase arr[0] by one atomically\n    Atomics.add(arr, 0, 1);\n\n    // decrease arr[4] by five atomically\n    Atomics.sub(arr, 4, 5);\n  }\n};\n\nconst workerFunc = async () => {\n  try {\n    const sharedArr = workerData;\n    const arr = new Int32Array(sharedArr);\n    await updateArr(arr);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst mainFunc = async () => {\n  try {\n\n    // arr is a shared Int32Array of length 5\n    // Its elements are arr[0], arr[1], ... arr[4]\n    const ARR_MAX_SIZE = 5;\n    const sharedArr = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * ARR_MAX_SIZE);\n    const arr = new Int32Array(sharedArr);\n\n    const NUM_THREADS = 16;\n    const lstTh = new Array(NUM_THREADS);\n\n    for (let i = 0; i < NUM_THREADS; ++i) {\n      lstTh[i] = mylib.createThread(new URL(import.meta.url),\n        sharedArr\n      );\n    }\n\n    await Promise.all(lstTh.map(([_,pr]) => pr));\n\n    console.log('The result:');\n    console.log(arr);\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  await workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/exer01a-max-div.js",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\nimport * as mylib from './mylib.js';\n\n\nconst mainFunc = () => {\n  try {\n\n    const RANGE_START = 1;\n    const RANGE_END = 100000;\n\n    let resValue = 0;\n    let resNumDiv = 0;\n\n    const tpStart = process.hrtime();\n\n    for (let i = RANGE_START; i <= RANGE_END; ++i) {\n      let numDiv = 0;\n\n      // use integer division so that j only takes integer values\n      for (let j = Math.floor(i / 2); j > 0; --j)\n          if (i % j == 0)\n              ++numDiv;\n\n      if (resNumDiv < numDiv) {\n          resNumDiv = numDiv;\n          resValue = i;\n      }\n    }\n\n    const timeElapsed = mylib.hrtimeToNumber(process.hrtime(tpStart));\n\n    console.log('The integer with the largest number of divisors is', resValue);\n    console.log('The largest number of divisors is', resNumDiv);\n    console.log('Time elapsed =', timeElapsed);\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nmainFunc();\n"
  },
  {
    "path": "js-nodejs/exer01c-max-div.js",
    "content": "/*\nMAXIMUM NUMBER OF DIVISORS\n*/\n\nimport * as mylib from './mylib.js';\nimport { Mutex } from './mylib_mutex.js';\nimport { isMainThread, workerData, parentPort } from 'worker_threads';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst doTask = arg => {\n  let resValue = 0;\n  let resNumDiv = 0;\n\n  for (let i = arg.iStart; i <= arg.iEnd; ++i) {\n    let numDiv = 0;\n\n    // use integer division so that j only takes integer values\n    for (let j = Math.floor(i / 2); j > 0; --j)\n        if (i % j == 0)\n            ++numDiv;\n\n    if (resNumDiv < numDiv) {\n        resNumDiv = numDiv;\n        resValue = i;\n    }\n  }\n\n  parentPort.postMessage({ resValue, resNumDiv });\n};\n\nconst workerFunc = () => {\n  try {\n    const arg = workerData;\n    doTask(arg);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst prepareArg = (rangeStart, rangeEnd, numThreads) => {\n  let rangeA, rangeB, rangeBlock;\n\n  rangeBlock = (rangeEnd - rangeStart + 1) / numThreads;\n  rangeA = rangeStart;\n\n  let lstWorkerArg = [];\n\n  for (let i = 0; i < numThreads; ++i, rangeA += rangeBlock) {\n      rangeB = rangeA + rangeBlock - 1;\n\n      if (i == numThreads - 1)\n          rangeB = rangeEnd;\n\n      lstWorkerArg.push({ iStart: rangeA, iEnd: rangeB });\n  }\n\n  return lstWorkerArg;\n};\n\n\nconst updateFinalResult = (mutex, finalRes, value, numDiv) => {\n  // Should this be protected by a mutex?\n  mutex.runExclusive(() => {\n    if (finalRes.numDiv < numDiv) {\n      finalRes.numDiv = numDiv;\n      finalRes.value = value;\n    }\n  });\n};\n\n\nconst mainFunc = async () => {\n  try {\n\n    const RANGE_START = 1;\n    const RANGE_END = 100000;\n    const NUM_THREADS = 8;\n\n    const lstWorkerArg = prepareArg(RANGE_START, RANGE_END, NUM_THREADS);\n    const finalRes = { value: 0, numDiv: 0 };\n\n    const lstTh = new Array(NUM_THREADS);\n    const mutex = new Mutex();\n\n    const tpStart = process.hrtime();\n\n    for (let i = 0; i < NUM_THREADS; ++i) {\n      const arg = lstWorkerArg[i];\n      const [worker, prom] = mylib.createThread(new URL(import.meta.url), arg);\n      worker.on('message', res => updateFinalResult(mutex, finalRes, ...Object.values(res)));\n      lstTh[i] = [worker, prom];\n    }\n\n    await Promise.all(lstTh.map(([_,pr]) => pr));\n    const timeElapsed = mylib.hrtimeToNumber(process.hrtime(tpStart));\n\n    console.log('The integer with the largest number of divisors is', finalRes.value);\n    console.log('The largest number of divisors is', finalRes.numDiv);\n    console.log('Time elapsed =', timeElapsed);\n\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\nif (isMainThread) {\n  await mainFunc();\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/exerex-userhash-problem.js",
    "content": "/*\nUSERNAME HASH PROBLEM\n\nThis is an app using the expressJS framework to serve \"The Username Hash Service\". The app provides 2 APIs:\n\nGET /?name=<UserName>\n  Returns the <UserName> and the hashed value from <UserName>\n\nGET /history\n  Returns the request history. Each <UserName> is on one line.\n\nThe Hash Task is considered CPU-consuming. You will have to wait a long time for the result (possibly up to 24 seconds).\n\nNow, let's get started:\n- Run the app. Open the web browser and visit http://localhost:8081/?name=JohnnyTeo\n- While waiting for the result, try visiting http://localhost:8081/history\n  Notice that /history is blocked until the hash calculation completes.\n\nThe reason is that Javascript, by default, runs in a single thread.\nThis thread is busy calculating the hash result for the request /?name=JohnnyTeo,\nso it cannot serve the next request /history\n\nYour task is to make /history \"non-blocking\", i.e. the app can serve /history while doing the hashing job.\n\nP/S: The problem idea is inspired by a great article at:\nhttps://www.digitalocean.com/community/tutorials/how-to-use-multithreading-in-node-js\n*/\n\nimport * as mylib from './mylib.js';\nimport express from 'express';\nimport { getHash, splitStrInToChunks } from './exerex-userhash-util.js';\n\n\nconst PORT = 8081;\nconst app = express();\n\n\nconst getSuperHash = plainText => {\n  const numChunks = 8;\n  const lstChunks = splitStrInToChunks(numChunks, plainText);\n  const lstHashes = lstChunks.map(chunk => getHash(2**21, chunk));\n  const finalHash = getHash(1, lstHashes.join(''));\n  return finalHash;\n};\n\n\nconst userNameHistory = [];\n\n\napp.get('/history', async (req, res) => {\n  const html = userNameHistory.join('<br/>') || '&lt;Empty history&gt;';\n  res.status(200).send(html).end();\n});\n\n\napp.get('/', (req, res) => {\n  const userName = req.query.name;\n  if (!userName) {\n    res.status(400).end();\n    return;\n  }\n  userNameHistory.push(userName);\n\n  const tpStart = process.hrtime();\n\n  // GET USERNAME HASH\n  const userNameHash = getSuperHash(userName);\n\n  const timeElapsed = mylib.hrtimeToNumber(process.hrtime(tpStart));\n  console.log(`userName = ${userName}; Time elapsed = ${timeElapsed}`);\n\n  const html = userName + '<br/>' + userNameHash;\n  res.status(200).setHeader('Content-Type', 'text/html').send(html).end();\n});\n\n\nconsole.log('Server is listening on port', PORT);\nconst server = app.listen(PORT);\n"
  },
  {
    "path": "js-nodejs/exerex-userhash-solve01.js",
    "content": "/*\nUSERNAME HASH PROBLEM\n\nTo make /history \"non-blocking\", run the app in 2 threads:\n- The main thread: Serves I/O HTTP requests\n- The worker thread: Does the Hash Task\n\nWhen users visit http://localhost:8081/?name=JohnnyTeo,\nthe app delegates the Hash Task to the worker thread so it can serve the next requests.\n\nThe worker thread receives the Hash Task, does it, and returns the result to the main thread.\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData, parentPort } from 'worker_threads';\nimport express from 'express';\nimport { getHash, splitStrInToChunks } from './exerex-userhash-util.js';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nconst getSuperHashByWorker = plainText => {\n  const numChunks = 8;\n  const lstChunks = splitStrInToChunks(numChunks, plainText);\n  const lstHashes = lstChunks.map(chunk => getHash(2**21, chunk));\n  const finalHash = getHash(1, lstHashes.join(''));\n  return finalHash;\n};\n\n\nconst workerFunc = () => {\n  try {\n    const plainText = workerData;\n    const hash = getSuperHashByWorker(plainText);\n    parentPort.postMessage(hash);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst getSuperHashByMain = async plainText => {\n  let finalHash = '';\n  const [worker, prom] = mylib.createThread(new URL(import.meta.url), plainText);\n  worker.on('message', hash => finalHash = hash);\n  await prom;\n  return finalHash;\n};\n\n\nconst mainFunc = () => {\n  const PORT = 8081;\n  const app = express();\n  const userNameHistory = [];\n\n  app.get('/history', async (req, res) => {\n    const html = userNameHistory.join('<br/>') || '&lt;Empty history&gt;';\n    res.status(200).send(html).end();\n  });\n\n  app.get('/', async (req, res) => {\n    const userName = req.query.name;\n    if (!userName) {\n      res.status(400).end();\n      return;\n    }\n    userNameHistory.push(userName);\n\n    const tpStart = process.hrtime();\n\n    // GET USERNAME HASH\n    const userNameHash = await getSuperHashByMain(userName);\n\n    const timeElapsed = mylib.hrtimeToNumber(process.hrtime(tpStart));\n    console.log(`userName = ${userName}; Time elapsed = ${timeElapsed}`);\n\n    const html = userName + '<br/>' + userNameHash;\n    res.status(200).setHeader('Content-Type', 'text/html').send(html).end();\n  });\n\n  console.log('Server is listening on port', PORT);\n  const server = app.listen(PORT);\n};\n\n\nif (isMainThread) {\n  try {\n    mainFunc();\n  } catch (error) {\n    console.error(error);\n  }\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/exerex-userhash-solve02-faster.js",
    "content": "/*\nUSERNAME HASH PROBLEM\n\nAlthough the problem is solved in the previous solution, the calculation is still too slow.\n\nAnalyze the Hash Task and you will realize that\nit can be divided into multiple independent sub-tasks.\nEach sub-task corresponds to hashing one chunk.\n\nHence, multithreading makes sense: each thread runs one sub-task in parallel.\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread, workerData, parentPort } from 'worker_threads';\nimport express from 'express';\nimport { getHash, splitStrInToChunks } from './exerex-userhash-util.js';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nexport const workerFunc = () => {\n  try {\n    const [idx, chunk] = workerData;\n    const chunkHash = getHash(2**21, chunk);\n    parentPort.postMessage([idx, chunkHash]);\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst getSuperHashByMain = async (plainText) => {\n  const numChunks = 8; // It is also the number of threads\n\n  const lstChunks = splitStrInToChunks(numChunks, plainText);\n  const lstHashes = new Array(numChunks);\n\n  const lstWorkerProm = [];\n\n  for (let i = 0; i < numChunks; ++i) {\n    const chunk = lstChunks[i];\n    const [worker, prom] = mylib.createThread(new URL(import.meta.url), [i, chunk]);\n    worker.on('message', ([idx, chunkHash]) => lstHashes[idx] = chunkHash);\n    lstWorkerProm.push(prom);\n  }\n\n  await Promise.all(lstWorkerProm);\n  const finalHash = getHash(1, lstHashes.join(''));\n  return finalHash;\n};\n\n\nconst mainFunc = () => {\n  const PORT = 8081;\n  const app = express();\n  const userNameHistory = [];\n\n  app.get('/history', async (req, res) => {\n    const html = userNameHistory.join('<br/>') || '&lt;Empty history&gt;';\n    res.status(200).send(html).end();\n  });\n\n  app.get('/', async (req, res) => {\n    const userName = req.query.name;\n    if (!userName) {\n      res.status(400).end();\n      return;\n    }\n    userNameHistory.push(userName);\n\n    const tpStart = process.hrtime();\n\n    // GET USERNAME HASH\n    const userNameHash = await getSuperHashByMain(userName);\n\n    const timeElapsed = mylib.hrtimeToNumber(process.hrtime(tpStart));\n    console.log(`userName = ${userName}; Time elapsed = ${timeElapsed}`);\n\n    const html = userName + '<br/>' + userNameHash;\n    res.status(200).setHeader('Content-Type', 'text/html').send(html).end();\n  });\n\n  console.log('Server is listening on port', PORT);\n  const server = app.listen(PORT);\n};\n\n\nif (isMainThread) {\n  try {\n    mainFunc();\n  } catch (error) {\n    console.error(error);\n  }\n} else {\n  workerFunc();\n}\n"
  },
  {
    "path": "js-nodejs/exerex-userhash-solve03-faster.js",
    "content": "/*\nUSERNAME HASH PROBLEM\n\nEach time a user sends a hash request, new threads are created.\nCreating new threads for every incoming request is a concern.\n\nBy using an Executor Service/Thread Pool, threads can be reused for subsequent tasks/requests\n(i.e. no more thread creation).\n*/\n\nimport * as mylib from './mylib.js';\nimport { isMainThread } from 'worker_threads';\nimport { Piscina } from 'piscina';\nimport express from 'express';\nimport { getHash, splitStrInToChunks } from './exerex-userhash-util.js';\n\n\n//---------------------------------------------\n//            WORKER THREAD SECTION\n//---------------------------------------------\n\n\nexport const workerFunc = ({idx, chunk}) => {\n  try {\n    const chunkHash = getHash(2**21, chunk);\n    return chunkHash;\n  } catch (error) {\n    console.error(error);\n  }\n};\n\n\n//---------------------------------------------\n//             MAIN THREAD SECTION\n//---------------------------------------------\n\n\nconst getSuperHashByMain = async (execService, plainText) => {\n  const numChunks = 8; // It is also the number of threads\n\n  const lstChunks = splitStrInToChunks(numChunks, plainText);\n  const lstWorkerProm = [];\n\n  for (let i = 0; i < numChunks; ++i) {\n    const chunk = lstChunks[i];\n    lstWorkerProm.push(execService.run({ idx: i, chunk: chunk }, { name: 'workerFunc' }));\n  }\n\n  const lstHashes = await Promise.all(lstWorkerProm);\n  const finalHash = getHash(1, lstHashes.join(''));\n  return finalHash;\n};\n\n\nconst mainFunc = () => {\n  const PORT = 8081;\n  const app = express();\n  const userNameHistory = [];\n\n  const execService = new Piscina({\n    filename: new URL(import.meta.url).href,\n    minThreads: 8,\n    maxThreads: 8\n  });\n\n  app.get('/history', async (req, res) => {\n    const html = userNameHistory.join('<br/>') || '&lt;Empty history&gt;';\n    res.status(200).send(html).end();\n  });\n\n  app.get('/', async (req, res) => {\n    const userName = req.query.name;\n    if (!userName) {\n      res.status(400).end();\n      return;\n    }\n    userNameHistory.push(userName);\n\n    const tpStart = process.hrtime();\n\n    // GET USERNAME HASH\n    const userNameHash = await getSuperHashByMain(execService, userName);\n\n    const timeElapsed = mylib.hrtimeToNumber(process.hrtime(tpStart));\n    console.log(`userName = ${userName}; Time elapsed = ${timeElapsed}`);\n\n    const html = userName + '<br/>' + userNameHash;\n    res.status(200).setHeader('Content-Type', 'text/html').send(html).end();\n  });\n\n  console.log('Server is listening on port', PORT);\n  const server = app.listen(PORT);\n};\n\n\nif (isMainThread) {\n  try {\n    mainFunc();\n  } catch (error) {\n    console.error(error);\n  }\n}\n"
  },
  {
    "path": "js-nodejs/exerex-userhash-util.js",
    "content": "const { createHash } = await import('node:crypto');\n\n\nexport const getHash = (numLoops, str) => {\n  for (let i = numLoops; i > 0; --i) {\n    str = createHash('sha256').update(str).digest('hex');\n  }\n  return str;\n};\n\n\nexport const splitStrInToChunks = (numChunks, str) => {\n  const lstChunks = [];\n  const quotient = Math.floor(str.length / numChunks);\n  const remainder = str.length % numChunks;\n  for (let i = 0; i < numChunks; ++i) {\n    const chunk = str.substring(\n      i * quotient + Math.min(i, remainder),\n      (i+1) * quotient + Math.min(i+1, remainder)\n    );\n    lstChunks.push(chunk);\n  }\n  return lstChunks;\n};\n"
  },
  {
    "path": "js-nodejs/mylib.js",
    "content": "import * as http from 'http';\nimport { Worker } from 'worker_threads';\n\n\nexport const sleep = timems => new Promise(resolve => setTimeout(resolve, timems));\n\n\nexport const hrtimeToNumber = hrtime => (hrtime[0] + (hrtime[1] / 1e9)).toFixed(6);\n\n\nexport const createThread = (filename, input_args) => {\n  const worker = new Worker(filename, { workerData: input_args });\n  const prom = new Promise((resolve, reject) => {\n    // worker.on('message', msg => resolve(msg));\n    worker.on('error', error => reject([worker, error]));\n    worker.on('exit', code => code !== 0 ? reject([worker, code]) : resolve(worker));\n  });\n  return [worker, prom];\n};\n\n\nexport const makeHttpGet = requestUrl => new Promise((resolve, reject) => {\n  const req = http.get(requestUrl, res => {\n    let body = '';\n    res.on('data', chunk => body += chunk);\n    res.on('end', () => {\n      resolve(body);\n    });\n    res.on('error', error => reject(error));\n  });\n  req.end();\n});\n"
  },
  {
    "path": "js-nodejs/mylib_mutex.js",
    "content": "/******************************************************\n*\n* File name:    mylib_mutex.js\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The mutex implementation in Javascript ES2019\n*\n* Warning:      This mutex shall be used for truly-multithreading conditions (e.g. NodeJS Worker)\n*\n******************************************************/\n\n\nexport class Mutex {\n\n  #sabuff;\n  #sa;\n\n\n  constructor(sharedArrayBuffer) {\n    if (!sharedArrayBuffer) {\n      this.#sabuff = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);\n    } else {\n      Mutex.#requireValidStateSAB(sharedArrayBuffer);\n      this.#sabuff = sharedArrayBuffer;\n    }\n    this.#sa = new Int32Array(this.#sabuff );\n  }\n\n\n  getSAB() {\n    return this.#sabuff;\n  }\n\n\n  static #requireValidStateSAB(sharedArrayBuffer) {\n    if (! sharedArrayBuffer ) {\n      throw 'sharedArrayBuffer is null or undefined';\n    }\n    if (! (sharedArrayBuffer instanceof SharedArrayBuffer)) {\n      throw 'Illegal type of sharedArrayBuffer';\n    }\n    if (sharedArrayBuffer.byteLength < Int32Array.BYTES_PER_ELEMENT) {\n      throw `Illegal sharedArrayBuffer: byteLength=${sharedArrayBuffer.byteLength} < Int32Array.BYTES_PER_ELEMENT`;\n    }\n  }\n\n\n  static #requireValidStateSA(sharedArray) {\n    if (! sharedArray ) {\n      throw 'sharedArray is null or undefined';\n    }\n    if (! 
(sharedArray instanceof Int32Array) ) {\n      throw 'Illegal type of sharedArray';\n    }\n    if (sharedArray.byteLength < Int32Array.BYTES_PER_ELEMENT) {\n      throw `Illegal sharedArray: byteLength=${sharedArray.byteLength} < Int32Array.BYTES_PER_ELEMENT`;\n    }\n    const sab = sharedArray.buffer;\n    this.#requireValidStateSAB(sab);\n  }\n\n\n  #requireValidState() {\n    Mutex.#requireValidStateSA(this.#sa);\n  }\n\n\n  acquire() {\n    this.#requireValidState();\n\n    while (true) {\n      if (Atomics.load(this.#sa, 0) > 0) {\n        while (Atomics.wait(this.#sa, 0, 0) !== 'ok');\n      }\n\n      let oldCounter = Atomics.add(this.#sa, 0, 1);\n      if (oldCounter >= 1) {\n        Atomics.sub(this.#sa, 0, 1);\n        continue;\n      }\n\n      // Atomics.notify(this.#sa, 0, 1);\n      return;\n    }\n  }\n\n\n  release() {\n    this.#requireValidState();\n\n    if (Atomics.load(this.#sa, 0) > 0) {\n      Atomics.sub(this.#sa, 0, 1);\n    }\n    Atomics.notify(this.#sa, 0, 1);\n  }\n\n\n  runExclusive(callbackFunc) {\n    this.#requireValidState();\n\n    if (typeof callbackFunc !== 'function') {\n      throw 'Illegal type of callbackFunc';\n    }\n    let res = undefined;\n\n    this.acquire();\n    try {\n      res = callbackFunc();\n    } finally {\n      this.release();\n    }\n\n    return res;\n  }\n\n}\n\n\n//---------------------------------------------\n//               TESTING AREA\n//---------------------------------------------\n\n\n// import { Worker, isMainThread, workerData } from 'worker_threads';\n\n\n// const sleep = timems => new Promise(resolve => setTimeout(resolve, timems));\n\n\n// const createThread = (filename, input_args) => {\n//   const worker = new Worker(filename, { workerData: input_args });\n//   const prom = new Promise((resolve, reject) => {\n//     // worker.on('message', msg => resolve(msg));\n//     worker.on('error', error => reject([worker, error]));\n//     worker.on('exit', code => code !== 0 ? 
reject([worker, code]) : resolve(worker));\n//   });\n//   return [worker, prom];\n// };\n\n\n// const workerFunc = async () => {\n//   try {\n//     const [sab, msg] = workerData;\n//     const mutex = new Mutex(sab);\n//     await sleep(1000);\n//     mutex.acquire();\n//     // await sleep(20);\n//     console.log(msg);\n//     mutex.release();\n\n//   } catch (error) {\n//     console.error(error);\n//   }\n// };\n\n\n// const mainFunc = async () => {\n//   try {\n//     const mutex = new Mutex();\n//     const lstProm = [];\n\n//     console.log('Begin creating worker thread...');\n//     for (let i = 0; i < 200; ++i) {\n//       const [worker, prom] = createThread(\n//         new URL(import.meta.url),\n//         [\n//           mutex.getSAB(),\n//           'foo' + i\n//         ]);\n//       lstProm.push(prom);\n//     }\n\n//     await Promise.all(lstProm);\n//     console.log('Exit app');\n\n//   } catch (error) {\n//     console.error(error);\n//   }\n// };\n\n\n// if (isMainThread) {\n//   await mainFunc();\n// } else {\n//   await workerFunc();\n// }\n"
  },
  {
    "path": "js-nodejs/package.json",
    "content": "{\n  \"type\": \"module\",\n  \"dependencies\": {\n    \"@types/node\": \"^18.7.2\",\n    \"async-mutex\": \"^0.3.2\",\n    \"express\": \"^4.18.1\",\n    \"piscina\": \"^3.2.0\"\n  }\n}\n"
  },
  {
    "path": "notes-articles.md",
    "content": "# ARTICLES AND LEARNING NOTES\n\n## DESCRIPTION\n\nSome articles and learning notes taken from my research.\n\n&nbsp;\n\n---\n\n&nbsp;\n\n## CONTENT\n\n### VOLATILE VS ATOMIC\n\nThere are two important concepts in multithreading environment:\n\n- Atomicity.\n- Visibility.\n\nThe volatile keyword eradicates visibility problems, but it does not deal with atomicity.\n\nConsider this snippet in a concurrent environment:\n\n```code\nboolean isStopped = false;\n\nwhile (!isStopped) {\n    // Do some kind of work\n}\n```\n\nThe idea here is that some thread could change the value of ```isStopped``` from false to true in order to indicate to the subsequent loop that it is time to stop looping.\n\nIntuitively, there is no problem. Logically if another thread makes ```isStopped``` equal to true, then the loop must terminate. The reality is that the loop will likely never terminate even if another thread makes ```isStopped``` equal to true.\n\nThe reason for this is not intuitive, but consider that modern processors have multiple cores and that each core has multiple registers and multiple levels of cache memory that are **not accessible to other processors**. In other words, values that are cached in one processor's local memory are **not visisble** to threads executing on a different processor. Herein lies one of the central problems with concurrency: visibility.\n\nThe Java Memory Model makes no guarantees whatsoever about when changes that are made to a variable in one thread may become visible to other threads. In order to guarantee that updates are visisble as soon as they are made, you must synchronize.\n\nThe ```volatile``` keyword is a weak form of synchronization. While it does nothing for mutual exclusion or atomicity, it does provide a guarantee that changes made to a variable in one thread will become visible to other threads as soon as it is made. 
Because individual reads and writes to variables that are not 8 bytes wide (i.e. not long or double) are atomic in Java, declaring variables ```volatile``` provides an easy mechanism for providing visibility in situations where there are no other atomicity or mutual exclusion requirements.\n\n&nbsp;\n\nTake the following example:\n\n```code\nThread A:\n    i = i + 1\n\nThread B:\n    i = 1000\n```\n\nNo matter how you define ```i```, a different thread reading the value just when the above line is executed might get ```i```, or ```i + 1```, because the operation is not atomic. Explanation:\n\n- Assume ```i = 0```\n- Thread A reads ```i```, calculates ```i + 1```, which is ```1```\n- Thread B sets ```i = 1000``` and returns\n- Thread A now sets ```i``` to the result of the operation, which is ```i = 1```\n\nAtomics like AtomicInteger ensure that such operations happen atomically. So the above issue cannot happen: ```i``` would be either ```1000``` or ```1001``` once both threads are finished.\n\n&nbsp;\n\nNotably, an operation that requires more than one read/write, such as ```i++```, which is equivalent to ```i = i + 1``` and does one read and one write, **is not atomic**, since another thread may write to ```i``` between the read and the write.\n\nThe ```Atomic``` classes, like ```AtomicInteger``` and ```AtomicReference```, provide a wider variety of operations atomically, specifically including increment for ```AtomicInteger```.\n\n&nbsp;\n\nVolatile only ensures that each single access is atomic, while Atomics ensure that the whole operation is atomic.\n\n&nbsp;\n\nReferences:\n\n- <https://stackoverflow.com/questions/19744508/volatile-vs-atomic/19744523>\n\n&nbsp; &nbsp;\n"
  },
  {
    "path": "notes-demos-exercises.md",
    "content": "# NOTES OF DEMOS AND EXERCISES\n\n## DESCRIPTION\n\nThis file contains descriptions/notes of demo and exercise in the repo.\n\n&nbsp;\n\n## TABLE OF CONTENTS\n\nI am sorry that generated table of contents contains too many uppercase stuff...\n\n- [NOTES OF DEMOS AND EXERCISES](#notes-of-demos-and-exercises)\n  - [DESCRIPTION](#description)\n  - [TABLE OF CONTENTS](#table-of-contents)\n  - [DEMOSTRATIONS](#demostrations)\n    - [DEMO 00 - INTRODUCTION TO MULTITHREADING](#demo-00---introduction-to-multithreading)\n    - [DEMO 01 - HELLO](#demo-01---hello)\n    - [DEMO 02 - THREAD JOINS](#demo-02---thread-joins)\n    - [DEMO 03 - PASSING ARGUMENTS](#demo-03---passing-arguments)\n    - [DEMO 04 - SLEEP](#demo-04---sleep)\n    - [DEMO 05 - GETTING THREAD'S ID](#demo-05---getting-threads-id)\n    - [DEMO 06 - LIST OF MULTIPLE THREADS](#demo-06---list-of-multiple-threads)\n    - [DEMO 07 - FORCING A THREAD TO TERMINATE](#demo-07---forcing-a-thread-to-terminate)\n    - [DEMO 08 - GETTING RETURNED VALUES FROM THREADS](#demo-08---getting-returned-values-from-threads)\n    - [DEMO 09 - THREAD DETACHING](#demo-09---thread-detaching)\n    - [DEMO 10 - THREAD YIELDING](#demo-10---thread-yielding)\n    - [DEMO 11 - EXECUTOR SERVICES AND THREAD POOLS](#demo-11---executor-services-and-thread-pools)\n    - [DEMO 12A - RACE CONDITIONS](#demo-12a---race-conditions)\n    - [DEMO 12B - DATA RACES](#demo-12b---data-races)\n    - [DEMO 12C - RACE CONDITIONS AND DATA RACES](#demo-12c---race-conditions-and-data-races)\n    - [IMPORTANT NOTES](#important-notes)\n    - [DEMO 13 - MUTEXES](#demo-13---mutexes)\n    - [DEMO 14 - SYNCHRONIZED BLOCKS](#demo-14---synchronized-blocks)\n    - [DEMO 15 - DEADLOCK](#demo-15---deadlock)\n      - [Version A](#version-a)\n      - [Version B](#version-b)\n    - [DEMO 16 - MONITORS](#demo-16---monitors)\n    - [DEMO 17 - REENTRANT LOCKS (RECURSIVE MUTEXES)](#demo-17---reentrant-locks-recursive-mutexes)\n    - [DEMO 18 - BARRIERS AND 
LATCHES](#demo-18---barriers-and-latches)\n    - [DEMO 19 - READ-WRITE LOCKS](#demo-19---read-write-locks)\n    - [DEMO 20A - SEMAPHORES](#demo-20a---semaphores)\n      - [Version A01](#version-a01)\n      - [Version A02](#version-a02)\n      - [Version A03](#version-a03)\n    - [DEMO 20B - SEMAPHORES](#demo-20b---semaphores)\n    - [DEMO 21 - CONDITION VARIABLES](#demo-21---condition-variables)\n    - [DEMO 22 - BLOCKING QUEUES](#demo-22---blocking-queues)\n    - [DEMO 23 - THREAD-LOCAL STORAGE](#demo-23---thread-local-storage)\n    - [DEMO 24 & 25 - THE VOLATILE KEYWORD AND ATOMIC ACCESS](#demo-24--25---the-volatile-keyword-and-atomic-access)\n  - [EXERCISES](#exercises)\n    - [EX01 - MAXIMUM NUMBER OF DIVISORS](#ex01---maximum-number-of-divisors)\n      - [Version A](#version-a-1)\n      - [Version B](#version-b-1)\n      - [Version C](#version-c)\n    - [EX02 - THE PRODUCER-CONSUMER PROBLEM](#ex02---the-producer-consumer-problem)\n    - [EX03 - THE READERS-WRITERS PROBLEM](#ex03---the-readers-writers-problem)\n      - [Problem statement](#problem-statement)\n      - [Problem variations](#problem-variations)\n        - [Second readers-writers problem](#second-readers-writers-problem)\n        - [Third readers-writers problem](#third-readers-writers-problem)\n    - [EX04 - THE DINING PHILOSOPHERS PROBLEM](#ex04---the-dining-philosophers-problem)\n      - [Problem statement](#problem-statement-1)\n      - [Solution](#solution)\n    - [EX05 - MATRIX PRODUCTION](#ex05---matrix-production)\n      - [Version A: Matrix-vector multiplication](#version-a-matrix-vector-multiplication)\n      - [Version B: Matrix-matrix production (dot product)](#version-b-matrix-matrix-production-dot-product)\n    - [EX06 - BLOCKING QUEUE IMPLEMENTATION](#ex06---blocking-queue-implementation)\n    - [EX07 - THE DATA SERVER PROBLEM](#ex07---the-data-server-problem)\n    - [EX08 - EXECUTOR SERVICE & THREAD POOL 
IMPLEMENTATION](#ex08---executor-service--thread-pool-implementation)\n\n&nbsp;\n\n---\n\n&nbsp;\n\n## DEMOSTRATIONS\n\n### DEMO 00 - INTRODUCTION TO MULTITHREADING\n\nJust run the code several times and see results: The results are not the same!!!\n\nThis is because threads execute concurrently. The operating system decides the order of thread execution; depending on its current state, the corresponding result varies.\n\n&nbsp;\n\n### DEMO 01 - HELLO\n\nYou learn how to create a thread. That's all.\n\n&nbsp;\n\n### DEMO 02 - THREAD JOINS\n\nWhen I say \"Thread X waits for thread Y to join\", it means thread X shall wait for thread Y to complete, then thread X continues its execution.\n\n&nbsp;\n\n### DEMO 03 - PASSING ARGUMENTS\n\nYou learn how to pass arguments to a thread:\n\n- Passing various data types of variables.\n- Passing some special types (such as C++ references).\n\n&nbsp;\n\n### DEMO 04 - SLEEP\n\nMaking a thread sleep for a while.\n\nNote that thread sleep is:\n\n- Useful, when you want to wait for something to be ready.\n- Awful, when performance is important: you may waste a lot of resources while the thread is asleep.\n\n&nbsp;\n\n### DEMO 05 - GETTING THREAD'S ID\n\nEach thread has its own identification. This demo helps you learn how to get the thread's id.\n\n&nbsp;\n\n### DEMO 06 - LIST OF MULTIPLE THREADS\n\nHandling a list of multiple threads.\n\nBe careful when you pass arguments to threads due to the variable reference mechanism. In C/C++ you can forget this warning because variables are usually passed by value.\n\nFor example:\n\n```code\nfunction doTask(i) {\n\n}\n\nfor (int i = 0; i < 3; ++i) {\n    th = new Thread(doTask(i));\n    th.start();\n}\n```\n\nThere is only one variable `i` and its reference is passed into thread `th`. In the end, all 3 threads will receive `i = 3` as the parameter.\n\nHow to solve this problem? 
Just create new variables.\n\n```code\nfunction doTask(i) {\n\n}\n\nfor (int i = 0; i < 3; ++i) {\n    arg = i;\n    th = new Thread(doTask(arg));\n    th.start();\n}\n```\n\n&nbsp;\n\n### DEMO 07 - FORCING A THREAD TO TERMINATE\n\nForcing a thread to terminate, aka. \"killing the thread\".\n\nSometimes, we want to force a thread to terminate (for convenience).\n\nHowever, be careful: a thread should terminate by itself, not be killed by external factors. Assume that a thread is using a resource or holding a mutex, and then it is suddenly killed by external factors; the harmful results are:\n\n- The resource may not be disposed/freed.\n- The mutex is never unlocked, which is very likely to lead to deadlock.\n\n&nbsp;\n\n### DEMO 08 - GETTING RETURNED VALUES FROM THREADS\n\nYou learn how to return a value from a thread, and how to use that value for future tasks.\n\nPlease note that if you do not use a synchronization mechanism (e.g. thread join, mutex...) then the result may be incorrect. To be clear, let's see this:\n\n```code\nresult = 0;\n\nfunction doTask() {\n  do something for a while;\n  result = 9;\n}\n\nth = new Thread(doTask);\nth.start();\nprint(result);\n```\n\nWe are not sure that the printed result is `9`, because at the time `print(result)` executes, `th` may not have completed yet (`result = 9` may not have been executed).\n\nSo, we need to wait for `th` to complete before printing the result.\n\n```code\nresult = 0;\n\nfunction doTask() {\n  do something for a while;\n  result = 9;\n}\n\nth = new Thread(doTask);\nth.start();\nth.join(); // wait for th to complete to make sure result is set\nprint(result);\n```\n\n&nbsp;\n\n### DEMO 09 - THREAD DETACHING\n\nWhen a thread is created, one of its attributes defines whether it is joinable or detached. Only threads that are created as joinable can be joined. 
If a thread is created as detached, it can never be joined.\n\n&nbsp;\n\n### DEMO 10 - THREAD YIELDING\n\nA yield is an action, occurring during multithreading, that forces the processor to relinquish control of the currently running thread and send it to the end of the run queue of the same scheduling priority.\n\n&nbsp;\n\n### DEMO 11 - EXECUTOR SERVICES AND THREAD POOLS\n\nYou learn how to use the executor services (and thread pools) and how they work.\n\nFrom Wikipedia:\n\n> A thread pool is a software design pattern for achieving concurrency of execution in a computer program. Often also called a replicated workers or worker-crew model, a thread pool maintains multiple threads waiting for tasks to be allocated for concurrent execution by the supervising program. By maintaining a pool of threads, the model increases performance and avoids latency in execution due to frequent creation and destruction of threads for short-lived tasks.\n\nAn executor service (or a thread-pool executor) includes:\n\n- A thread pool, and\n- Methods to manage this thread pool:\n  - `submit()`: Push tasks to thread pool\n  - `shutdown()`: Stop/join all threads in pool\n  - ...\n\n&nbsp;\n\n### DEMO 12A - RACE CONDITIONS\n\nA race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events.\n\nThis program illustrates a race condition: Each time you run the program, the results displayed are different.\n\n&nbsp;\n\n### DEMO 12B - DATA RACES\n\nA data race specifically refers to non-synchronized conflicting \"memory accesses\" (or actions, or operations) to the same memory location.\n\nI use a problem statement for illustration: From the range [ 1..N ], count the integers which are divisible by 2 or by 3.\n\nFor example, with N = 8, the integers that match the requirements are 2, 3, 4, 6 and 8. 
Hence, the result is five numbers.\n\nOf course, you can easily solve the problem with a single loop. However, we need to go deeper. There is another solution using a \"marking array\".\n\n- Let `a` be an array of boolean values. Initialize all elements in `a` to `false`.\n- For `i` in range `1` to `N`, if `i` is divisible by `2` or by `3`, then assign `a[i] = true`.\n- Finally, the result is the number of `true` values in `a`.\n\nWith `N = 8`, `a[2], a[3], a[4], a[6], a[8]` are marked `true`.\n\n&nbsp;\n\nAbout the source code, there are two versions:\n\n- Version 01 uses traditional single threading to let you get started.\n\n- Version 02 uses multithreading.\n  - Thread `markDiv2` will mark `true` for all numbers divisible by 2.\n  - Thread `markDiv3` will mark `true` for all numbers divisible by 3.\n  - **The rules of threading tell us that `a[6]` might be accessed by both threads at the same time ==> DATA RACE**. However, in the end, `a[6] = true` is obvious, so the final result is still correct ==> no race condition.\n\n&nbsp;\n\n### DEMO 12C - RACE CONDITIONS AND DATA RACES\n\nMany people are confused about race conditions and data races.\n\n- There is a case where a race condition appears without a data race. That is demo version A.\n- There is also a case where a data race appears without a race condition. That is demo version B.\n\n*Small note: Looking from a deeper perspective, demo version A still causes a data race (that is... the output console terminal, hahaha).*\n\nUsually, a race condition happens together with a data race. A race condition often occurs when two or more threads need to perform operations on the same memory area (data race) but the results of the computations depend on the order in which these operations are performed.\n\nConcurrent accesses to shared resources can lead to unexpected or erroneous behavior, so parts of the program where the shared resource is accessed need to be protected in ways that avoid the concurrent access. 
This protected section is the **critical section** or **critical region**.\n\n&nbsp;\n&nbsp;\n\n### IMPORTANT NOTES\n\n&nbsp;\n\nMultithreading makes things run in parallel/concurrently. Therefore we need techniques that handle the control flow to:\n\n- make sure the app runs correctly, and\n- avoid race conditions.\n\nThere are several techniques, which are divided into two types: synchronization and non-synchronization.\n\n|             | SYNCHRONIZATION | NON-SYNCHRONIZATION |\n| ----------- | --------------- | ------------------- |\n| Description | To block threads until a condition is satisfied | Not to block threads |\n| Other names | Blocking | Non-blocking, lock-free |\n| Techniques  | - Low-level: Mutex, semaphore, condition variable<br>- High-level: Synchronized block, blocking queue, barrier and latch | Atomic, thread-local storage |\n| Pros        | - Give you in-depth controls<br>- Cooperate among threads | - App's performance may be better (compared to synchronization)<br>- Avoid deadlock |\n| Cons        | - Hard to control in complex synchronization<br>- May be dangerous (when deadlock appears) | Usually too simple |\n\n&nbsp;\n\nPlease note that we cannot replace sync techniques with non-sync techniques and vice versa. Each technique has its use cases, strengths and weaknesses.\n\nIn practice:\n\n- We mostly use synchronized blocks, blocking queues, atomic operations and mutexes.\n- We usually combine sync and non-sync techniques.\n\n&nbsp;\n\n`------------------ End of important notes ------------------`\n\n&nbsp;\n&nbsp;\n\n### DEMO 13 - MUTEXES\n\nThe mutex is the synchronization primitive used to prevent race conditions.\n\nIf a resource is subject to race conditions (i.e. 
it is accessed by more than one thread), we create a mutex associated with it:\n\n- Lock the mutex before accessing the resource.\n- Release the mutex after accessing the resource.\n\nThe code block between the lock and release actions shall be executed by one thread at a time.\n\n&nbsp;\n\n### DEMO 14 - SYNCHRONIZED BLOCKS\n\nSynchronized blocks are blocks of code that only one thread can execute at a time, even though multiple threads are running this block. A synchronized block can be made by using a mutex:\n\n- Lock the mutex at the beginning of the block.\n- Unlock the mutex at the end of the block.\n\nIn some programming languages, such as Java and C#, synchronized blocks are supported natively.\n\n&nbsp;\n\n### DEMO 15 - DEADLOCK\n\n#### Version A\n\nA simple demo of deadlock: Forgetting to release a mutex.\n\n&nbsp;\n\n#### Version B\n\nThere are 2 workers, \"foo\" and \"bar\".\n\nThey try to access resources A and B (which are protected by mutResourceA and mutResourceB).\n\nScenario:\n\n```text\nfoo():\n    synchronized:\n        access resource A\n\n        synchronized:\n            access resource B\n\nbar():\n    synchronized:\n        access resource B\n\n        synchronized:\n            access resource A\n```\n\nAfter foo accesses A and bar accesses B, foo and bar might wait for each other forever ==> Deadlock occurs.\n\n&nbsp;\n\n### DEMO 16 - MONITORS\n\nMonitor: Concurrent programming meets object-oriented programming.\n\n- When concurrent programming became a big deal, so did object-oriented programming.\n- People started to think about ways to make concurrent programming more structured.\n\nA monitor is a thread-safe class, object, or module that wraps around a mutex in order to safely allow access to a method or variable by more than one thread.\n\n&nbsp;\n\n### DEMO 17 - REENTRANT LOCKS (RECURSIVE MUTEXES)\n\nThe reason for using a reentrant lock is to avoid a deadlock due to e.g. 
recursion.\n\nA reentrant lock is a synchronization primitive that may be acquired multiple times by the same thread. Internally, it uses the concepts of \"owning thread\" and \"recursion level\" in addition to the locked/unlocked state used by primitive locks.\n\nIn the locked state, some thread owns the lock; in the unlocked state, no thread owns it.\n\n&nbsp;\n\n### DEMO 18 - BARRIERS AND LATCHES\n\nIn cases where you must wait for a number of tasks to be completed before an overall task can proceed, barrier synchronization can be used.\n\nThere are two types of barriers:\n\n- Cyclic barrier: A general, *reusable barrier*.\n- Count down latch: There is no possibility to increase or reset the counter, which makes the latch a *single-use barrier*.\n\n&nbsp;\n\n### DEMO 19 - READ-WRITE LOCKS\n\nIn many situations, data is read more often than it is modified or written. In these cases, you can allow threads to read concurrently while holding the lock and allow only one thread to hold the lock when data is modified. A multiple-reader single-writer lock (or read-write lock) does this.\n\nA read-write lock is acquired either for reading or writing, and then is released. The thread that acquires the read-write lock must be the one that releases it.\n\n&nbsp;\n\n### DEMO 20A - SEMAPHORES\n\n#### Version A01\n\nIn an exam, each candidate is given a couple of 2 scratch papers. 
Write a program to illustrate this scenario.\n\nThe program will combine 2 scratch papers into one test package, concurrently.\n\n&nbsp;\n\n#### Version A02\n\nThe problem in version 01 is:\n\n- When \"makeOnePaper\" produces too fast, there are a lot of pending papers...\n\nThis version 02 solves the problem:\n\n- Use a semaphore to restrict \"makeOnePaper\": Only make papers when a package is finished.\n\n&nbsp;\n\n#### Version A03\n\nThe problem in this version is DEADLOCK, due to a mistake in semaphore synchronization.\n\n&nbsp;\n\n### DEMO 20B - SEMAPHORES\n\nA car is manufactured at each stop on a conveyor belt in a car factory.\n\nA car is constructed from the following parts: chassis, tires. Thus there are 2 tasks in manufacturing a car. However, 4 tires cannot be added until the chassis is placed on the belt.\n\nThere are:\n\n- 2 production lines (i.e. 2 threads) of making tires.\n- 1 production line (i.e. 1 thread) of making chassis.\n\nWrite a program to illustrate this scenario.\n\n&nbsp;\n\n### DEMO 21 - CONDITION VARIABLES\n\nCondition variables are synchronization primitives that enable threads to wait until a particular condition occurs. Condition variables are user-mode objects that cannot be shared across processes.\n\n&nbsp;\n\n### DEMO 22 - BLOCKING QUEUES\n\nA blocking queue is a queue that blocks when you:\n\n- try to dequeue from it and the queue is empty, or...\n- try to enqueue items to it and the queue is already full.\n\n\"Synchronous queue\" is usually a synonym of \"blocking queue\". In Java, a synchronous queue has zero capacity (i.e. it does not store any value at all).\n\n&nbsp;\n\n### DEMO 23 - THREAD-LOCAL STORAGE\n\nIn some cases, shared resources could be used individually by each thread. Every thread has its own copy of the shared resources. 
Therefore, the race condition disappears.\n\n```text\nBEFORE:\n            X = 8          (X is shared resource)\n              |\n    ---------------------\n    |         |         |\n    v         v         v\n ThreadA   ThreadB   ThreadC\n\n\nAFTER:\n    ---------------------\n    |         |         |\n    v         v         v\n ThreadA   ThreadB   ThreadC\n  X = 8     X = 8     X = 8\n```\n\nThread-local storage helps you to **avoid synchronization**, because synchronization might be dangerous and hard to handle.\n\nApplications of thread-local storage:\n\n- Counter: Each thread does its own counting job.\n\n- Security: The random functions often use an initialization seed. When multiple threads call a random function, results may be the same for some threads. This may lead to security issues. Using synchronization of course solves this problem, but it adds too much overhead. In this case, using thread-local storage is great. Each thread uses an individual random function, which has a different random seed.\n\nIn the demo code, by using thread-local storage, each thread has its own counter. So, the counters of different threads are completely independent of one another.\n\n&nbsp;\n\n### DEMO 24 & 25 - THE VOLATILE KEYWORD AND ATOMIC ACCESS\n\nPlease read the article \"Volatile vs Atomic\" in [notes-articles.md](notes-articles.md) for better understanding.\n\n&nbsp;\n\n---\n\n&nbsp;\n\n## EXERCISES\n\n### EX01 - MAXIMUM NUMBER OF DIVISORS\n\nProblem statement: Find the integer in the range 1 to 100000 that has the largest number of divisors.\n\n&nbsp;\n\n#### Version A\n\nThe solution without multithreading.\n\n&nbsp;\n\n#### Version B\n\nThis source code file contains the solution using multithreading.\n\nThere are 2 phases:\n\n- Phase 1:\n  - Each worker finds the result on a specific range.\n  - This phase uses multiple threads.\n\n- Phase 2:\n  - Based on the multiple results from workers, the main function gets the final result with maximum numDiv.\n  - This phase uses a single thread (i.e. 
main function).\n\n&nbsp;\n\n#### Version C\n\nThe difference between version C and version B is:\n\n- Each worker finds result on a specific range, and then updates final result itself.\n- So, main function does nothing.\n\n&nbsp;\n\n### EX02 - THE PRODUCER-CONSUMER PROBLEM\n\nThe producer–consumer problem (also known as the bounded-buffer problem) is a classic example of a multi-process synchronization problem. The first version of which was proposed by Edsger W. Dijkstra in 1965.\n\nIn the producer-consumer problem, there is one producer that is producing something and there is one consumer that is consuming the products produced by the producer. The producers and consumers share the same memory buffer that is of fixed-size.\n\nThe job of the producer is to generate the data, put it into the buffer, and again start generating data. While the job of the consumer is to consume the data from the buffer.\n\nIn the later formulation of the problem, Dijkstra proposed multiple producers and consumers sharing a finite collection of buffers.\n\n**What are the problems here?**\n\n- The producer and consumer should not access the buffer at the same time.\n\n- The producer should produce data only when the buffer is not full.\n  - If the buffer is full, then the producer shouldn't be allowed to put any data into the buffer.\n\n- The consumer should consume data only when the buffer is not empty.\n  - If the buffer is empty, then the consumer shouldn't be allowed to take any data from the buffer.\n\n&nbsp;\n\n### EX03 - THE READERS-WRITERS PROBLEM\n\n#### Problem statement\n\nConsider a situation where we have a file shared between many people.\n\nIf one of the people tries editing the file, no other person should be reading or writing at the same time, otherwise changes will not be visible to him/her. 
However, if someone is reading the file, then others may read it at the same time.\n\nIn computer science, this situation is known as the readers-writers problem.\n\n**What are the problems here?**\n\n- One set of data is shared among a number of processes.\n- Once a writer is ready, it performs its write. Only one writer may write at a time.\n- If a process is writing, no other process can read it.\n- If at least one reader is reading, no other process can write.\n\n&nbsp;\n\n#### Problem variations\n\n##### Second readers-writers problem\n\nThe first (readers-preference) solution is suboptimal, because it is possible that a reader R1 might hold the lock, a writer W might be waiting for the lock, and then a reader R2 requests access.\n\nIt would be unfair for R2 to jump in immediately, ahead of W; if that happened often enough, W would STARVE. Instead, W should start as soon as possible.\n\nThis is the motivation for the second readers–writers problem, in which the constraint is added that no writer, once added to the queue, shall be kept waiting longer than absolutely necessary. This is also called writers-preference.\n\n##### Third readers-writers problem\n\nIn fact, the solutions implied by both problem statements can result in starvation - the first one may starve writers in the queue, and the second one may starve readers.\n\nTherefore, the third readers–writers problem is sometimes proposed, which adds the constraint that no thread shall be allowed to starve; that is, the operation of obtaining a lock on the shared data will always terminate in a bounded amount of time.\n\nSolution:\n\n- The idea is to use a semaphore \"serviceQueue\" to preserve the ordering of requests (signaling must be FIFO).\n\n&nbsp;\n\n### EX04 - THE DINING PHILOSOPHERS PROBLEM\n\n#### Problem statement\n\nThe dining philosophers problem states that there are 5 philosophers sharing a circular table, and they alternately eat and think. 
There is a bowl of rice for each of the philosophers and 5 chopsticks.\n\nA philosopher needs both their right and left chopstick to eat.\n\nA hungry philosopher may only eat if both chopsticks are available.\n\nOtherwise, the philosopher puts down their chopstick and begins thinking again.\n\n&nbsp;\n\n#### Solution\n\nA solution to the dining philosophers problem is to use a semaphore to represent a chopstick.\n\nA chopstick can be picked up by executing a wait operation on the semaphore and released by executing a signal operation on the semaphore.\n\nThe structure of an arbitrary philosopher `i` is given as follows:\n\n```pseudocode\nwhile true do\n    wait( chopstick[i] );\n    wait( chopstick[ (i+1) % 5] );\n\n    EATING THE RICE\n\n    signal( chopstick[i] );\n    signal( chopstick[ (i+1) % 5] );\n\n    THINKING\n```\n\n**What are the problems here?**\n\n- Deadlock.\n- Starvation.\n\nThe above solution makes sure that no two neighboring philosophers can eat at the same time. But this solution can lead to a deadlock. This may happen if all the philosophers pick up their left chopstick simultaneously. Then none of them can eat, and deadlock occurs.\n\nSome of the ways to avoid deadlock are as follows:\n\n- An even philosopher should pick up the right chopstick and then the left chopstick, while an odd philosopher should pick up the left chopstick and then the right chopstick.\n- A philosopher should only be allowed to pick up their chopsticks if both are available at the same time.\n\n&nbsp;\n\n### EX05 - MATRIX MULTIPLICATION\n\n#### Version A: Matrix-vector multiplication\n\nFor example:\n\nMatrix A:\n\n```text\n|   1   2   3   |\n|   4   5   6   |\n|   7   8   9   |\n```\n\nVector b:\n\n```text\n|   3   |\n|   -1  |\n|   0   |\n```\n\nThe multiplication of A and b is the vector:\n\n```text\n|   1   |\n|   7   |\n|   13  |\n```\n\n**Solution:**\n\n- Separate matrix A into a list of rows.\n- For each row, calculate its scalar product with vector b.\n- We can process each row individually. 
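The row-by-row approach can be sketched in Python (the names here are my own, not from the exercise code); each thread computes the scalar product of one row with b and writes it into its own output slot, so no lock is needed:

```python
import threading

def row_dot(row, vec, out, i):
    # Scalar product of one matrix row with the vector;
    # each thread writes only its own slot, so no lock is needed.
    out[i] = sum(r * v for r, v in zip(row, vec))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [3, -1, 0]
result = [0] * len(A)

# One thread per row
threads = [threading.Thread(target=row_dot, args=(row, b, result, i))
           for i, row in enumerate(A)]
for th in threads:
    th.start()
for th in threads:
    th.join()

print(result)  # [1, 7, 13]
```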
Therefore, multithreading will get the job done.\n\n&nbsp;\n\n#### Version B: Matrix-matrix multiplication (dot product)\n\nFor example:\n\nMatrix A:\n\n```text\n|   1   3   5   |\n|   2   4   6   |\n```\n\nMatrix B:\n\n```text\n|   1   0   1   0   |\n|   0   1   0   1   |\n|   1   0   0   -2  |\n```\n\nThe result of dot(A, B) is the matrix:\n\n```text\n|   6   3   1   -7  |\n|   8   4   2   -8  |\n```\n\n&nbsp;\n\n### EX06 - BLOCKING QUEUE IMPLEMENTATION\n\nBlocking queues strongly relate to the producer-consumer problem:\n\n- The `enqueue` method corresponds to the `produce` action, which creates an object and pushes it onto the rear of the queue.\n- The `dequeue` method corresponds to the `consume` action, which removes an object from the front of the queue.\n\nThere are many ways to implement the producer-consumer problem, and they are similar to the ways of implementing blocking queues.\n\n&nbsp;\n\n### EX07 - THE DATA SERVER PROBLEM\n\nThe internal data server of a company performs two main tasks for a request:\n\n- check_auth_user()\n- process_files()\n\nThe pseudo code is:\n\n```pseudo\nfunction process_request():\n    if check_auth_user(), then:\n        list_file_data = process_files()\n        return list_file_data\n\n\nfunction check_auth_user():\n    check username, permissions, encryption...\n    return (true or false)\n\n\nfunction process_files():\n    read file data from disk\n    do some stuff...\n    write log\n    return file data\n```\n\n&nbsp;\n\n**The problem:** Usually, the two tasks might take a long time. How can we improve the performance? 
Suppose this server serves the company's employees internally.\n\nWe can assume that `check_auth_user()` usually returns true, so instead of waiting a long time for `check_auth_user()`, we can run `process_files()` in parallel.\n\n```text\nBEFORE:\n    Main thread  ------------------->  ------------------->\n                 check_auth_user()     process_files()\n\n\nAFTER:\n                                        Thread join\n                                        Synchronize\n    Main thread  ------------------->----------------->\n                 check_auth_user()           ^\n                                             |\n    Child thread ------------------->---------\n                 process_files()\n```\n\n&nbsp;\n\n**The most important & interesting things** are here:\n\n**The first thing:**\n\n- Checking for authorization is CPU bound (and maybe network bandwidth bound).\n- Processing files is disk bound.\n\nBy running `check_auth_user()` and `process_files()` in parallel, we can increase the performance.\n\n**The second thing:**\n\nFunction `process_files()` not only **reads files** but also **writes logs**. After reading the files, we can return the file data to the user immediately. *The \"writing logs\" task can be performed later*.\n\n```text\n                         Synchronize\n      ------------------------|----------------------------> Time\n CPU  Check auth user         |             Do other tasks\nDisk  Read file               |             Write log\n                       User is authorized\n                      and files are loaded\n```\n\nLet's have a look at the code. I hope you can learn something useful.\n\n&nbsp;\n\n### EX08 - EXECUTOR SERVICE & THREAD POOL IMPLEMENTATION\n\nTo implement an executor service, you need to focus on the worker function (the function that is executed by the threads in the pool). For an idle thread:\n\n1. Pick a task (a job) from the queue.\n2. 
Do the task.\n\nStep 1 requires synchronization (by a blocking queue, or a queue with a mutex/synchronized block/condition variable...).\n\nIt looks simple, but if we need to expand the features of our thread pool, things start to get complicated. You may have to take care of synchronization everywhere:\n\n- When a task is dequeued.\n- When a task is done.\n- When all tasks are done.\n- When users want to shut down the thread pool.\n- ...\n"
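A minimal sketch of such a worker function in Python (`queue.Queue` provides the blocking behavior; the `None` shutdown sentinel and all names here are my own conventions, not the exercise's code):

```python
import queue
import threading

task_queue = queue.Queue()   # Blocking queue holding submitted tasks

def worker():
    # The worker function: every pool thread runs this loop.
    while True:
        task = task_queue.get()   # Step 1: pick a task (blocks while the queue is empty)
        if task is None:          # Shutdown sentinel (my own convention)
            task_queue.task_done()
            break
        task()                    # Step 2: do the task
        task_queue.task_done()

NUM_WORKERS = 2
pool = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for th in pool:
    th.start()

# Submit a few tasks; each task writes to its own slot, so no extra lock is needed
results = [None] * 5

def make_task(i):
    def task():
        results[i] = i * i
    return task

for i in range(5):
    task_queue.put(make_task(i))

task_queue.join()                 # Wait until every submitted task is done
for _ in pool:
    task_queue.put(None)          # One sentinel per worker shuts the pool down
for th in pool:
    th.join()

print(results)  # [0, 1, 4, 9, 16]
```

Expanding this sketch with features like "wait for all tasks" or "shut down" is exactly where the extra synchronization listed above comes in.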
  },
  {
    "path": "old/cpppthread-reentrant-lock-a.cpp",
    "content": "/*\nREENTRANT LOCK (RECURSIVE MUTEX)\n\nThe function \"getFactorial\" will cause deadlock.\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\npthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;\n\n\n\nint getFactorial(int n) {\n    if (n <= 0)\n        return 1;\n\n    pthread_mutex_lock(&mut);\n\n    int result = n * getFactorial(n - 1);\n\n    pthread_mutex_unlock(&mut);\n\n    return result;\n}\n\n\n\nvoid* routine(void* arg) {\n    int n = *(int*)arg;\n\n    int factorial = getFactorial(n);\n\n    cout << \"Factorial of \" << n << \" is \" << factorial << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    int n = 5;\n\n    pthread_t tid;\n    int ret = 0;\n\n    ret = pthread_create(&tid, nullptr, routine, &n);\n    ret = pthread_join(tid, nullptr);\n\n    pthread_mutex_destroy(&mut);\n\n    return 0;\n}\n"
  },
  {
    "path": "old/cpppthread-reentrant-lock-b.cpp",
"content": "/*\nREENTRANT LOCK (RECURSIVE MUTEX)\n*/\n\n\n#include <iostream>\n#include <pthread.h>\nusing namespace std;\n\n\n\n// Initialized in main() with the recursive attribute\npthread_mutex_t mut;\n\n\n\nint getFactorial(int n) {\n    if (n <= 0)\n        return 1;\n\n    pthread_mutex_lock(&mut);\n\n    int result = n * getFactorial(n - 1);\n\n    pthread_mutex_unlock(&mut);\n\n    return result;\n}\n\n\n\nvoid* routine(void* arg) {\n    int n = *(int*)arg;\n\n    int factorial = getFactorial(n);\n\n    cout << \"Factorial of \" << n << \" is \" << factorial << endl;\n\n    pthread_exit(nullptr);\n    return nullptr;\n}\n\n\n\nint main() {\n    int n = 5;\n\n    pthread_t tid;\n    int ret = 0;\n\n    pthread_mutexattr_t attr;\n    pthread_mutexattr_init(&attr);\n    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);\n    pthread_mutex_init(&mut, &attr);\n\n    ret = pthread_create(&tid, nullptr, routine, &n);\n    ret = pthread_join(tid, nullptr);\n\n    pthread_mutexattr_destroy(&attr);\n    pthread_mutex_destroy(&mut);\n\n    return 0;\n}\n"
  },
  {
    "path": "old/cppstd-data-race.cpp",
    "content": "#include <iostream>\n#include <string>\n#include <fstream>\n#include <thread>\n#include \"mytool-time.hpp\"\nusing namespace std;\n\n\n\nvoid writeToFile(string fileName, string programName) {\n    ofstream ofs;\n    ofs.open(fileName, ios::app);\n\n    if (ofs.fail()) {\n        return;\n    }\n\n    ofs << programName << endl;\n    ofs.close();\n}\n\n\n\nint main(int argc, char* argv[]) {\n    if (argc < 3) {\n        cout << \"Please run program with 2 arguments to specify:\" << endl;\n        cout << \"\\tArgument 1: Waiting seconds (positive integer)\" << endl;\n        cout << \"\\tArgument 2: Program name (string)\" << endl;\n        return 0;\n    }\n\n\n    const string fileName = \"tmp-output.txt\";\n    const int waitingSeconds = std::stoi(argv[1]);\n    const string programName = string(argv[2]);\n\n\n    cout << \"This program name is \" << programName << endl;\n    cout << \"Please run this program twice to achieve 'data race'\" << endl;\n\n\n    auto tpFutureWakeUp = mytool::getTimePointFutureFloor(\n        std::chrono::system_clock::now(),\n        waitingSeconds\n    );\n\n\n    cout << \"Program will sleep until \" << mytool::getTimePointStr(tpFutureWakeUp) << endl;\n\n\n    std::this_thread::sleep_until(tpFutureWakeUp);\n\n\n    cout << \"Writing to the file \" << fileName << \"...\" << endl;\n    writeToFile(fileName, programName);\n\n\n    return 0;\n}\n"
  },
  {
    "path": "old/cppstd-reentrant-lock-a.cpp",
    "content": "/*\nREENTRANT LOCK (RECURSIVE MUTEX)\n\nThe function \"getFactorial\" will cause deadlock.\n*/\n\n\n#include <iostream>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::mutex mut;\n\n\n\nint getFactorial(int n) {\n    if (n <= 0)\n        return 1;\n\n    mut.lock();\n\n    int result = n * getFactorial(n - 1);\n\n    mut.unlock();\n\n    return result;\n}\n\n\n\nvoid routine(int n) {\n    int factorial = getFactorial(n);\n    cout << \"Factorial of \" << n << \" is \" << factorial << endl;\n}\n\n\n\nint main() {\n    int n = 5;\n\n    auto th = std::thread(routine, n);\n\n    th.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "old/cppstd-reentrant-lock-b.cpp",
    "content": "/*\nREENTRANT LOCK (RECURSIVE MUTEX)\n*/\n\n\n#include <iostream>\n#include <thread>\n#include <mutex>\nusing namespace std;\n\n\n\nstd::recursive_mutex mut;\n\n\n\nint getFactorial(int n) {\n    if (n <= 0)\n        return 1;\n\n    mut.lock();\n\n    int result = n * getFactorial(n - 1);\n\n    mut.unlock();\n\n    return result;\n}\n\n\n\nvoid routine(int n) {\n    int factorial = getFactorial(n);\n    cout << \"Factorial of \" << n << \" is \" << factorial << endl;\n}\n\n\n\nint main() {\n    int n = 5;\n\n    auto th = std::thread(routine, n);\n\n    th.join();\n\n    return 0;\n}\n"
  },
  {
    "path": "python/.gitignore",
    "content": "__pycache__/\n"
  },
  {
    "path": "python/demo00.py",
    "content": "'''\nINTRODUCTION TO MULTITHREADING\nYou should try running this app several times and see results.\n'''\n\nimport threading\n\n\n\ndef do_task():\n    for _ in range(300):\n        print('B', end='')\n\n\n\nth = threading.Thread(target=do_task)\nth.start()\n\nfor _ in range(300):\n    print('A', end='')\n\nth.join()\nprint()\n"
  },
  {
    "path": "python/demo01_hello.py",
    "content": "'''\nHELLO WORLD VERSION MULTITHREADING\n'''\n\nimport threading\n\n\n\ndef do_task():\n    print('Hello from example thread')\n\n\n\nth = threading.Thread(target=do_task)\nth.start()\n\nprint('Hello from main thread')\n"
  },
  {
    "path": "python/demo01ex_name.py",
    "content": "'''\nHELLO WORLD VERSION MULTITHREADING\nGetting thread's name\n'''\n\nimport threading\n\n\n\ndef do_task():\n    print(f'My name is {threading.current_thread().name}')\n\n\n\nth_foo = threading.Thread(target=do_task, name='foo')\nth_bar = threading.Thread(target=do_task, name='bar')\nth_foo.start()\nth_bar.start()\n"
  },
  {
    "path": "python/demo02a_join.py",
    "content": "'''\nTHREAD JOINS\n'''\n\nimport threading\n\n\n\ndef do_heavy_task():\n    # Do a heavy task, which takes a little time\n    for _ in range(0, 2 * 10**8):\n        pass\n\n    print('Done!')\n\n\n\nth = threading.Thread(target=do_heavy_task)\n\nth.start()\nth.join()\n\nprint('Good bye!')\n"
  },
  {
    "path": "python/demo02b_join.py",
    "content": "'''\nTHREAD JOINS\n'''\n\nimport threading\n\n\n\nth_foo = threading.Thread(target=lambda:print('foo'))\nth_bar = threading.Thread(target=lambda:print('bar'))\n\nth_foo.start()\nth_bar.start()\n\n# th_foo.join()\n# th_bar.join()\n\n'''\nWe do not need to call th_foo.join() and th_bar.join().\nThe reason is main thread will wait for the completion of all threads before app exits.\n'''\n"
  },
  {
    "path": "python/demo03a_pass_arg.py",
    "content": "'''\nPASSING ARGUMENTS\nVersion A: Using the Thread's constructor\n'''\n\nimport threading\n\n\n\ndef do_task(a: int, b: float, c: str):\n    print(f'{a}  {b}  {c}')\n\n\n\nth_foo = threading.Thread(target=do_task, args=(1, 2, 'red'))\nth_bar = threading.Thread(target=do_task, args=(3, 4, 'blue'))\n\nth_foo.start()\nth_bar.start()\n"
  },
  {
    "path": "python/demo03b_pass_arg.py",
    "content": "'''\nPASSING ARGUMENTS\nVersion B: Using lambdas\n'''\n\nimport threading\n\n\n\ndef do_task(a: int, b: float, c: str):\n    print(f'{a}  {b}  {c}')\n\n\n\nth_foo = threading.Thread(target=lambda:do_task(1, 2, 'red'))\nth_bar = threading.Thread(target=lambda:do_task(3, 4, 'blue'))\n\nth_foo.start()\nth_bar.start()\n"
  },
  {
    "path": "python/demo04_sleep.py",
    "content": "'''\nSLEEP\n'''\n\nimport time\nimport threading\n\n\n\ndef do_task():\n    print('foo is sleeping')\n    time.sleep(3)\n    print('foo wakes up')\n\n\n\nth_foo = threading.Thread(target=do_task)\n\nth_foo.start()\nth_foo.join()\n\nprint('Good bye')\n"
  },
  {
    "path": "python/demo05_id.py",
    "content": "'''\nGETTING THREAD'S ID\n'''\n\nimport time\nimport threading\n\n\n\ndef do_task():\n    time.sleep(1)\n    tid = threading.get_ident()\n    tid_native = threading.get_native_id()\n    print('id of current thread:', tid)\n    print('native id of current thread from operating system:', tid_native)\n\n\n\nth_foo = threading.Thread(target=do_task)\nth_bar = threading.Thread(target=do_task)\n\nth_foo.start()\nth_bar.start()\n\nprint(\"foo's id:\", th_foo.ident)\nprint(\"foo's native id:\", th_foo.native_id)\nprint(\"bar's id:\", th_bar.ident)\nprint(\"bar's native id:\", th_bar.native_id)\n"
  },
  {
    "path": "python/demo06_list_threads.py",
    "content": "'''\nLIST OF MULTIPLE THREADS\n'''\n\nimport time\nimport threading\n\n\n\ndef do_task(index):\n    time.sleep(0.5)\n    print(index, end='')\n\n\n\nNUM_THREADS = 5\nlstth = []\n\nfor i in range(NUM_THREADS):\n    lstth.append(threading.Thread( target=do_task, args=(i,) ))\n\nfor th in lstth:\n    th.start()\n"
  },
  {
    "path": "python/demo07_terminate.py",
    "content": "'''\nFORCING A THREAD TO TERMINATE (i.e. killing the thread)\nUsing a flag to notify the thread\n'''\n\nimport time\nimport threading\n\n\n\nflag_stop = False\n\n\n\ndef do_task():\n    while True:\n        if flag_stop:\n            break\n\n        print('Running...')\n        time.sleep(1)\n\n\n\nth = threading.Thread(target=do_task)\nth.start()\n\ntime.sleep(3)\nflag_stop = True\n"
  },
  {
    "path": "python/demo08a_return_value.py",
"content": "'''\nGETTING RETURNED VALUES FROM THREADS\n'''\n\nimport threading\n\n\n\ndef double_value(value):\n    return value * 2\n\n\n\nres = {}\n\nth_foo = threading.Thread( target=lambda: res.update({ 'foo': double_value(5) }) )\nth_bar = threading.Thread( target=lambda: res.update({ 'bar': double_value(80) }) )\n\nth_foo.start()\nth_bar.start()\n\n# Wait until th_foo and th_bar finish\nth_foo.join()\nth_bar.join()\n\nprint(res['foo'])\nprint(res['bar'])\n"
  },
  {
    "path": "python/demo08b_return_value.py",
"content": "'''\nGETTING RETURNED VALUES FROM THREADS\n'''\n\nimport threading\n\n\n\ndef double_value(result: dict, name: str, value):\n    result[name] = value * 2\n\n\n\nres = {}\n\nth_foo = threading.Thread( target=double_value, args=(res, 'foo', 5) )\nth_bar = threading.Thread( target=double_value, args=(res, 'bar', 80) )\n\nth_foo.start()\nth_bar.start()\n\n# Wait until th_foo and th_bar finish\nth_foo.join()\nth_bar.join()\n\nprint(res['foo'])\nprint(res['bar'])\n"
  },
  {
    "path": "python/demo09_detach.py",
    "content": "'''\nTHREAD DETACHING\n'''\n\nimport time\nimport threading\n\n\n\ndef do_task():\n    print('foo is starting...')\n    time.sleep(2)\n    print('foo is exiting...')\n\n\n\nth_foo = threading.Thread(target=do_task, daemon=True)\nth_foo.start()\n\n# If I comment this statement,\n# th_foo will be forced into terminating with main thread\ntime.sleep(3)\n\nprint('Main thread is exiting')\n"
  },
  {
    "path": "python/demo10_yield.py",
    "content": "'''\nTHREAD YIELDING\n\nI think that thread yielding does not make sense in Python.\nNo demo available.\n'''\n"
  },
  {
    "path": "python/demo11a_exec_service.py",
    "content": "'''\nEXECUTOR SERVICES AND THREAD POOLS\nVersion A: The executor service (of which thread pool) containing a single thread\n'''\n\nfrom concurrent.futures import ThreadPoolExecutor\n\n\n\ndef do_task():\n    print('Hello the Executor Service')\n\n\n\nexecutor = ThreadPoolExecutor(max_workers=1)\n\nexecutor.submit(lambda: print('Hello World'))\nexecutor.submit(do_task)\n\nexecutor.shutdown(wait=True)\n"
  },
  {
    "path": "python/demo11b_exec_service.py",
    "content": "'''\nEXECUTOR SERVICES AND THREAD POOLS\nVersion B: The executor service containing multiple threads\n'''\n\nimport time\nfrom concurrent.futures import ThreadPoolExecutor\n\n\n\ndef do_task(name: str):\n    print(f'Task {name} is starting')\n    time.sleep(3)\n    print(f'Task {name} is completed')\n\n\n\nNUM_TASKS = 5\nexecutor = ThreadPoolExecutor(max_workers=2)\n\nfor i in range(NUM_TASKS):\n    task_name = chr(i + 65)\n    executor.submit(do_task, task_name)\n\nexecutor.shutdown(wait=True)\n"
  },
  {
    "path": "python/demo11c01_exec_service.py",
    "content": "'''\nEXECUTOR SERVICES AND THREAD POOLS\nVersion C01: The executor service and Future objects\n'''\n\nfrom concurrent.futures import ThreadPoolExecutor\n\n\n\ndef get_squared(x):\n    return x * x\n\n\n\nexecutor = ThreadPoolExecutor(max_workers=1)\n\nfuture = executor.submit(get_squared, 7)\n# print(future.done())\n\nprint(future.result())\n\nexecutor.shutdown(wait=True)\n"
  },
  {
    "path": "python/demo11c02_exec_service.py",
    "content": "'''\nEXECUTOR SERVICES AND THREAD POOLS\nVersion C02: The executor service and Future objects\n'''\n\nimport time\nfrom concurrent.futures import ThreadPoolExecutor\n\n\n\ndef get_squared(x):\n    time.sleep(3)\n    return x * x\n\n\n\nexecutor = ThreadPoolExecutor(max_workers=1)\n\nfuture = executor.submit(get_squared, 7)\n\nprint('Calculating...')\nprint(future.result())\n\nexecutor.shutdown(wait=True)\n"
  },
  {
    "path": "python/demo12a_race_condition.py",
    "content": "'''\nRACE CONDITIONS\n'''\n\nimport time\nimport threading\n\n\n\ndef do_task(index: int):\n    time.sleep(1)\n    print(index, end='')\n\n\n\nNUM_THREADS = 4\nlstth = []\n\nfor i in range(NUM_THREADS):\n    lstth.append(threading.Thread( target=do_task, args=(i,) ))\n\nfor th in lstth:\n    th.start()\n"
  },
  {
    "path": "python/demo12b01_data_race_single.py",
    "content": "'''\nDATA RACES\nVersion 01: Without multithreading\n'''\n\n\n\ndef get_result(n: int):\n    a = [False] * (n + 1)\n\n    for i in range(1, n + 1):\n        if i % 2 == 0 or i % 3 == 0:\n            a[i] = True\n\n    res = a.count(True)\n    return res\n\n\n\nN = 8\nresult = get_result(N)\nprint('Number of integers that are divisible by 2 or 3 is:', result)\n"
  },
  {
    "path": "python/demo12b02_data_race_multi.py",
    "content": "'''\nDATA RACES\nVersion 02: Multithreading\n'''\n\nimport threading\n\n\n\ndef count_div_2(a: list, n: int):\n    for i in range(2, n + 1, 2):\n        a[i] = True\n\n\n\ndef count_div_3(a: list, n: int):\n    for i in range(3, n + 1, 3):\n        a[i] = True\n\n\n\nN = 8\nA = [False] * (N + 1)\n\nth_div_2 = threading.Thread(target=count_div_2, args=(A, N))\nth_div_3 = threading.Thread(target=count_div_3, args=(A, N))\n\nth_div_2.start()\nth_div_3.start()\nth_div_2.join()\nth_div_3.join()\n\nresult = A.count(True)\n\nprint('Number of integers that are divisible by 2 or 3 is:', result)\n"
  },
  {
    "path": "python/demo12c01_race_cond_data_race.py",
    "content": "'''\nRACE CONDITIONS AND DATA RACES\n'''\n\nimport time\nimport threading\n\n\n\ncounter = 0\n\n\n\ndef do_task():\n    global counter\n\n    for _ in range(1000):\n        temp = counter + 1\n        time.sleep(0.0001)\n        counter = temp\n\n\n\nlstth = [threading.Thread(target=do_task) for _ in range(32)]\n\nfor th in lstth:\n    th.start()\n\nfor th in lstth:\n    th.join()\n\nprint('counter =', counter)\n# We are NOT sure that counter = 32000\n"
  },
  {
    "path": "python/demo12c02_race_cond_data_race.py",
    "content": "'''\nRACE CONDITIONS AND DATA RACES\n'''\n\nimport time\nimport threading\n\n\n\ncounter = 0\n\n\n\ndef do_task_a():\n    global counter\n    time.sleep(1)\n\n    while counter < 10:\n        counter += 1\n\n    print('A won !!!')\n\n\n\ndef do_task_b():\n    global counter\n    time.sleep(1)\n\n    while counter > -10:\n        counter -= 1\n\n    print('B won !!!')\n\n\n\nthreading.Thread(target=do_task_a).start()\nthreading.Thread(target=do_task_b).start()\n"
  },
  {
    "path": "python/demo13a_mutex.py",
    "content": "'''\nMUTEXES\nIn Python, Lock objects can be used as mutexes\n'''\n\nimport time\nimport threading\n\n\n\nmutex = threading.Lock()\ncounter = 0\n\n\n\ndef do_task():\n    global counter\n\n    mutex.acquire()\n\n    for _ in range(1000):\n        temp = counter + 1\n        time.sleep(0.0001)\n        counter = temp\n\n    mutex.release()\n\n\n\nNUM_THREADS = 32\nlstth = [threading.Thread(target=do_task) for _ in range(NUM_THREADS)]\n\nfor th in lstth:\n    th.start()\n\nfor th in lstth:\n    th.join()\n\nprint('counter =', counter)\n# We are sure that counter = 32000\n"
  },
  {
    "path": "python/demo14_synchronized_block.py",
    "content": "'''\nSYNCHRONIZED BLOCKS\n'''\n\nimport time\nimport threading\n\n\n\nmutex = threading.Lock()\ncounter = 0\n\n\n\ndef do_task():\n    global counter\n\n    with mutex:\n        for _ in range(1000):\n            temp = counter + 1\n            time.sleep(0.0001)\n            counter = temp\n\n\n\nNUM_THREADS = 32\nlstth = [threading.Thread(target=do_task) for _ in range(NUM_THREADS)]\n\nfor th in lstth:\n    th.start()\n\nfor th in lstth:\n    th.join()\n\nprint('counter =', counter)\n# We are sure that counter = 32000\n"
  },
  {
    "path": "python/demo15a_deadlock.py",
    "content": "'''\nDEADLOCK\nVersion A\n'''\n\nimport threading\n\n\n\nmutex = threading.Lock()\n\n\n\ndef do_task(name: str):\n    mutex.acquire()\n    print(f'{name} acquired resource')\n    # mutex.release() # Forget this statement ==> deadlock\n\n\n\nth_foo = threading.Thread(target=do_task, args=('foo',))\nth_bar = threading.Thread(target=do_task, args=('bar',))\n\nth_foo.start()\nth_bar.start()\nth_foo.join()\nth_bar.join()\n\nprint('You will never see this statement due to deadlock!')\n"
  },
  {
    "path": "python/demo15b_deadlock.py",
    "content": "'''\nDEADLOCK\nVersion B\n'''\n\nimport time\nimport threading\n\n\n\nmutex_a = threading.Lock()\nmutex_b = threading.Lock()\n\n\n\ndef foo():\n    with mutex_a:\n        print('foo acquired resource A')\n        time.sleep(1)\n        with mutex_b:\n            print('foo acquired resource B')\n\n\n\ndef bar():\n    with mutex_b:\n        print('bar acquired resource B')\n        time.sleep(1)\n        with mutex_a:\n            print('bar acquired resource A')\n\n\n\nth_foo = threading.Thread(target=foo)\nth_bar = threading.Thread(target=bar)\n\nth_foo.start()\nth_bar.start()\nth_foo.join()\nth_bar.join()\n\nprint('You will never see this statement due to deadlock!')\n"
  },
  {
    "path": "python/demo16_monitor.py",
    "content": "'''\nMONITORS\nImplementation of a monitor for managing a counter\n'''\n\nimport time\nimport threading\n\n\n\nclass Monitor:\n    def __init__(self, res: dict, field_name: str):\n        self.__lock = threading.Lock()\n        self.__res = res\n        self.__field_name = field_name\n\n    def increase_counter(self):\n        with self.__lock:\n            tmp = self.__res[self.__field_name] + 1\n            time.sleep(0.0001)\n            self.__res[self.__field_name] = tmp\n\n\n\ndef do_task(mon: Monitor):\n    for _ in range(1000):\n        mon.increase_counter()\n\n\n\nresult = { 'data': 0 }\nmonitor = Monitor(result, 'data')\n\nNUM_THREADS = 32\nlstth = [threading.Thread(target=do_task, args=(monitor,)) for _ in range(NUM_THREADS)]\n\nfor th in lstth:\n    th.start()\n\nfor th in lstth:\n    th.join()\n\nprint('counter =', result['data'])\n# We are sure that counter = 32000\n"
  },
  {
    "path": "python/demo17a_reentrant_lock.py",
"content": "'''\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion A: Introduction to reentrant locks\n'''\n\nimport threading\n\n\n\nlock = threading.Lock()\n\n\n\ndef do_task():\n    with lock:\n        print('First time acquiring the resource')\n        with lock:\n            print('Second time acquiring the resource')\n\n\n\nth = threading.Thread(target=do_task)\nth.start()\n\n# The thread th shall meet deadlock.\n# So, you will never get the output \"Second time acquiring the resource\".\n"
  },
  {
    "path": "python/demo17b_reentrant_lock.py",
    "content": "'''\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion B: Solving the problem from version A\n'''\n\nimport threading\n\n\n\nlock = threading.RLock()\n\n\n\ndef do_task():\n    with lock:\n        print('First time acquiring the resource')\n        with lock:\n            print('Second time acquiring the resource')\n\n\n\nth = threading.Thread(target=do_task)\nth.start()\nth.join()\n"
  },
  {
    "path": "python/demo17c_reentrant_lock.py",
    "content": "'''\nREENTRANT LOCKS (RECURSIVE MUTEXES)\nVersion C: A multithreaded app example\n'''\n\nimport time\nimport threading\n\n\n\nlock = threading.RLock()\n\n\n\ndef do_task(name: str):\n    time.sleep(1)\n    with lock:\n        print(f'First time {name} acquiring the resource')\n        with lock:\n            print(f'Second time {name} acquiring the resource')\n\n\n\nNUM_THREADS = 3\n\nfor i in range(NUM_THREADS):\n    threading.Thread(target=do_task, args=(chr(i + 65),)).start()\n"
  },
  {
    "path": "python/demo18a01_barrier.py",
    "content": "'''\nBARRIERS AND LATCHES\nVersion A: Barriers\n'''\n\nimport time\nimport threading\n\n\n\nsync_point = threading.Barrier(parties=3)\n\n\n\ndef process_request(user_name: str, wait_time: int):\n    time.sleep(wait_time)\n\n    print(f'Get request from {user_name}')\n    sync_point.wait()\n\n    print(f'Process request for {user_name}')\n    sync_point.wait()\n\n    print(f'Done {user_name}')\n\n\n\nlstarg = [\n    ('lorem', 1),\n    ('ipsum', 2),\n    ('dolor', 3)\n]\n\n_ = [ threading.Thread(target=process_request, args=arg).start() for arg in lstarg ]\n"
  },
  {
    "path": "python/demo18a02_barrier.py",
"content": "'''\nBARRIERS AND LATCHES\nVersion A: Barriers\n'''\n\nimport time\nimport threading\n\n\n\nsync_point = threading.Barrier(parties=2)\n\n\n\ndef process_request(user_name: str, wait_time: int):\n    time.sleep(wait_time)\n\n    print(f'Get request from {user_name}')\n    sync_point.wait()\n\n    print(f'Process request for {user_name}')\n    sync_point.wait()\n\n    print(f'Done {user_name}')\n\n\n\nlstarg = [\n    ('lorem', 1),\n    ('ipsum', 3),\n    ('dolor', 3),\n    ('amet', 10)\n]\n\n_ = [ threading.Thread(target=process_request, args=arg).start() for arg in lstarg ]\n\n# Thread with user_name = \"amet\" shall FREEZE (wait at the barrier forever)\n"
  },
  {
    "path": "python/demo18a03_barrier.py",
"content": "'''\nBARRIERS AND LATCHES\nVersion A: Barriers\n'''\n\nimport time\nimport threading\n\n\n\nsync_point_a = threading.Barrier(parties=2)\nsync_point_b = threading.Barrier(parties=2)\n\n\n\ndef process_request(user_name: str, wait_time: int):\n    time.sleep(wait_time)\n\n    print(f'Get request from {user_name}')\n    sync_point_a.wait()\n\n    print(f'Process request for {user_name}')\n    sync_point_b.wait()\n\n    print(f'Done {user_name}')\n\n\n\nlstarg = [\n    ('lorem', 1),\n    ('ipsum', 3),\n    ('dolor', 3),\n    ('amet', 10)\n]\n\n_ = [ threading.Thread(target=process_request, args=arg).start() for arg in lstarg ]\n"
  },
  {
    "path": "python/demo18b01_latch.py",
    "content": "'''\nBARRIERS AND LATCHES\nVersion B: Count-down latches\n\nCount-down latches in Python are not supported by default.\nSo, I use mylib_latch for this demonstration.\n'''\n\nimport time\nimport threading\nfrom mylib_latch import CountDownLatch\n\n\n\ndef process_request(user_name: str, wait_time: int):\n    time.sleep(wait_time)\n\n    print(f'Get request from {user_name}')\n\n    sync_point.count_down()\n    sync_point.wait()\n\n    print(f'Done {user_name}')\n\n\n\nlstarg = [\n    ('lorem', 1),\n    ('ipsum', 2),\n    ('dolor', 3)\n]\n\nsync_point = CountDownLatch(count=3)\n\n_ = [ threading.Thread(target=process_request, args=arg).start() for arg in lstarg ]\n"
  },
  {
    "path": "python/demo18b02_latch.py",
    "content": "'''\nBARRIERS AND LATCHES\nVersion B: Count-down latches\n\nMain thread waits for 3 child threads to get enough data to progress.\n\nCount-down latches in Python are not supported by default.\nSo, I use mylib_latch for this demonstration.\n'''\n\nimport time\nimport threading\nfrom mylib_latch import CountDownLatch\n\n\n\ndef do_task(message: str, wait_time: int):\n    time.sleep(wait_time)\n\n    print(message)\n    sync_point.count_down()\n\n    time.sleep(8)\n    print('Cleanup')\n\n\n\nlstarg = [\n    ('Send request to egg.net to get data', 6),\n    ('Send request to foo.org to get data', 2),\n    ('Send request to bar.com to get data', 4)\n]\n\nsync_point = CountDownLatch(count=len(lstarg))\n\n_ = [ threading.Thread(target=do_task, args=arg).start() for arg in lstarg ]\n\nsync_point.wait()\nprint('\\nNow we have enough data to progress to next step\\n')\n"
  },
  {
    "path": "python/demo19_read_write_lock.py",
    "content": "'''\nREAD-WRITE LOCKS\nRead-write locks in Python are not supported by default.\nSo, I use mylib_rwlock for this demonstration.\n'''\n\nimport time\nimport random\nimport threading\nfrom mylib_rwlock import ReadWriteLock\n\n\n\nrwlock = ReadWriteLock()\nresource = 0\n\n\n\ndef read_func(wait_time: int):\n    time.sleep(wait_time)\n\n    with rwlock.readlock():\n        print(f'read: {resource}')\n\n\n\ndef write_func(wait_time: int):\n    global resource\n    time.sleep(wait_time)\n\n    with rwlock.writelock():\n        resource = random.randint(0, 99)\n        print(f'write: {resource}')\n\n\n\nNUM_THREADS_READ = 10\nNUM_THREADS_WRITE = 4\nTIME_WAIT_MAX = 2\n\nlstth_read = [\n    threading.Thread(target=read_func, args=(random.randint(0, TIME_WAIT_MAX),))\n    for _ in range(NUM_THREADS_READ)\n]\n\nlstth_write = [\n    threading.Thread(target=write_func, args=(random.randint(0, TIME_WAIT_MAX),))\n    for _ in range(NUM_THREADS_WRITE)\n]\n\nfor th in lstth_read:\n    th.start()\n\nfor th in lstth_write:\n    th.start()\n"
  },
  {
    "path": "python/demo20a01_semaphore.py",
    "content": "'''\nSEMAPHORES\nVersion A: Paper sheets and packages\n'''\n\nimport time\nimport threading\n\n\n\nsem_package = threading.Semaphore(0)\n\n\n\ndef make_one_sheet():\n    for _ in range(4):\n        print('Make 1 sheet')\n        time.sleep(1)\n        sem_package.release()\n\n\n\ndef combine_one_package():\n    for _ in range(4):\n        sem_package.acquire()\n        sem_package.acquire()\n        print('Combine 2 sheets into 1 package')\n\n\n\nthreading.Thread(target=make_one_sheet).start()\nthreading.Thread(target=make_one_sheet).start()\nthreading.Thread(target=combine_one_package).start()\n"
  },
  {
    "path": "python/demo20a02_semaphore.py",
    "content": "'''\nSEMAPHORES\nVersion A: Paper sheets and packages\n'''\n\nimport time\nimport threading\n\n\n\nsem_package = threading.Semaphore(0)\nsem_sheet = threading.Semaphore(2)\n\n\n\ndef make_one_sheet():\n    for _ in range(4):\n        sem_sheet.acquire()\n        print('Make 1 sheet')\n        sem_package.release()\n\n\n\ndef combine_one_package():\n    for _ in range(4):\n        sem_package.acquire()\n        sem_package.acquire()\n        print('Combine 2 sheets into 1 package')\n        time.sleep(2)\n        sem_sheet.release()\n        sem_sheet.release()\n\n\n\nthreading.Thread(target=make_one_sheet).start()\nthreading.Thread(target=make_one_sheet).start()\nthreading.Thread(target=combine_one_package).start()\n"
  },
  {
    "path": "python/demo20a03_semaphore_deadlock.py",
    "content": "'''\nSEMAPHORES\nVersion A: Paper sheets and packages\n'''\n\nimport time\nimport threading\n\n\n\nsem_package = threading.Semaphore(0)\nsem_sheet = threading.Semaphore(2)\n\n\n\ndef make_one_sheet():\n    for _ in range(4):\n        sem_sheet.acquire()\n        print('Make 1 sheet')\n        sem_package.release()\n\n\n\ndef combine_one_package():\n    for _ in range(4):\n        sem_package.acquire()\n        sem_package.acquire()\n        print('Combine 2 sheets into 1 package')\n        time.sleep(2)\n        sem_sheet.release()\n        # sem_sheet.release() # Missing one statement: sem_sheet.release() ==> deadlock\n\n\n\nthreading.Thread(target=make_one_sheet).start()\nthreading.Thread(target=make_one_sheet).start()\nthreading.Thread(target=combine_one_package).start()\n"
  },
  {
    "path": "python/demo20b_semaphore.py",
    "content": "'''\nSEMAPHORES\nVersion B: Tires and chassis\n'''\n\nimport time\nimport threading\n\n\n\nsem_tire = threading.Semaphore(4)\nsem_chassis = threading.Semaphore(0)\n\n\n\ndef make_tire():\n    for _ in range(8):\n        sem_tire.acquire()\n\n        print('Make 1 tire')\n        time.sleep(1)\n\n        sem_chassis.release()\n\n\n\ndef make_chassis():\n    for _ in range(4):\n        sem_chassis.acquire()\n        sem_chassis.acquire()\n        sem_chassis.acquire()\n        sem_chassis.acquire()\n\n        print('Make 1 chassis')\n        time.sleep(3)\n\n        sem_tire.release()\n        sem_tire.release()\n        sem_tire.release()\n        sem_tire.release()\n\n\n\nthreading.Thread(target=make_tire).start()\nthreading.Thread(target=make_tire).start()\nthreading.Thread(target=make_chassis).start()\n"
  },
  {
    "path": "python/demo21a01_condition_variable.py",
    "content": "'''\nCONDITION VARIABLES\n'''\n\nimport time\nimport threading\n\n\n\ncondition_var = threading.Condition()\n\n\n\ndef foo():\n    print('foo is waiting...')\n    with condition_var:\n        condition_var.wait()\n    print('foo resumed')\n\n\n\ndef bar():\n    time.sleep(3)\n    with condition_var:\n        condition_var.notify()\n\n\n\nthreading.Thread(target=foo).start()\nthreading.Thread(target=bar).start()\n"
  },
  {
    "path": "python/demo21a02_condition_variable.py",
    "content": "'''\nCONDITION VARIABLES\n'''\n\nimport time\nimport threading\n\n\n\ncondition_var = threading.Condition()\n\n\n\ndef foo():\n    print('foo is waiting...')\n    with condition_var:\n        condition_var.wait()\n    print('foo resumed')\n\n\n\ndef bar():\n    for _ in range(3):\n        time.sleep(2)\n        with condition_var:\n            condition_var.notify()\n\n\n\n_ = [ threading.Thread(target=foo).start() for _ in range(3) ]\nthreading.Thread(target=bar).start()\n"
  },
  {
    "path": "python/demo21a03_condition_variable.py",
    "content": "'''\nCONDITION VARIABLES\n'''\n\nimport time\nimport threading\n\n\n\ncondition_var = threading.Condition()\n\n\n\ndef foo():\n    print('foo is waiting...')\n    with condition_var:\n        condition_var.wait()\n    print('foo resumed')\n\n\n\ndef bar():\n    time.sleep(3)\n    with condition_var:\n        # Notify all waiting threads\n        condition_var.notify_all()\n\n\n\n_ = [ threading.Thread(target=foo).start() for _ in range(3) ]\nthreading.Thread(target=bar).start()\n"
  },
  {
    "path": "python/demo21b_condition_variable.py",
    "content": "'''\nCONDITION VARIABLES\n'''\n\nimport threading\n\n\n\ncondition_var = threading.Condition()\n\ncounter = 0\n\nCOUNT_HALT_01 = 3\nCOUNT_HALT_02 = 6\nCOUNT_DONE = 10\n\n\n\ndef foo():\n    '''\n    Write numbers 1-3 and 8-10 as permitted by bar()\n    '''\n    global counter\n\n    while True:\n        with condition_var:\n            condition_var.wait()\n\n            counter += 1\n            print(f'foo counter = {counter}')\n\n            if counter >= COUNT_DONE:\n                return\n\n\n\ndef bar():\n    '''\n    Write numbers 4-7\n    '''\n    global counter\n\n    while True:\n        with condition_var:\n            if counter < COUNT_HALT_01 or counter > COUNT_HALT_02:\n                # Wake the waiting thread; the lock is released on exiting the \"with\" block\n                # Note: foo() is now permitted to modify \"counter\"\n                condition_var.notify()\n            else:\n                counter += 1\n                print(f'bar counter = {counter}')\n\n            if counter >= COUNT_DONE:\n                return\n\n\n\nthreading.Thread(target=foo).start()\nthreading.Thread(target=bar).start()\n"
  },
  {
    "path": "python/demo22a_blocking_queue.py",
    "content": "'''\nBLOCKING QUEUES\nVersion A: A slow producer and a fast consumer\n'''\n\nimport time\nfrom queue import Queue\nimport threading\n\n\n\ndef producer(q: Queue):\n    time.sleep(2)\n    q.put('Alice')\n\n    time.sleep(2)\n    q.put('likes')\n\n    time.sleep(2)\n    q.put('singing')\n\n\n\ndef consumer(q: Queue):\n    for _ in range(3):\n        print('\\nWaiting for data...')\n        data = q.get()\n        print(f'    {data}')\n\n\n\nblkq = Queue()\n\nthreading.Thread(target=producer, args=(blkq,)).start()\nthreading.Thread(target=consumer, args=(blkq,)).start()\n"
  },
  {
    "path": "python/demo22b_blocking_queue.py",
    "content": "'''\nBLOCKING QUEUES\nVersion B: A fast producer and a slow consumer\n'''\n\nimport time\nfrom queue import Queue\nimport threading\n\n\n\ndef producer(q: Queue):\n    q.put('Alice')\n    q.put('likes')\n\n    # The queue has reached its maximum capacity of 2, so q.put('singing')\n    # blocks this thread until the consumer removes an element.\n    q.put('singing')\n\n\n\ndef consumer(q: Queue):\n    time.sleep(2)\n\n    for _ in range(3):\n        print('\\nWaiting for data...')\n        data = q.get()\n        print(f'    {data}')\n\n\n\nblkq = Queue(maxsize=2) # blocking queue with capacity = 2\n\nthreading.Thread(target=producer, args=(blkq,)).start()\nthreading.Thread(target=consumer, args=(blkq,)).start()\n"
  },
  {
    "path": "python/demo23a_thread_local.py",
    "content": "'''\nTHREAD-LOCAL STORAGE\n'''\n\nimport time\nimport threading\n\n\n\ndata = threading.local()\n\n\n\ndef print_local_value():\n    print(data.value)\n\n\n\ndef do_task_apple():\n    data.value = 'APPLE'\n    time.sleep(2)\n    print_local_value()\n\n\n\ndef do_task_banana():\n    data.value = 'BANANA'\n    time.sleep(2)\n    print_local_value()\n\n\n\nthreading.Thread(target=do_task_apple).start()\nthreading.Thread(target=do_task_banana).start()\n"
  },
  {
    "path": "python/demo23b_thread_local.py",
    "content": "'''\nTHREAD-LOCAL STORAGE\nAvoiding synchronization using thread-local storage\n'''\n\nimport time\nimport threading\n\n\n\ndata = threading.local()\n\n\n\ndef do_task(t: int):\n    time.sleep(1)\n    data.counter = 0\n\n    for _ in range(1000):\n        data.counter += 1\n\n    print(f'Thread {t} gives counter = {data.counter}')\n\n\n\nNUM_THREADS = 3\n\nfor i in range(NUM_THREADS):\n    threading.Thread(target=do_task, args=(i,)).start()\n\n# By using thread-local storage, each thread has its own counter.\n# So, each thread's counter is completely independent of the others.\n# Thread-local storage helps us to AVOID SYNCHRONIZATION.\n"
  },
  {
    "path": "python/demo24_volatile.py",
    "content": "'''\nTHE VOLATILE KEYWORD\nThe \"volatile\" keyword is not supported in Python by default.\n'''\n"
  },
  {
    "path": "python/demo25_atomic.py",
    "content": "'''\nATOMIC ACCESS\nThe atomic operation syntax in Python is not supported by default.\n'''\n"
  },
  {
    "path": "python/demoex_event.py",
    "content": "'''\nEVENT OBJECTS\n'''\n\nimport time\nimport threading\n\n\n\n# Event to notify the speakers\nev = threading.Event()\nrunning = True\n\n\n\ndef func_speaker(name: str):\n    while True:\n        ev.wait()\n\n        if not running:\n            return\n\n        print(f'{name}: ring ring ring')\n\n\n\ndef func_clock():\n    global running\n\n    for i in range(89, -1, -1):\n        minute = i // 60\n        second = i % 60\n        print(f'{minute:02d}:{second:02d}')\n\n        if i % 30 == 0:\n            ev.set()    # let the speakers say 'ring ring ring'\n            ev.clear()  # reset internal flag to reuse the event in the future\n\n        time.sleep(0.2)\n\n    running = False\n    ev.set()\n\n\n\nthreading.Thread(target=func_speaker, args=('ham',)).start()\nthreading.Thread(target=func_speaker, args=('egg',)).start()\nthreading.Thread(target=func_clock).start()\n"
  },
  {
    "path": "python/demoex_timer.py",
    "content": "'''\nTIMER OBJECTS\n'''\n\nimport time\nimport threading\n\n\n\ndef func_time_out():\n    print('Time out!!!')\n\n\n\nthreading.Timer(10, func_time_out).start()\n\nfor i in range(9, -1, -1):\n    print(i)\n    time.sleep(1)\n"
  },
  {
    "path": "python/exer01a_max_div.py",
    "content": "'''\nMAXIMUM NUMBER OF DIVISORS\n'''\n\nimport time\n\n\n\nRANGE_START = 1\nRANGE_END = 30000\n\nres_value = 0\nres_numdiv = 0 # number of divisors of result\n\ntp_start = time.time()\n\n\nfor i in range(RANGE_START, RANGE_END + 1):\n    numdiv = 0\n\n    for j in range(1, i // 2 + 1): # proper divisors of i do not exceed i // 2\n        if i % j == 0:\n            numdiv += 1\n\n    if res_numdiv < numdiv:\n        res_numdiv = numdiv\n        res_value = i\n\n\ntime_elapsed = time.time() - tp_start\n\nprint('The integer which has the largest number of divisors is', res_value)\nprint('The largest number of divisors is', res_numdiv)\nprint('Time elapsed =', time_elapsed)\n"
  },
  {
    "path": "python/exer01b_max_div.py",
    "content": "'''\nMAXIMUM NUMBER OF DIVISORS\n'''\n\nimport time\nimport threading\n\n\n\nlk = threading.Lock()\n\n\n\ndef prepare_arg(rng_start: int, rng_end: int, num_threads: int) -> list[dict]:\n    rng_block = (rng_end - rng_start + 1) // num_threads\n    rng_a = rng_start\n    lst_arg = []\n\n    for i in range(num_threads):\n        rng_b = rng_a + rng_block - 1 if i < num_threads - 1 else rng_end\n        lst_arg.append({ 'start': rng_a, 'end': rng_b })\n        rng_a += rng_block\n\n    return lst_arg\n\n\n\ndef do_task(arg: dict, lst_res: list[dict]):\n    res_value = 0\n    res_numdiv = 0\n\n    for i in range(arg['start'], arg['end'] + 1):\n        numdiv = 0\n\n        for j in range(1, i // 2 + 1): # proper divisors of i do not exceed i // 2\n            if i % j == 0:\n                numdiv += 1\n\n        if res_numdiv < numdiv:\n            res_numdiv = numdiv\n            res_value = i\n\n    with lk:\n        lst_res.append({ 'value': res_value, 'numdiv': res_numdiv })\n\n    '''\n    BETTER WAY (avoiding synchronization of lst_res):\n\n    - Initialize lst_res with null objects.\n        Of course, the number of objects is NUM_THREADS.\n\n    - In thread function:\n        lst_res[thread_index] = { 'value': res_value, 'numdiv': res_numdiv }\n    '''\n\n\n\n##########################################################\n\n\n\nRANGE_START = 1\nRANGE_END = 30000\nNUM_THREADS = 8\n\nlst_worker_arg = prepare_arg(RANGE_START, RANGE_END, NUM_THREADS)\nlst_worker_res = []\n\nlstth = [\n    threading.Thread( target=do_task, args=(arg, lst_worker_res) )\n    for arg in lst_worker_arg\n]\n\n\ntp_start = time.time()\n\nfor th in lstth:\n    th.start()\n\nfor th in lstth:\n    th.join()\n\nfinal_res = sorted(lst_worker_res, key=lambda res: res['numdiv'])[-1]\n\ntime_elapsed = time.time() - tp_start\n\n\nprint('The integer which has the largest number of divisors is', final_res['value'])\nprint('The largest number of divisors is', final_res['numdiv'])\nprint('Time elapsed =', time_elapsed)\n"
  },
  {
    "path": "python/exer01c_max_div.py",
    "content": "'''\nMAXIMUM NUMBER OF DIVISORS\n'''\n\nimport time\nimport threading\n\n\n\nlk = threading.Lock()\nfinal_res = { 'value': 0, 'numdiv': 0 }\n\n\n\ndef prepare_arg(rng_start: int, rng_end: int, num_threads: int) -> list[dict]:\n    rng_block = (rng_end - rng_start + 1) // num_threads\n    rng_a = rng_start\n    lst_arg = []\n\n    for i in range(num_threads):\n        rng_b = rng_a + rng_block - 1 if i < num_threads - 1 else rng_end\n        lst_arg.append({ 'start': rng_a, 'end': rng_b })\n        rng_a += rng_block\n\n    return lst_arg\n\n\n\ndef do_task(arg: dict):\n    res_value = 0\n    res_numdiv = 0\n\n    for i in range(arg['start'], arg['end'] + 1):\n        numdiv = 0\n\n        for j in range(1, i // 2 + 1): # proper divisors of i do not exceed i // 2\n            if i % j == 0:\n                numdiv += 1\n\n        if res_numdiv < numdiv:\n            res_numdiv = numdiv\n            res_value = i\n\n    with lk:\n        if final_res['numdiv'] < res_numdiv:\n            final_res['numdiv'] = res_numdiv\n            final_res['value'] = res_value\n\n\n\n##########################################################\n\n\n\nRANGE_START = 1\nRANGE_END = 30000\nNUM_THREADS = 8\n\nlst_worker_arg = prepare_arg(RANGE_START, RANGE_END, NUM_THREADS)\n\nlstth = [\n    threading.Thread( target=do_task, args=(arg,) )\n    for arg in lst_worker_arg\n]\n\n\ntp_start = time.time()\n\nfor th in lstth:\n    th.start()\n\nfor th in lstth:\n    th.join()\n\ntime_elapsed = time.time() - tp_start\n\n\nprint('The integer which has the largest number of divisors is', final_res['value'])\nprint('The largest number of divisors is', final_res['numdiv'])\nprint('Time elapsed =', time_elapsed)\n"
  },
  {
    "path": "python/exer02a01_producer_consumer.py",
    "content": "'''\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A01: 1 slow producer, 1 fast consumer\n'''\n\nimport time\nfrom queue import Queue\nimport threading\n\n\n\ndef producer(q: Queue):\n    i = 1\n\n    while True:\n        q.put(i)\n        time.sleep(1)\n        i += 1\n\n\n\ndef consumer(q: Queue):\n    while True:\n        data = q.get()\n        print('Consumer', data)\n\n\n\nblkq = Queue()\nthreading.Thread(target=producer, args=(blkq,)).start()\nthreading.Thread(target=consumer, args=(blkq,)).start()\n"
  },
  {
    "path": "python/exer02a02_producer_consumer.py",
    "content": "'''\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A02: 2 slow producers, 1 fast consumer\n'''\n\nimport time\nfrom queue import Queue\nimport threading\n\n\n\ndef producer(q: Queue):\n    i = 1\n\n    while True:\n        q.put(i)\n        time.sleep(1)\n        i += 1\n\n\n\ndef consumer(q: Queue):\n    while True:\n        data = q.get()\n        print('Consumer', data)\n\n\n\nblkq = Queue()\n\nthreading.Thread(target=producer, args=(blkq,)).start()\nthreading.Thread(target=producer, args=(blkq,)).start()\n\nthreading.Thread(target=consumer, args=(blkq,)).start()\n"
  },
  {
    "path": "python/exer02a03_producer_consumer.py",
    "content": "'''\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A03: 1 slow producer, 2 fast consumers\n'''\n\nimport time\nfrom queue import Queue\nimport threading\n\n\n\ndef producer(q: Queue):\n    i = 1\n\n    while True:\n        q.put(i)\n        time.sleep(1)\n        i += 1\n\n\n\ndef consumer(name: str, q: Queue):\n    while True:\n        data = q.get()\n        print(f'Consumer {name}: {data}')\n\n\n\nblkq = Queue()\n\nthreading.Thread(target=producer, args=(blkq,)).start()\n\nthreading.Thread(target=consumer, args=('foo', blkq)).start()\nthreading.Thread(target=consumer, args=('bar', blkq)).start()\n"
  },
  {
    "path": "python/exer02a04_producer_consumer.py",
    "content": "'''\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE A: USING BLOCKING QUEUES\n    Version A04: Multiple fast producers, multiple slow consumers\n'''\n\nimport time\nfrom queue import Queue\nimport threading\n\n\n\ndef producer(q: Queue, start_value: int):\n    time.sleep(1)\n    i = 1\n\n    while True:\n        q.put(i + start_value)\n        i += 1\n\n\n\ndef consumer(q: Queue):\n    while True:\n        data = q.get()\n        print('Consumer', data)\n        time.sleep(1)\n\n\n\nblkq = Queue(maxsize=5)\nNUM_PRODUCERS = 3\nNUM_CONSUMERS = 2\n\nfor i in range(NUM_PRODUCERS):\n    threading.Thread(target=producer, args=(blkq, i * 1000)).start()\n\nfor _ in range(NUM_CONSUMERS):\n    threading.Thread(target=consumer, args=(blkq,)).start()\n"
  },
  {
    "path": "python/exer02b01_producer_consumer.py",
    "content": "'''\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B01: 1 slow producer, 1 fast consumer\n'''\n\nimport time\nimport threading\n\n\n\ndef producer(\n    sem_fill: threading.Semaphore,\n    sem_empty: threading.Semaphore,\n    q: list\n):\n    i = 1\n\n    while True:\n        sem_empty.acquire()\n        q.append(i)\n        time.sleep(1)\n        sem_fill.release()\n        i += 1\n\n\n\ndef consumer(\n    sem_fill: threading.Semaphore,\n    sem_empty: threading.Semaphore,\n    q: list\n):\n    while True:\n        sem_fill.acquire()\n        data = q.pop(0)\n        print('Consumer', data)\n        sem_empty.release()\n\n\n\ns_fill = threading.Semaphore(0)  # item produced\ns_empty = threading.Semaphore(1) # remaining space in queue\nque = []\n\nthreading.Thread(target=producer, args=(s_fill, s_empty, que)).start()\nthreading.Thread(target=consumer, args=(s_fill, s_empty, que)).start()\n"
  },
  {
    "path": "python/exer02b02_producer_consumer.py",
    "content": "'''\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B02: 2 slow producers, 1 fast consumer\n'''\n\nimport time\nimport threading\n\n\n\ndef producer(\n    sem_fill: threading.Semaphore,\n    sem_empty: threading.Semaphore,\n    q: list,\n    start_value: int\n):\n    i = 1\n\n    while True:\n        sem_empty.acquire()\n        q.append(i + start_value)\n        time.sleep(1)\n        sem_fill.release()\n        i += 1\n\n\n\ndef consumer(\n    sem_fill: threading.Semaphore,\n    sem_empty: threading.Semaphore,\n    q: list\n):\n    while True:\n        sem_fill.acquire()\n        data = q.pop(0)\n        print('Consumer', data)\n        sem_empty.release()\n\n\n\ns_fill = threading.Semaphore(0)  # item produced\ns_empty = threading.Semaphore(1) # remaining space in queue\nque = []\n\nthreading.Thread(target=producer, args=(s_fill, s_empty, que, 0)).start()\nthreading.Thread(target=producer, args=(s_fill, s_empty, que, 1000)).start()\n\nthreading.Thread(target=consumer, args=(s_fill, s_empty, que)).start()\n"
  },
  {
    "path": "python/exer02b03_producer_consumer.py",
    "content": "'''\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B03: 2 fast producers, 1 slow consumer\n'''\n\nimport time\nimport threading\n\n\n\ndef producer(\n    sem_fill: threading.Semaphore,\n    sem_empty: threading.Semaphore,\n    q: list,\n    start_value: int\n):\n    i = 1\n\n    while True:\n        sem_empty.acquire()\n        q.append(i + start_value)\n        sem_fill.release()\n        i += 1\n\n\n\ndef consumer(\n    sem_fill: threading.Semaphore,\n    sem_empty: threading.Semaphore,\n    q: list\n):\n    while True:\n        sem_fill.acquire()\n        data = q.pop(0)\n        print('Consumer', data)\n        time.sleep(1)\n        sem_empty.release()\n\n\n\ns_fill = threading.Semaphore(0)  # item produced\ns_empty = threading.Semaphore(1) # remaining space in queue\nque = []\n\nthreading.Thread(target=producer, args=(s_fill, s_empty, que, 0)).start()\nthreading.Thread(target=producer, args=(s_fill, s_empty, que, 1000)).start()\n\nthreading.Thread(target=consumer, args=(s_fill, s_empty, que)).start()\n"
  },
  {
    "path": "python/exer02b04_producer_consumer.py",
    "content": "'''\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE B: USING SEMAPHORES\n    Version B04: Multiple fast producers, multiple slow consumers\n'''\n\nimport time\nimport threading\n\n\n\ndef producer(\n    sem_fill: threading.Semaphore,\n    sem_empty: threading.Semaphore,\n    q: list,\n    start_value: int\n):\n    time.sleep(1)\n    i = 1\n\n    while True:\n        sem_empty.acquire()\n        q.append(i + start_value)\n        sem_fill.release()\n        i += 1\n\n\n\ndef consumer(\n    sem_fill: threading.Semaphore,\n    sem_empty: threading.Semaphore,\n    q: list\n):\n    while True:\n        sem_fill.acquire()\n        data = q.pop(0)\n        print('Consumer', data)\n        time.sleep(1)\n        sem_empty.release()\n\n\n\ns_fill = threading.Semaphore(0)  # item produced\ns_empty = threading.Semaphore(1) # remaining space in queue\nque = []\n\nNUM_PRODUCERS = 3\nNUM_CONSUMERS = 2\n\nfor i in range(NUM_PRODUCERS):\n    threading.Thread(target=producer, args=(s_fill, s_empty, que, i * 1000)).start()\n\nfor _ in range(NUM_CONSUMERS):\n    threading.Thread(target=consumer, args=(s_fill, s_empty, que)).start()\n"
  },
  {
    "path": "python/exer02c_producer_consumer.py",
    "content": "'''\nTHE PRODUCER-CONSUMER PROBLEM\n\nSOLUTION TYPE C: USING CONDITION VARIABLES & MONITORS\n    Multiple fast producers, multiple slow consumers\n'''\n\nimport time\nimport threading\n\n\n\nclass Monitor:\n    def __init__(self, max_queue_size: int, q: list):\n        self.__q = q\n        self.__max_queue_size = max_queue_size\n        self.__lk = threading.Lock()\n        self.__cond_full = threading.Condition(self.__lk)\n        self.__cond_empty = threading.Condition(self.__lk)\n\n\n    def add(self, item):\n        with self.__lk:\n            while len(self.__q) == self.__max_queue_size:\n                self.__cond_full.wait()\n\n            self.__q.append(item)\n\n            # Always notify: notifying only on the 0 -> 1 transition could\n            # leave a second waiting consumer asleep while items are available\n            self.__cond_empty.notify()\n\n\n    def remove(self):\n        with self.__lk:\n            while len(self.__q) == 0:\n                self.__cond_empty.wait()\n\n            item = self.__q.pop(0)\n\n            # Always notify (same reasoning as in add)\n            self.__cond_full.notify()\n\n            return item\n\n\n\ndef producer(mon: Monitor, start_value: int):\n    time.sleep(1)\n    i = 1\n\n    while True:\n        mon.add(i + start_value)\n        i += 1\n\n\n\ndef consumer(mon: Monitor):\n    while True:\n        data = mon.remove()\n        print('Consumer', data)\n        time.sleep(1)\n\n\n\nMAX_QUEUE_SIZE = 6\nNUM_PRODUCERS = 3\nNUM_CONSUMERS = 2\n\nq = []\nmonitor = Monitor(MAX_QUEUE_SIZE, q)\n\nfor i in range(NUM_PRODUCERS):\n    threading.Thread(target=producer, args=(monitor, i * 1000)).start()\n\nfor _ in range(NUM_CONSUMERS):\n    threading.Thread(target=consumer, args=(monitor,)).start()\n"
  },
  {
    "path": "python/exer03a_readers_writers.py",
    "content": "'''\nTHE READERS-WRITERS PROBLEM\nSolution for the first readers-writers problem\n'''\n\nimport random\nimport time\nimport threading\n\n\n\nclass GlobalData:\n    def __init__(self):\n        self.resource = 0\n        self.reader_count = 0\n        self.lk_resource = threading.Lock()\n        self.lk_reader_count = threading.Lock()\n\n\n\ndef do_task_writer(g: GlobalData, delay_time: int):\n    time.sleep(delay_time)\n\n    with g.lk_resource:\n        g.resource = random.randint(0, 99)\n        print('Write', g.resource)\n\n\n\ndef do_task_reader(g: GlobalData, delay_time: int):\n    time.sleep(delay_time)\n\n    # Increase reader count\n    with g.lk_reader_count:\n        g.reader_count += 1\n\n        if g.reader_count == 1:\n            g.lk_resource.acquire()\n\n    # Do the reading\n    print('Read', g.resource)\n\n    # Decrease reader count\n    with g.lk_reader_count:\n        g.reader_count -= 1\n\n        if g.reader_count == 0:\n            g.lk_resource.release()\n\n\n\ngbl_data = GlobalData()\nNUM_READERS = 8\nNUM_WRITERS = 6\n\nfor _ in range(NUM_READERS):\n    threading.Thread(target=do_task_reader, args=(gbl_data, random.randint(0, 2))).start()\n\nfor _ in range(NUM_WRITERS):\n    threading.Thread(target=do_task_writer, args=(gbl_data, random.randint(0, 2))).start()\n"
  },
  {
    "path": "python/exer03b_readers_writers.py",
    "content": "'''\nTHE READERS-WRITERS PROBLEM\nSolution for the third readers-writers problem\n'''\n\nimport random\nimport time\nimport threading\n\n\n\nclass GlobalData:\n    def __init__(self):\n        self.resource = 0\n        self.reader_count = 0\n        self.lk_resource = threading.Lock()\n        self.lk_reader_count = threading.Lock()\n        self.lk_service_queue = threading.Lock()\n\n\n\ndef do_task_writer(g: GlobalData, delay_time: int):\n    time.sleep(delay_time)\n\n    with g.lk_service_queue:\n        g.lk_resource.acquire()\n\n    g.resource = random.randint(0, 99)\n    print('Write', g.resource)\n\n    g.lk_resource.release()\n\n\n\ndef do_task_reader(g: GlobalData, delay_time: int):\n    time.sleep(delay_time)\n\n    with g.lk_service_queue:\n        # Increase reader count\n        with g.lk_reader_count:\n            g.reader_count += 1\n\n            if g.reader_count == 1:\n                g.lk_resource.acquire()\n\n    # Do the reading\n    print('Read', g.resource)\n\n    # Decrease reader count\n    with g.lk_reader_count:\n        g.reader_count -= 1\n\n        if g.reader_count == 0:\n            g.lk_resource.release()\n\n\n\ngbl_data = GlobalData()\nNUM_READERS = 8\nNUM_WRITERS = 6\n\nfor _ in range(NUM_READERS):\n    threading.Thread(target=do_task_reader, args=(gbl_data, random.randint(0, 2))).start()\n\nfor _ in range(NUM_WRITERS):\n    threading.Thread(target=do_task_writer, args=(gbl_data, random.randint(0, 2))).start()\n"
  },
  {
    "path": "python/exer04_dining_philosophers.py",
    "content": "'''\nTHE DINING PHILOSOPHERS PROBLEM\n'''\n\nimport time\nimport threading\n\n\n\ndef do_task_philosopher(chstk: list, n_philo: int, id_philo: int):\n    time.sleep(1)\n\n    # Acquire the chopsticks in a global order (lower index first) to break\n    # the circular wait that causes deadlock when every philosopher grabs\n    # the left chopstick first\n    first = min(id_philo, (id_philo + 1) % n_philo)\n    second = max(id_philo, (id_philo + 1) % n_philo)\n\n    with chstk[first]:\n        with chstk[second]:\n            print(f'Philosopher #{id_philo} is eating the rice')\n\n\n\nNUM_PHILOSOPHERS = 5\nchopstick = [threading.Lock() for _ in range(NUM_PHILOSOPHERS)]\n\nfor i in range(NUM_PHILOSOPHERS):\n    threading.Thread(target=do_task_philosopher, args=(chopstick, NUM_PHILOSOPHERS, i)).start()\n"
  },
  {
    "path": "python/exer05a_product_matrix_vector.py",
    "content": "'''\nMATRIX-VECTOR MULTIPLICATION\n'''\n\nimport threading\n\n\n\ndef get_scalar_product(u: list, v: list):\n    s = sum(a * b for a, b in zip(u, v))\n    return s\n\n\n\ndef scalar_thfunc(u: list, v: list, res: list, idx_res: int):\n    scalar_prod = get_scalar_product(u, v)\n    res[idx_res] = scalar_prod\n\n\n\ndef get_product(mat: list[list], vec: list) -> list:\n    # Assume that the dimensions of mat and vec are compatible\n    size_row_mat = len(mat)\n    # size_col_mat = len(mat[0])\n    # size_vec = len(vec)\n\n    res = [0] * size_row_mat\n    lstth = []\n\n    for i in range(size_row_mat):\n        lstth.append(threading.Thread(target=scalar_thfunc, args=(mat[i], vec, res, i)))\n\n    for th in lstth:\n        th.start()\n\n    for th in lstth:\n        th.join()\n\n    return res\n\n\n\nA = [\n    [ 1, 2, 3 ],\n    [ 4, 5, 6 ],\n    [ 7, 8, 9 ]\n]\n\nb = [\n    3,\n    -1,\n    0\n]\n\nresult = get_product(A, b)\nprint(result)\n"
  },
  {
    "path": "python/exer05b_product_matrix_vector.py",
    "content": "'''\nMATRIX-MATRIX MULTIPLICATION (DOT PRODUCT)\n'''\n\nimport threading\n\n\n\ndef get_scalar_product(u: list, v: list):\n    s = sum(a * b for a, b in zip(u, v))\n    return s\n\n\n\ndef scalar_thfunc(u: list, v: list, res: list, idx_res: int):\n    scalar_prod = get_scalar_product(u, v)\n    res[idx_res] = scalar_prod\n\n\n\ndef get_transpose_matrix(mat: list[list]) -> list[list]:\n    num_row = len(mat)\n    num_col = len(mat[0])\n\n    res = [[0] * num_row for _ in range(num_col)]\n\n    for i in range(num_row):\n        for j in range(num_col):\n            res[j][i] = mat[i][j]\n\n    return res\n\n\n\ndef get_str_matrix(mat: list[list]):\n    return '\\n'.join(\n        ' '.join(f'{val:>5}' for val in row)\n        for row in mat\n    )\n\n\n\ndef get_product(mata: list[list], matb: list[list]) -> list[list]:\n    # Assume that the dimensions of mata and matb are compatible\n    size_row_a = len(mata)\n    size_col_b = len(matb[0])\n\n    res = [[0] * size_col_b for _ in range(size_row_a)]\n    matbt = get_transpose_matrix(matb)\n    lstth = []\n\n    for i in range(size_row_a):\n        for j in range(size_col_b):\n            lstth.append(\n                threading.Thread(target=scalar_thfunc, args=(mata[i], matbt[j], res[i], j))\n            )\n\n    for th in lstth:\n        th.start()\n\n    for th in lstth:\n        th.join()\n\n    return res\n\n\n\nA = [\n    [ 1, 3, 5 ],\n    [ 2, 4, 6 ]\n]\n\nB = [\n    [ 1, 0, 1, 0 ],\n    [ 0, 1, 0, 1 ],\n    [ 1, 0, 0, -2 ]\n]\n\nresult = get_product(A, B)\n\nprint(get_str_matrix(result))\n"
  },
  {
    "path": "python/exer06a_blocking_queue.py",
    "content": "'''\nBLOCKING QUEUE IMPLEMENTATION\nVersion A: Synchronous queues\n'''\n\nimport time\nimport threading\n\n\n\nclass SynchronousQueue:\n    def __init__(self):\n        self.__sem_put = threading.Semaphore(1)\n        self.__sem_take = threading.Semaphore(0)\n        self.__element = None\n\n    def put(self, value):\n        self.__sem_put.acquire()\n        self.__element = value\n        self.__sem_take.release()\n\n    def take(self):\n        self.__sem_take.acquire()\n        result = self.__element\n        self.__sem_put.release()\n        return result\n\n\n\ndef producer(syncq: SynchronousQueue):\n    arr = [ 'lorem', 'ipsum', 'dolor' ]\n\n    for data in arr:\n        print(f'Producer: {data}')\n        syncq.put(data)\n        print(f'Producer: {data} \\t\\t\\t[done]')\n\n\n\ndef consumer(syncq: SynchronousQueue):\n    time.sleep(5)\n\n    for _ in range(3):\n        value = syncq.take()\n        print(f'\\tConsumer: {value}')\n\n\n\nsyncqueue = SynchronousQueue()\nthreading.Thread(target=producer, args=(syncqueue,)).start()\nthreading.Thread(target=consumer, args=(syncqueue,)).start()\n"
  },
  {
    "path": "python/exer06b01_blocking_queue.py",
    "content": "'''\nBLOCKING QUEUE IMPLEMENTATION\nVersion B01: General blocking queues\n             Underlying mechanism: Semaphores\n'''\n\nimport time\nimport threading\n\n\n\nclass BlockingQueue:\n    def __init__(self, capacity: int):\n        if capacity <= 0:\n            raise ValueError('capacity must be a positive integer')\n        # self.__capacity = capacity\n        self.__sem_remain = threading.Semaphore(capacity)\n        self.__sem_fill = threading.Semaphore(0)\n        self.__lk = threading.Lock()\n        self.__q = [] # queue\n\n\n    def put(self, value):\n        self.__sem_remain.acquire()\n\n        with self.__lk:\n            self.__q.append(value)\n\n        self.__sem_fill.release()\n\n\n    def take(self):\n        self.__sem_fill.acquire()\n\n        with self.__lk:\n            result = self.__q.pop(0)\n\n        self.__sem_remain.release()\n        return result\n\n\n\ndef producer(q: BlockingQueue):\n    arr = [ 'nice', 'to', 'meet', 'you' ]\n\n    for data in arr:\n        print(f'Producer: {data}')\n        q.put(data)\n        print(f'Producer: {data} \\t\\t\\t[done]')\n\n\n\ndef consumer(q: BlockingQueue):\n    time.sleep(5)\n\n    for i in range(4):\n        data = q.take()\n        print(f'\\tConsumer: {data}')\n\n        if i == 0:\n            time.sleep(5)\n\n\n\nblkqueue = BlockingQueue(2) # capacity = 2\nthreading.Thread(target=producer, args=(blkqueue,)).start()\nthreading.Thread(target=consumer, args=(blkqueue,)).start()\n"
  },
  {
    "path": "python/exer06b02_blocking_queue.py",
    "content": "'''\nBLOCKING QUEUE IMPLEMENTATION\nVersion B02: General blocking queues\n             Underlying mechanism: Condition variables\n'''\n\nimport time\nimport threading\n\n\n\nclass BlockingQueue:\n    def __init__(self, capacity: int):\n        if capacity <= 0:\n            raise ValueError('capacity must be a positive integer')\n        self.__capacity = capacity\n        self.__lk = threading.Lock()\n        self.__cond_empty = threading.Condition(self.__lk)\n        self.__cond_full = threading.Condition(self.__lk)\n        self.__q = [] # queue\n\n\n    def put(self, value):\n        with self.__lk:\n            while len(self.__q) >= self.__capacity:\n                self.__cond_full.wait()\n\n            self.__q.append(value)\n            self.__cond_empty.notify()\n\n\n    def take(self):\n        result = None\n\n        with self.__lk:\n            while len(self.__q) == 0:\n                self.__cond_empty.wait()\n\n            result = self.__q.pop(0)\n            self.__cond_full.notify()\n\n        return result\n\n\n\ndef producer(q: BlockingQueue):\n    arr = [ 'nice', 'to', 'meet', 'you' ]\n\n    for value in arr:\n        print(f'Producer: {value}')\n        q.put(value)\n        print(f'Producer: {value} \\t\\t\\t[done]')\n\n\n\ndef consumer(q: BlockingQueue):\n    time.sleep(5)\n\n    for i in range(4):\n        value = q.take()\n        print(f'\\tConsumer: {value}')\n\n        if i == 0:\n            time.sleep(5)\n\n\n\nblkqueue = BlockingQueue(2) # capacity = 2\nthreading.Thread(target=producer, args=(blkqueue,)).start()\nthreading.Thread(target=consumer, args=(blkqueue,)).start()\n"
  },
  {
    "path": "python/exer07a_data_server.py",
    "content": "'''\nTHE DATA SERVER PROBLEM\nVersion A: Solving the problem using a condition variable\n'''\n\nfrom dataclasses import dataclass\nimport time\nimport threading\n\n\n\n@dataclass\nclass Counter:\n    value: int = 0\n    cond: threading.Condition = threading.Condition()\n\n\n\ndef check_auth_user():\n    print('[   Auth   ] Start')\n    # Send request to authenticator, check permissions, encrypt, decrypt...\n    time.sleep(20)\n    print('[   Auth   ] Done')\n\n\n\ndef process_files(lst_file_name: list[str], counter: Counter):\n    for file_name in lst_file_name:\n        # Read file\n        print('[ ReadFile ] Start', file_name)\n        time.sleep(10)\n        print('[ ReadFile ] Done ', file_name)\n\n        with counter.cond:\n            counter.value -= 1\n            counter.cond.notify()\n\n        # Write log into disk\n        time.sleep(5)\n        print('[ WriteLog ]')\n\n\n\ndef process_request():\n    lst_file_name = [ 'foo.html', 'bar.json' ]\n    counter = Counter(value=len(lst_file_name))\n\n    # The server checks auth user while reading files, concurrently\n    threading.Thread(target=process_files, args=(lst_file_name, counter)).start()\n    check_auth_user()\n\n    # The server waits for completion of loading files\n    with counter.cond:\n        while counter.value > 0:\n            counter.cond.wait(10) # timeout = 10 seconds\n\n    print('\\nNow user is authorized and files are loaded')\n    print('Do other tasks...\\n')\n\n\n\nprocess_request()\n"
  },
  {
    "path": "python/exer07b_data_server.py",
    "content": "'''\nTHE DATA SERVER PROBLEM\nVersion B: Solving the problem using a semaphore\n'''\n\nimport time\nimport threading\n\n\n\ndef check_auth_user():\n    print('[   Auth   ] Start')\n    # Send request to authenticator, check permissions, encrypt, decrypt...\n    time.sleep(20)\n    print('[   Auth   ] Done')\n\n\n\ndef process_files(lst_file_name: list[str], sem: threading.Semaphore):\n    for file_name in lst_file_name:\n        # Read file\n        print('[ ReadFile ] Start', file_name)\n        time.sleep(10)\n        print('[ ReadFile ] Done ', file_name)\n\n        sem.release()\n\n        # Write log into disk\n        time.sleep(5)\n        print('[ WriteLog ]')\n\n\n\ndef process_request():\n    lst_file_name = [ 'foo.html', 'bar.json' ]\n    sem = threading.Semaphore(value=0)\n\n    # The server checks auth user while reading files, concurrently\n    threading.Thread(target=process_files, args=(lst_file_name, sem)).start()\n    check_auth_user()\n\n    # The server waits for completion of loading files\n    for _ in range(len(lst_file_name)):\n        sem.acquire()\n\n    print('\\nNow user is authorized and files are loaded')\n    print('Do other tasks...\\n')\n\n\n\nprocess_request()\n"
  },
  {
    "path": "python/exer07c_data_server.py",
    "content": "'''\nTHE DATA SERVER PROBLEM\nVersion C: Solving the problem using a count-down latch\n'''\n\nimport time\nimport threading\nfrom mylib_latch import CountDownLatch\n\n\n\ndef check_auth_user():\n    print('[   Auth   ] Start')\n    # Send request to authenticator, check permissions, encrypt, decrypt...\n    time.sleep(20)\n    print('[   Auth   ] Done')\n\n\n\ndef process_files(lst_file_name: list[str], latch: CountDownLatch):\n    for file_name in lst_file_name:\n        # Read file\n        print('[ ReadFile ] Start', file_name)\n        time.sleep(10)\n        print('[ ReadFile ] Done ', file_name)\n\n        latch.count_down()\n\n        # Write log into disk\n        time.sleep(5)\n        print('[ WriteLog ]')\n\n\n\ndef process_request():\n    lst_file_name = [ 'foo.html', 'bar.json' ]\n    latch = CountDownLatch(len(lst_file_name))\n\n    # The server checks auth user while reading files, concurrently\n    threading.Thread(target=process_files, args=(lst_file_name, latch)).start()\n    check_auth_user()\n\n    # The server waits for completion of loading files\n    latch.wait()\n\n    print('\\nNow user is authorized and files are loaded')\n    print('Do other tasks...\\n')\n\n\n\nprocess_request()\n"
  },
  {
    "path": "python/exer07d_data_server.py",
    "content": "'''\nTHE DATA SERVER PROBLEM\nVersion D: Solving the problem using a blocking queue\n'''\n\nimport time\nimport threading\nfrom queue import Queue\n\n\n\ndef check_auth_user():\n    print('[   Auth   ] Start')\n    # Send request to authenticator, check permissions, encrypt, decrypt...\n    time.sleep(20)\n    print('[   Auth   ] Done')\n\n\n\ndef process_files(lst_file_name: list[str], blkq: Queue):\n    for file_name in lst_file_name:\n        # Read file\n        print('[ ReadFile ] Start', file_name)\n        time.sleep(10)\n        print('[ ReadFile ] Done ', file_name)\n\n        blkq.put(file_name) # You may put file data here\n\n        # Write log into disk\n        time.sleep(5)\n        print('[ WriteLog ]')\n\n\n\ndef process_request():\n    lst_file_name = [ 'foo.html', 'bar.json' ]\n    blkq = Queue()\n\n    # The server checks auth user while reading files, concurrently\n    threading.Thread(target=process_files, args=(lst_file_name, blkq)).start()\n    check_auth_user()\n\n    # The server waits for completion of loading files\n    for _ in range(len(lst_file_name)):\n        blkq.get()\n\n    print('\\nNow user is authorized and files are loaded')\n    print('Do other tasks...\\n')\n\n\n\nprocess_request()\n"
  },
  {
    "path": "python/exer08_exec_service_itask.py",
    "content": "'''\nTHE TASK INTERFACE FOR THE EXECUTOR SERVICE\n'''\n\nfrom abc import ABC, abstractmethod\n\n\nclass ITask(ABC):\n    @abstractmethod\n    def run(self):\n        # Subclasses implement the task body here\n        pass\n"
  },
  {
    "path": "python/exer08_exec_service_main.py",
    "content": "'''\nEXECUTOR SERVICE & THREAD POOL IMPLEMENTATION\n'''\n\nimport time\nfrom exer08_exec_service_itask import ITask\nfrom exer08_exec_service_v0a import MyExecServiceV0A\nfrom exer08_exec_service_v0b import MyExecServiceV0B\nfrom exer08_exec_service_v1a import MyExecServiceV1A\nfrom exer08_exec_service_v1b import MyExecServiceV1B\nfrom exer08_exec_service_v2a import MyExecServiceV2A\nfrom exer08_exec_service_v2b import MyExecServiceV2B\n\n\n\nclass MyTask(ITask):\n    def __init__(self, task_id: str):\n        self.id = task_id\n\n    def run(self):\n        print(f'Task {self.id} is starting')\n        time.sleep(3)\n        print(f'Task {self.id} is completed')\n\n\n\nNUM_THREADS = 2\nNUM_TASKS = 5\n\nexec_service = MyExecServiceV0A(NUM_THREADS)\n\nlsttask = [MyTask(chr(i + 65)) for i in range(NUM_TASKS)]\n\nfor task in lsttask:\n    exec_service.submit(task)\n\nprint('All tasks are submitted')\n\nexec_service.wait_task_done()\nprint('All tasks are completed')\n\nexec_service.shutdown()\n"
  },
  {
    "path": "python/exer08_exec_service_v0a.py",
    "content": "'''\nMY EXECUTOR SERVICE\n\nVersion 0A: The easiest executor service\n- It uses a blocking queue as underlying mechanism.\n'''\n\nimport time\nfrom queue import Queue\nimport threading\nfrom exer08_exec_service_itask import ITask\n\n\n\nclass MyExecServiceV0A:\n    class EmptyTask(ITask):\n        def run(self):\n            pass\n\n\n    def __init__(self, num_threads: int):\n        self.__num_threads = num_threads\n        self.__lstth = []\n        self.__task_pending = Queue()\n\n        for _ in range(self.__num_threads):\n            self.__lstth.append(\n                threading.Thread(target=MyExecServiceV0A.__thread_worker_func, args=(self,))\n            )\n\n        for th in self.__lstth:\n            th.start()\n\n\n    def submit(self, task: ITask):\n        self.__task_pending.put_nowait(task)\n\n\n    def wait_task_done(self):\n        # This ExecService is too simple,\n        # so there is no implementation for waitTaskDone()\n        time.sleep(11) # fake behaviour\n\n\n    def shutdown(self):\n        # This ExecService is too simple,\n        # so there is no implementation for shutdown()\n        print('No implementation for shutdown().')\n        print('You need to exit the app manually.')\n\n\n    @staticmethod\n    def __thread_worker_func(selfptr: 'MyExecServiceV0A'):\n        task_pending = selfptr.__task_pending\n\n        while True:\n            # WAIT FOR AN AVAILABLE PENDING TASK\n            task = task_pending.get()\n\n            # DO THE TASK\n            task.run()\n"
  },
  {
    "path": "python/exer08_exec_service_v0b.py",
    "content": "'''\nMY EXECUTOR SERVICE\n\nVersion 0B: The easiest executor service\n- It uses a blocking queue as underlying mechanism.\n- It supports waitTaskDone() and shutdown().\n'''\n\nimport time\nfrom queue import Queue\nimport threading\nfrom exer08_exec_service_itask import ITask\n\n\n\nclass MyExecServiceV0B:\n    class EmptyTask(ITask):\n        def run(self):\n            pass\n\n\n    def __init__(self, num_threads: int):\n        self.__num_threads = num_threads\n        self.__lstth = []\n        self.__task_pending = Queue()\n        self.__counter_task_running = 0\n        self.__force_thread_shutdown = False\n\n        for _ in range(self.__num_threads):\n            self.__lstth.append(\n                threading.Thread(target=MyExecServiceV0B.__thread_worker_func, args=(self,))\n            )\n\n        for th in self.__lstth:\n            th.start()\n\n\n    def submit(self, task: ITask):\n        self.__task_pending.put_nowait(task)\n\n\n    def wait_task_done(self):\n        # This ExecService is too simple,\n        # so there is no good implementation for waitTaskDone()\n        while self.__task_pending.qsize() > 0 or self.__counter_task_running > 0:\n            time.sleep(1)\n\n\n    def shutdown(self):\n        self.__force_thread_shutdown = True\n\n        # Wait until task_pending is empty\n        self.__task_pending.join()\n\n        # Invoke blocked threads by adding \"empty\" tasks\n        for _ in range(self.__num_threads):\n            self.__task_pending.put(self.EmptyTask())\n\n        _ = [th.join() for th in self.__lstth]\n        self.__num_threads = 0\n        self.__lstth.clear()\n\n\n    @staticmethod\n    def __thread_worker_func(selfptr: 'MyExecServiceV0B'):\n        task_pending = selfptr.__task_pending\n\n        while True:\n            # WAIT FOR AN AVAILABLE PENDING TASK\n            task = task_pending.get()\n\n            # If shutdown() was called, then exit the function\n            if 
selfptr.__force_thread_shutdown:\n                break\n\n            # DO THE TASK\n            selfptr.__counter_task_running += 1\n            task.run()\n            task_pending.task_done()\n            selfptr.__counter_task_running -= 1\n"
  },
  {
    "path": "python/exer08_exec_service_v1a.py",
    "content": "'''\nMY EXECUTOR SERVICE\n\nVersion 1A: Simple executor service\n- Method \"waitTaskDone\" invokes thread sleeps in loop (which can cause performance problems).\n'''\n\nimport time\nimport threading\nfrom exer08_exec_service_itask import ITask\n\n\n\nclass MyExecServiceV1A:\n    def __init__(self, num_threads: int):\n        # self.shutdown()\n        self.__num_threads = num_threads\n        self.__lstth = []\n        self.__task_pending = []\n        self.__lk_task_pending = threading.Lock()\n        self.__cond_task_pending = threading.Condition(self.__lk_task_pending)\n        self.__force_thread_shutdown = False\n\n        with self.__lk_task_pending:\n            self.__counter_task_running = 0\n\n        for _ in range(self.__num_threads):\n            self.__lstth.append(\n                threading.Thread(target=MyExecServiceV1A.__thread_worker_func, args=(self,))\n            )\n\n        for th in self.__lstth:\n            th.start()\n\n\n    def submit(self, task: ITask):\n        with self.__lk_task_pending:\n            self.__task_pending.append(task)\n            self.__cond_task_pending.notify()\n\n\n    def wait_task_done(self):\n        done = False\n        while True:\n            with self.__lk_task_pending:\n                if len(self.__task_pending) == 0 and self.__counter_task_running == 0:\n                    done = True\n\n            if done:\n                break\n\n            time.sleep(1)\n\n\n    def shutdown(self):\n        if not hasattr(self, f'_{self.__class__.__name__}__lstth'):\n            return\n\n        self.__force_thread_shutdown = True\n\n        with self.__lk_task_pending:\n            self.__task_pending.clear()\n            self.__cond_task_pending.notify_all()\n\n        _ = [th.join() for th in self.__lstth]\n        self.__num_threads = 0\n        self.__lstth.clear()\n\n\n    @staticmethod\n    def __thread_worker_func(selfptr: 'MyExecServiceV1A'):\n        task_pending = 
selfptr.__task_pending\n        lk_task_pending = selfptr.__lk_task_pending\n        cond_task_pending = selfptr.__cond_task_pending\n\n        while True:\n            with lk_task_pending:\n                # WAIT FOR AN AVAILABLE PENDING TASK\n                while len(task_pending) == 0 and not selfptr.__force_thread_shutdown:\n                    cond_task_pending.wait()\n\n                if selfptr.__force_thread_shutdown:\n                    # lk_task_pending.release()\n                    break\n\n                # GET THE TASK FROM THE PENDING QUEUE\n                task = task_pending.pop(0)\n                selfptr.__counter_task_running += 1\n\n            # DO THE TASK\n            task.run()\n\n            with lk_task_pending:\n                selfptr.__counter_task_running -= 1\n"
  },
  {
    "path": "python/exer08_exec_service_v1b.py",
    "content": "'''\nMY EXECUTOR SERVICE\n\nVersion 1B: Simple executor service\n- Method \"waitTaskDone\" uses a condition variable to synchronize.\n'''\n\nimport threading\nfrom exer08_exec_service_itask import ITask\n\n\n\nclass MyExecServiceV1B:\n    def __init__(self, num_threads: int):\n        # self.shutdown()\n        self.__num_threads = num_threads\n        self.__lstth = []\n\n        self.__task_pending = []\n        self.__lk_task_pending = threading.Lock()\n        self.__cond_task_pending = threading.Condition(self.__lk_task_pending)\n\n        self.__counter_task_running = 0\n        self.__lk_task_running = threading.Lock()\n        self.__cond_task_running = threading.Condition(self.__lk_task_running)\n\n        self.__force_thread_shutdown = False\n\n        for _ in range(self.__num_threads):\n            self.__lstth.append(\n                threading.Thread(target=MyExecServiceV1B.__thread_worker_func, args=(self,))\n            )\n\n        for th in self.__lstth:\n            th.start()\n\n\n    def submit(self, task: ITask):\n        with self.__lk_task_pending:\n            self.__task_pending.append(task)\n            self.__cond_task_pending.notify()\n\n\n    def wait_task_done(self):\n        while True:\n            with self.__lk_task_pending:\n                if len(self.__task_pending) == 0:\n                    with self.__lk_task_running:\n                        while self.__counter_task_running > 0:\n                            self.__cond_task_running.wait()\n\n                        # no pending task and no running task\n                        break\n\n\n    def shutdown(self):\n        if not hasattr(self, f'_{self.__class__.__name__}__lstth'):\n            return\n\n        self.__force_thread_shutdown = True\n\n        with self.__lk_task_pending:\n            self.__task_pending.clear()\n            self.__cond_task_pending.notify_all()\n\n        _ = [th.join() for th in self.__lstth]\n        self.__num_threads = 0\n  
      self.__lstth.clear()\n\n\n    @staticmethod\n    def __thread_worker_func(selfptr: 'MyExecServiceV1B'):\n        task_pending = selfptr.__task_pending\n        lk_task_pending = selfptr.__lk_task_pending\n        cond_task_pending = selfptr.__cond_task_pending\n\n        lk_task_running = selfptr.__lk_task_running\n        cond_task_running = selfptr.__cond_task_running\n\n        while True:\n            with lk_task_pending:\n                # WAIT FOR AN AVAILABLE PENDING TASK\n                while len(task_pending) == 0 and not selfptr.__force_thread_shutdown:\n                    cond_task_pending.wait()\n\n                if selfptr.__force_thread_shutdown:\n                    # lk_task_pending.release()\n                    break\n\n                # GET THE TASK FROM THE PENDING QUEUE\n                task = task_pending.pop(0)\n                selfptr.__counter_task_running += 1\n\n            # DO THE TASK\n            task.run()\n\n            with lk_task_running:\n                selfptr.__counter_task_running -= 1\n\n                if selfptr.__counter_task_running == 0:\n                    cond_task_running.notify()\n"
  },
  {
    "path": "python/exer08_exec_service_v2a.py",
    "content": "'''\nMY EXECUTOR SERVICE\n\nVersion 2A: The executor service storing running tasks\n- Method \"waitTaskDone\" uses a semaphore to synchronize.\n'''\n\nimport threading\nfrom exer08_exec_service_itask import ITask\n\n\n\nclass MyExecServiceV2A:\n    def __init__(self, num_threads: int):\n        # self.shutdown()\n        self.__num_threads = num_threads\n        self.__lstth = []\n        self.__task_pending = []\n        self.__lk_task_pending = threading.Lock()\n        self.__cond_task_pending = threading.Condition(self.__lk_task_pending)\n        self.__task_running = []\n        self.__lk_task_running = threading.Lock()\n        self.__counter_task_running = threading.Semaphore(0)\n        self.__force_thread_shutdown = False\n\n        for _ in range(self.__num_threads):\n            self.__lstth.append(\n                threading.Thread(target=MyExecServiceV2A.__thread_worker_func, args=(self,))\n            )\n\n        for th in self.__lstth:\n            th.start()\n\n\n    def submit(self, task: ITask):\n        with self.__lk_task_pending:\n            self.__task_pending.append(task)\n            self.__cond_task_pending.notify()\n\n\n    def wait_task_done(self):\n        while True:\n            self.__counter_task_running.acquire()\n\n            with self.__lk_task_pending, self.__lk_task_running:\n                if len(self.__task_pending) == 0 and len(self.__task_running) == 0:\n                    break\n\n\n    def shutdown(self):\n        if not hasattr(self, f'_{self.__class__.__name__}__lstth'):\n            return\n\n        self.__force_thread_shutdown = True\n\n        with self.__lk_task_pending:\n            self.__task_pending.clear()\n            self.__cond_task_pending.notify_all()\n\n        _ = [th.join() for th in self.__lstth]\n        self.__num_threads = 0\n        self.__lstth.clear()\n\n\n    @staticmethod\n    def __thread_worker_func(selfptr: 'MyExecServiceV2A'):\n        task_pending = 
selfptr.__task_pending\n        lk_task_pending = selfptr.__lk_task_pending\n        cond_task_pending = selfptr.__cond_task_pending\n\n        task_running = selfptr.__task_running\n        lk_task_running = selfptr.__lk_task_running\n        counter_task_running = selfptr.__counter_task_running\n\n        while True:\n            with lk_task_pending:\n                # WAIT FOR AN AVAILABLE PENDING TASK\n                while len(task_pending) == 0 and not selfptr.__force_thread_shutdown:\n                    cond_task_pending.wait()\n\n                if selfptr.__force_thread_shutdown:\n                    # lk_task_pending.release()\n                    break\n\n                # GET THE TASK FROM THE PENDING QUEUE\n                task = task_pending.pop(0)\n\n                # PUSH IT TO THE RUNNING QUEUE\n                with lk_task_running:\n                    task_running.append(task)\n\n            # DO THE TASK\n            task.run()\n\n            # REMOVE IT FROM THE RUNNING QUEUE\n            with lk_task_running:\n                task_running.remove(task)\n                counter_task_running.release()\n"
  },
  {
    "path": "python/exer08_exec_service_v2b.py",
    "content": "'''\nMY EXECUTOR SERVICE\n\nVersion 2B: The executor service storing running tasks\n- Method \"waitTaskDone\" uses a condition variable to synchronize.\n'''\n\nimport threading\nfrom exer08_exec_service_itask import ITask\n\n\n\nclass MyExecServiceV2B:\n    def __init__(self, num_threads: int):\n        # self.shutdown()\n        self.__num_threads = num_threads\n        self.__lstth = []\n        self.__task_pending = []\n        self.__lk_task_pending = threading.Lock()\n        self.__cond_task_pending = threading.Condition(self.__lk_task_pending)\n        self.__task_running = []\n        self.__lk_task_running = threading.Lock()\n        self.__cond_task_running = threading.Condition(self.__lk_task_running)\n        self.__force_thread_shutdown = False\n\n        for _ in range(self.__num_threads):\n            self.__lstth.append(\n                threading.Thread(target=MyExecServiceV2B.__thread_worker_func, args=(self,))\n            )\n\n        for th in self.__lstth:\n            th.start()\n\n\n    def submit(self, task: ITask):\n        with self.__lk_task_pending:\n            self.__task_pending.append(task)\n            self.__cond_task_pending.notify()\n\n\n    def wait_task_done(self):\n        while True:\n            with self.__lk_task_pending:\n                if len(self.__task_pending) == 0:\n                    with self.__lk_task_running:\n                        while len(self.__task_running) > 0:\n                            self.__cond_task_running.wait()\n\n                        # no pending task and no running task\n                        break\n\n\n    def shutdown(self):\n        if not hasattr(self, f'_{self.__class__.__name__}__lstth'):\n            return\n\n        self.__force_thread_shutdown = True\n\n        with self.__lk_task_pending:\n            self.__task_pending.clear()\n            self.__cond_task_pending.notify_all()\n\n        _ = [th.join() for th in self.__lstth]\n        self.__num_threads = 
0\n        self.__lstth.clear()\n\n\n    @staticmethod\n    def __thread_worker_func(selfptr: 'MyExecServiceV2B'):\n        task_pending = selfptr.__task_pending\n        lk_task_pending = selfptr.__lk_task_pending\n        cond_task_pending = selfptr.__cond_task_pending\n\n        task_running = selfptr.__task_running\n        lk_task_running = selfptr.__lk_task_running\n        cond_task_running = selfptr.__cond_task_running\n\n        while True:\n            with lk_task_pending:\n                # WAIT FOR AN AVAILABLE PENDING TASK\n                while len(task_pending) == 0 and not selfptr.__force_thread_shutdown:\n                    cond_task_pending.wait()\n\n                if selfptr.__force_thread_shutdown:\n                    # lk_task_pending.release()\n                    break\n\n                # GET THE TASK FROM THE PENDING QUEUE\n                task = task_pending.pop(0)\n\n                # PUSH IT TO THE RUNNING QUEUE\n                with lk_task_running:\n                    task_running.append(task)\n\n            # DO THE TASK\n            task.run()\n\n            # REMOVE IT FROM THE RUNNING QUEUE\n            with lk_task_running:\n                task_running.remove(task)\n                cond_task_running.notify()\n"
  },
  {
    "path": "python/mylib_latch.py",
    "content": "'''\n/******************************************************\n*\n* File name:    mylib_latch.py\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The count-down latch implementation in Python 3\n*\n******************************************************/\n'''\n\n\nimport threading\n\n\n\nclass CountDownLatch:\n    def __init__(self, count: int):\n        if count < 0:\n            raise ValueError('count must be a non-negative integer')\n\n        self.__count = count\n        self.__cond = threading.Condition()\n\n\n    def get_count(self) -> int:\n        return self.__count\n\n\n    def count_down(self):\n        with self.__cond:\n            if self.__count <= 0:\n                return\n\n            self.__count -= 1\n\n            if self.__count <= 0:\n                self.__cond.notify_all()\n\n\n    def wait(self):\n        with self.__cond:\n            self.__cond.wait_for(lambda : self.__count <= 0)\n"
  },
  {
    "path": "python/mylib_rwlock.py",
    "content": "'''\n/******************************************************\n*\n* File name:    mylib_rwlock.py\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The read-write lock implementation in Python 3\n*               Underlying mechanism: The Mutex\n*\n******************************************************/\n'''\n\n\nimport threading\n\n\n\nclass ReadWriteLock:\n    def __init__(self):\n        self.__lk_service_queue = threading.Lock()\n        self.__lk_resource = threading.Lock()\n        self.__lk_reader_count = threading.Lock()\n        self.__reader_count = 0\n        self.__rlock = self.ReadLock(self)\n        self.__wlock = self.WriteLock(self)\n\n\n    def get_reader_count(self) -> int:\n        return self.__reader_count\n\n\n    def acquire_write(self):\n        with self.__lk_service_queue:\n            self.__lk_resource.acquire()\n\n\n    def release_write(self):\n        self.__lk_resource.release()\n\n\n    def acquire_read(self):\n        with self.__lk_service_queue:\n            with self.__lk_reader_count:\n                self.__reader_count += 1\n                if self.__reader_count == 1:\n                    self.__lk_resource.acquire()\n\n\n    def release_read(self):\n        with self.__lk_reader_count:\n            self.__reader_count -= 1\n            if self.__reader_count == 0:\n                self.__lk_resource.release()\n\n\n    def readlock(self) -> 'ReadWriteLock.ReadLock':\n        return self.__rlock\n\n\n    def writelock(self) -> 'ReadWriteLock.WriteLock':\n        return self.__wlock\n\n\n    class ReadLock:\n        def __init__(self, owner: 'ReadWriteLock'):\n            self.__owner = owner\n\n        def __enter__(self):\n            self.__owner.acquire_read()\n\n        def __exit__(self, exc_type, exc_value, exc_traceback):\n            self.__owner.release_read()\n\n\n    class WriteLock:\n        def 
__init__(self, owner: 'ReadWriteLock'):\n            self.__owner = owner\n\n        def __enter__(self):\n            self.__owner.acquire_write()\n\n        def __exit__(self, exc_type, exc_value, exc_traceback):\n            self.__owner.release_write()\n"
  },
  {
    "path": "python/mylib_rwlock2.py",
    "content": "'''\n/******************************************************\n*\n* File name:    mylib_rwlock2.py\n*\n* Author:       Name:   Thanh Nguyen\n*               Email:  thanh.it1995(at)gmail(dot)com\n*\n* License:      3-Clause BSD License\n*\n* Description:  The read-write lock implementation in Python 3\n*               Underlying mechanism: The Condition Variable\n*\n******************************************************/\n'''\n\n\nimport threading\n\n\n\nclass ReadWriteLock:\n    def __init__(self):\n        self.__cond = threading.Condition()\n        self.__reader_count = 0\n        self.__writer = False\n        self.__rlock = self.ReadLock(self)\n        self.__wlock = self.WriteLock(self)\n\n\n    def get_reader_count(self) -> int:\n        return self.__reader_count\n\n\n    def acquire_write(self):\n        with self.__cond:\n            self.__cond.wait_for(lambda: not self.__writer and self.__reader_count <= 0)\n            self.__writer = True\n\n\n    def release_write(self):\n        with self.__cond:\n            self.__writer = False\n            self.__cond.notify_all()\n\n\n    def acquire_read(self):\n        with self.__cond:\n            self.__cond.wait_for(lambda: not self.__writer)\n            self.__reader_count += 1\n\n\n    def release_read(self):\n        with self.__cond:\n            self.__reader_count -= 1\n            if self.__reader_count <= 0:\n                self.__cond.notify_all()\n\n\n    def readlock(self) -> 'ReadWriteLock.ReadLock':\n        return self.__rlock\n\n\n    def writelock(self) -> 'ReadWriteLock.WriteLock':\n        return self.__wlock\n\n\n    class ReadLock:\n        def __init__(self, owner: 'ReadWriteLock'):\n            self.__owner = owner\n\n        def __enter__(self):\n            self.__owner.acquire_read()\n\n        def __exit__(self, exc_type, exc_value, exc_traceback):\n            self.__owner.release_read()\n\n\n    class WriteLock:\n        def __init__(self, owner: 
'ReadWriteLock'):\n            self.__owner = owner\n\n        def __enter__(self):\n            self.__owner.acquire_write()\n\n        def __exit__(self, exc_type, exc_value, exc_traceback):\n            self.__owner.release_write()\n"
  },
  {
    "path": "references.md",
    "content": "# REFERENCES\n\n## General\n\n- [Columbia University, W4118 Operating Systems I (Junfeng Yang), lecture 8 (threads)](http://www.cs.columbia.edu/~junfeng/12sp-w4118/lectures/l08-thread.pdf)\n- <http://math.hws.edu/javanotes/c12/exercises.html>\n- <https://hpc-tutorials.llnl.gov/posix/>\n- <https://docs.oracle.com/cd/E19455-01/806-5257/>\n- <https://en.wikipedia.org/wiki/Race_condition>\n- <https://thispointer.com/category/multithreading/>\n- <https://en.cppreference.com/w/cpp/thread>\n- <https://docs.python.org/3/library/threading.html>\n- <http://tutorials.jenkov.com/java-concurrency>\n\n&nbsp;\n\n## Synchronization\n\n- Mutex:\n  - <https://www.ibm.com/docs/en/aix/7.2?topic=programming-using-mutexes>\n\n- Reentrant lock:\n  - <https://stackoverflow.com/questions/11821801/why-use-a-reentrantlock-if-one-can-use-synchronizedthis>\n\n- Barrier:\n  - <http://dotnetpattern.com/threading-barrier>\n\n- Read/write lock:\n  - <https://docs.python.org/3/library/threading.html#rlock-objects>\n  - <https://www.ibm.com/docs/en/aix/7.2?topic=programming-using-readwrite-locks>\n\n- Comparison:\n  - <https://www.baeldung.com/cs/semaphore-vs-mutex>\n  - <https://www.baeldung.com/java-binary-semaphore-vs-reentrant-lock>\n  - <https://stackoverflow.com/questions/3513045/conditional-variable-vs-semaphore>\n\n&nbsp;\n\n## Classic problems\n\n- <https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem>\n- <https://en.wikipedia.org/wiki/Readers%E2%80%93writers_problem>\n- <https://www.tutorialspoint.com/dining-philosophers-problem-dpp>\n- <http://www.cse.chalmers.se/edu/year/2015/course/TDA383_LP3/lecture3.html>\n"
  }
]