[
  {
    "path": ".gitignore",
    "content": "# Compiled Object files\n*.slo\n*.lo\n*.o\n\n# Compiled Dynamic libraries\n*.so\n*.dylib\n\n# Compiled Static libraries\n*.lai\n*.la\n*.a\n"
  },
  {
    "path": "CMakeLists.txt",
    "content": "project( fc_malloc )\ncmake_minimum_required( VERSION 2.8.8 )\n\nIF( WIN32 )\n\tADD_DEFINITIONS( -DBOOST_CONTEXT_NO_LIB )\n\tADD_DEFINITIONS( -D_SCL_SECURE_NO_WARNINGS )\n\tADD_DEFINITIONS( -D_WIN32_WINNT=0x0501 )\n\tADD_DEFINITIONS( -D_CRT_SECURE_NO_WARNINGS )\nELSE(WIN32)\n   SET(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++0x -Wall -Wno-unused-local-typedefs\")\nENDIF(WIN32)\n\n\n#add_executable( m3 malloc3.cpp )\nadd_executable( fheap bench.cpp )\ntarget_link_libraries( fheap jemalloc )\n"
  },
  {
    "path": "README.md",
    "content": "fc_malloc\n=========\n\nSuper Fast, Lock-Free, Wait-Free, CAS-free, thread-safe, memory allocator. \n\nDesign\n================== \n\nThe key to developing fast multi-threaded allocators is eliminating \nlock-contention and false sharing.  Even simple atomic operations and\nspin-locks can destroy the performance of an allocation system.  The real\nchallenge is that the heap is a multi-producer, multi-consumer resource \nwhere all threads need to read and write the common memory pool.\n\nWith fc_malloc I borrowed design principles from the LMAX disruptor and\nassigned a dedicated thread for moving free blocks from all of the other\nthreads to the shared pool.  This makes all threads 'single producers' of\nfree blocks and therefore it is possible to have a lock-free, wait-free \nper-thread free list.   This also makes a single producer of 'free blocks'\nwhich means that blocks can be aquired with a single-producer, multiple\nconsumer pattern.  \n\nWhen there is a need for more memory and existing free-lists are not sufficent,\neach thread maps its own range from the OS in 4 MB chunks. Allocating from\nthis 'cache miss' is not much slower than allocating stack space and\nrequires no contention.  Requests for larger than 4MB are allocated direclty\nfrom the OS via mmap.  \n\nInitial Benchmarks\n==================\n\nTesting memory allocation systems can be very difficulty and 'artificial tests',\nare not always the most accurate predictors of real world performance, but I \nsought to develop a test that would stress the allocation system, particularlly\nin multi-threaded environments.\n\nThe test I came up with creates 1 array per thread containing space for 500K \nallocations.  I then assigned each thread the job of randomly allocating \nempty slots in 1 array and randomly deallocating random slots in another array. \nThe result is a 'random' set of producer-consumer threads.\n\nEach allocation was 128 bytes.  
Future versions of this benchmark will include\nrandom sizes as well.  \n\n\n| Benchmark                  | glibc       |  jemalloc   |   fc_malloc |\n|----------------------------|-------------|-------------|-------------|\n| Random Single Threaded     | 5.8s        |  4.5s       |  2.6s       |\n| Random Multi Threaded (10) | 18.2s       |  13.6s      |  6.8s       |\n\nThreads|fc_malloc (s)|jemalloc (s)|fc_malloc RAM (MB)|jemalloc RAM (MB)\n---|---|---|---|---\n1|4.8|9.7|97|84.3\n2|5.9|14.8|120|104\n3|6.5|16.8|145|123\n4|7|18|167|142\n5|8|18.9|185.5|160\n6|8.7|20.3|214.3|189\n7|9.9|22.9|238|212\n8|11.4|25.2|257|224\n9|12.5|26.1|278|244\n10|12.9|27.9|308|270\n\nAs the results show, fc_malloc is over 2x faster than the\nstock malloc even in the single-threaded case, and 2.6x faster\nin the multi-threaded case.  The real test, though, is\nthe comparison to jemalloc, which is generally considered one of the\nhighest-performing alternative allocators available.  Here fc_malloc\nis still 2x faster in the multi-threaded test.\n\n"
  },
  {
    "path": "bench.cpp",
    "content": "#include \"fixed_pool.hpp\"\n#include <thread>\n#include <string.h>\n#include <stdio.h>\n#include <iostream>\n#include <sstream>\n#define BENCH_SIZE ( (1024*16*2) )\n#define ROUNDS 3000\n\n/*  SEQUENTIAL BENCH\nint main( int argc, char** argv )\n{\n  if( argc == 2 && argv[1][0] == 'S' )\n  {\n     printf( \"fp_malloc\\n\");\n     for( int i = 0; i < 50000000; ++i )\n     {\n        char* test = fp_malloc( 128 );\n        assert( test != nullptr );\n        test[0] = 1;\n        free2( test );\n     }\n  }\n  if( argc == 2 && argv[1][0] == 's' )\n  {\n     printf( \"malloc\\n\");\n     for( int i = 0; i < 50000000; ++i )\n     {\n        char* test = (char*)malloc( 128 );\n        assert( test != nullptr );\n        test[0] = 1;\n        free( test );\n     }\n  }\n  fprintf( stderr, \"done\\n\");\n // sleep(5);\n  return 0;\n}\n*/\n/* RANDOM BENCH */\nstd::vector<int64_t*>  buffers[16];\nvoid pc_bench_worker( int pro, int con, char* (*do_alloc)(int s), void (*do_free)(char*)  )\n{\n  int64_t total_alloc = 0;\n  int64_t total_free = 0;\n  int64_t total_block_alloc = 0;\n  int64_t total_free_alloc = 0;\n\n  for( int r = 0; r < ROUNDS; ++r )\n  {\n      for( size_t x = 0; x < BENCH_SIZE/4 ; ++x )\n      {\n         uint32_t p = rand() % buffers[pro].size();\n         if( !buffers[pro][p] )\n         {\n           uint64_t si = 10000;//16 +rand()%(1024); //4000;//32 + rand() % (1<<16);\n           total_alloc += si;\n           int64_t* r = (int64_t*)do_alloc( si );\n      //     block_header* bh = ((block_header*)r)-1;\n     //      assert( bh->size() >= si + 8 );\n      //     fprintf( stderr, \"alloc: %p  %llu  of %llu  %u\\n\", r, si, bh->size(), bh->_size );\n           assert( r != nullptr );\n         //  assert( r[0] != 99 ); \n         \n           memset( r, 0x00, si );\n        //   r[0] = 99;\n    //       total_block_alloc += r[1] = ((block_header*)r)[-1].size();\n           buffers[pro][p] = r;\n         }\n      }\n      for( size_t x = 
0; x < BENCH_SIZE/4 ; ++x )\n      {\n         uint32_t p = rand() % buffers[con].size();\n         assert( p < buffers[con].size() );\n         assert( con < 16 );\n         assert( con >= 0 );\n         if( buffers[con][p] ) \n         { \n          // assert( buffers[con][p][0] == 99 ); \n          // buffers[con][p][0] = 0; \n         //  total_free += buffers[con][p][0];\n         //  total_free_alloc += buffers[con][p][1];\n           do_free((char*)buffers[con][p]);\n           buffers[con][p] = nullptr;\n         }\n      }\n      /*\n      fprintf( stderr, \"\\n Total Alloc: %lld   Total Free: %lld   Net: %lld\\n\", total_alloc, total_free, (total_alloc-total_free) );\n      fprintf( stderr, \"\\n Total Block Size: %lld   Total Free Blocks: %lld   Net: %lld\\n\\n\", total_block_alloc, total_free_alloc, (total_block_alloc-total_free_alloc) );\n      auto needed = (total_alloc-total_free);\n      auto used = (total_block_alloc-total_free_alloc);\n      auto wasted = used - needed;\n      fprintf( stderr, \"\\n Total Waste: %lld    %f\\n\\n\", wasted,  double(used)/double(needed) );\n      */\n  }\n}\n\n\n// Spawn n worker threads; thread k gets the producer/consumer pair (n-k, 1+k),\n// so every buffer is filled by one thread and drained by another.\nvoid pc_bench(int n, char* (*do_alloc)(int s), void (*do_free)(char*)  )\n{\n  assert( n >= 1 && n <= 10 );\n  for( int i = 0; i < 16; ++i )\n  {\n    buffers[i].resize( BENCH_SIZE );\n    memset( buffers[i].data(), 0, sizeof(int64_t*) * BENCH_SIZE );\n  }\n\n  std::vector<std::thread> threads;\n  threads.reserve( n );\n  for( int s = 1; n >= 1; --n, ++s )\n    threads.emplace_back( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n  for( auto& t : threads )\n    t.join();\n}\n\nvoid pc_bench_st(char* (*do_alloc)(int s), void (*do_free)(char*)  )\n{\n  for( int i = 0; i < 16; ++i )\n  {\n    buffers[i].resize( BENCH_SIZE );\n    memset( buffers[i].data(), 0, sizeof(int64_t*) * BENCH_SIZE );\n  }\n  pc_bench_worker( 1, 1, do_alloc, do_free );\n}\n//#include <tbb/scalable_allocator.h>\n\nchar* do_malloc(int s)\n{ \n    return (char*)::malloc(s); \n//   return (char*)scalable_malloc(s);\n}\nvoid  do_malloc_free(char* c)\n{ \n//    scalable_free(c);\n   ::free(c); \n}\n\nchar* do_fc_malloc(int s)\n{ \n  return (char*)fp_malloc(s);\n//    return (char*)fc_malloc(s); \n//   return (char*)scalable_malloc(s);\n}\nvoid  do_fc_free(char* c)\n{ \n  fp_free((void*)c);\n//    scalable_free(c);\n//   fc_free(c); \n}\n\n\nint main( int argc, char** argv )\n{\n  /*\n  char* a = static_heap.alloc32();\n  char* b = static_heap.alloc32();\n  char* c = static_heap.alloc32();\n  fprintf( 
stderr, \"%p %p %p\\n\", a, b, c );\n  static_heap.free32(b);\n  char* d = static_heap.alloc32();\n  fprintf( stderr, \"%p %p %p\\n\", d, b, c );\n  return 0;\n  */\n\n  if( argc > 2 && argv[1][0] == 'm' )\n  {\n    std::cerr<<\"malloc multi\\n\";\n    pc_bench( atoi(argv[2]), do_malloc, do_malloc_free );\n    return 0;\n  }\n  if( argc > 2 && argv[1][0] == 'M' )\n  {\n    std::cerr<<\"hash malloc multi\\n\";\n//    pc_bench( atoi(argv[2]), do_fp_malloc, do_fp_free );\n    pc_bench( atoi(argv[2]), do_fc_malloc, do_fc_free );\n    return 0;\n  }\n  if( argc > 1 && argv[1][0] == 's' )\n  {\n    std::cerr<<\"malloc single\\n\";\n    pc_bench_st( do_malloc, do_malloc_free );\n    return 0;\n  }\n  if( argc > 1 && argv[1][0] == 'S' )\n  {\n    std::cerr<<\"hash malloc single\\n\";\n    pc_bench_st( do_fc_malloc, do_fc_free );\n    return 0;\n  }\n  std::string line;\n  std::getline( std::cin, line );\n    std::vector<char*> data;\n  while( !std::cin.eof() )\n  {\n    std::stringstream ss(line);\n    std::string cmd;\n\n    ss >> cmd;\n    if( cmd == \"a\" ) // allocate new data\n    {\n      int64_t bytes;\n      ss >> bytes;\n      data.push_back( (char*)fp_malloc( bytes ) );\n    }\n    if( cmd == \"f\" ) // free data at index\n    {\n      int64_t idx;\n      ss >> idx;\n      fp_free( data[idx] );\n      data.erase( data.begin() + idx );\n    }\n    if( cmd == \"c\" ) // print cache\n    {\n    //  thread_allocator::get().print_cache();\n    }\n    if( cmd == \"p\" ) // print heap\n    {\n\n    }\n    if( cmd == \"l\" ) // list data\n    {\n       fprintf( stderr, \"ID]  ptr  _size   _prev_size\\n\");\n       fprintf( stderr, \"-----------------------------\\n\");\n       for( size_t i = 0; i < data.size(); ++i )\n       {\n     //     block_header* bh = reinterpret_cast<block_header*>(data[i]-8);\n          fprintf( stderr, \"%d]  %p \\n\", int(i), data[i]);\n\n       }\n    }\n    std::getline( std::cin, line );\n  }\n  return 0;\n}\n#if 0\n  printf( \"alloc\\n\" 
);\n  char* tmp = fp_malloc( 61 );\n  usleep( 1000 );\n  char* tmp2 = fp_malloc( 134 );\n  usleep( 1000 );\n  char* tmp4 = fp_malloc( 899 );\n  printf( \"a %p  b %p   c %p\\n\", tmp, tmp2, tmp4 );\n\n  usleep( 1000 );\n\n  printf( \"free\\n\" );\n  free2( tmp );\n  usleep( 1000 );\n  free2( tmp2 );\n  usleep( 1000 );\n  free2( tmp4 );\n\n  usleep( 1000*1000 );\n\n  printf( \"alloc again\\n\" );\n  char* tmp1 = fp_malloc( 61 );\n  usleep( 1000 );\n  char* tmp3 = fp_malloc( 134 );\n  usleep( 1000 );\n  char* tmp5 = fp_malloc( 899 );\n  printf( \"a %p  b %p   c %p\\n\", tmp1, tmp3, tmp5 );\n  free2( tmp1 );\n  free2( tmp3 );\n  free2( tmp5 );\n\n  usleep( 1000*1000 );\n\n  return 0;\n}\n#endif\n\n\n\n\n\n\n"
  },
  {
    "path": "bit_index.cpp",
    "content": "#include \"bit_index.hpp\"\n#include <stdio.h>\n\nint main( int argc, char** argv )\n{\n\n    bit_index<64*64*64> b;\n    b.set_all();\n    for( int i = 0; i < 66; ++i )\n    {\n      b.clear(i);\n      assert( !b.get(i) );\n      fprintf( stderr, \"\\nI: %d\\n\", i );\n      if( i >= 62 )\n          b.dump();\n      if( b.first_set_bit() != i+1 )\n      {\n          exit(1);\n      }\n    }\n    for( int i = 0; i < 66; ++i )\n    {\n      assert( !b.get(i) );\n    }\n    assert( b.get(67) );\n\n    return 0;\n    fprintf( stderr, \"pow64(1) = %d\\n\", pow64<1>::value );\n    fprintf( stderr, \"pow64(2) = %d\\n\", pow64<2>::value );\n    fprintf( stderr, \"log64(pow64(2)) = %d\\n\", log64<pow64<2>::value>::value );\n    fprintf( stderr, \"pow64(log64(64*64)) = %d\\n\", pow64<log64<64*64>::value>::value );\n    fprintf( stderr, \"pow64(log64(64*64*64)) = %d\\n\", pow64<log64<64*64*64>::value>::value );\n\n    fprintf(stderr, \"=========== 64 =============\\n\" ); \n    bit_index<64> _index;\n    fprintf( stderr, \"first set bit: %d\\n\", _index.first_set_bit() );\n    assert( _index.first_set_bit() == 64 );\n    _index.set( 34 );\n    fprintf( stderr, \"first set bit: %d\\n\", _index.first_set_bit() );\n    assert( _index.get(34) );\n    assert( _index.first_set_bit() == 34 );\n    _index.clear(34);\n    assert( !_index.get(34) );\n    assert( _index.first_set_bit() == 64 );\n    fprintf(stderr, \"=========== 64*64 =============\\n\" ); \n\n    bit_index<64*64> _b62;\n    _b62.set(1010);\n    fprintf( stderr, \"first set bit: %d\\n\", _b62.first_set_bit() );\n    assert( _b62.first_set_bit() == 1010 );\n    assert( _b62.get(1010) );\n    assert( _b62.clear(1010) );\n    assert( !_b62.get(1010) );\n\n    fprintf(stderr, \"=========== 64*64*64 =============\\n\" ); \n    bit_index<64*64*64> _b64;\n    fprintf( stderr, \"init first bit b64: %d\\n\", _b64.first_set_bit() );\n    _b64.set( 660 );\n    fprintf( stderr, \"first set:   %d\\n\", 
_b64.first_set_bit() );\n    assert( _b64.get(660) );\n    _b64.clear(660);\n    fprintf( stderr, \"final first bit b64: %d\\n\", _b64.first_set_bit() );\n    assert( !_b64.get(660) );\n\n    bit_index<64*64*64> _b6464;\n    fprintf( stderr, \"SET BIT 66\\n\" );\n    _b6464.set( 66 );\n    fprintf( stderr, \"first set 66?? :   %d\\n\", _b6464.first_set_bit() );\n    fprintf( stderr, \"size of %d 64*64*64\\n\", int(sizeof(_b64) ) );\n\n    bit_index<64*64*64*64> _bbb;\n    fprintf( stderr, \"size of %d  64*64*64*64  \\n\\n\\n\", int(sizeof(_bbb) ) );\n    _bbb.set(444);\n    assert(_bbb.get(444) );\n    {\n    bit_index<64*64> _bbb;\n    fprintf( stderr, \"size of %d  64*64*64*64  \\n\\n\\n\", int(sizeof(_bbb) ) );\n    _bbb.set(444);\n    assert(_bbb.get(444) );\n    }\n    /*\n    {\n    bit_index<20*64*64> _bbb;\n    fprintf( stderr, \"size of %d  64*64*64*64  \\n\\n\\n\", int(sizeof(_bbb) ) );\n    _bbb.set(444);\n    assert(_bbb.get(444) );\n    }\n    */\n\n    _index.set(3);\n    _index.set(9);\n    _index.set(27);\n\n    auto itr = _index.at( _index.first_set_bit() );\n    while( !itr.end() )\n    {\n       fprintf( stderr, \"next bit %lld\\n\", itr.bit() );\n       itr.next_set_bit();\n    }\n\n  {\n    _b62.set(3);\n    _b62.set(9);\n    _b62.set(270);\n    _b62.set(570);\n    _b62.set(1270);\n\n    auto itr = _b62.at( _b62.first_set_bit() );\n    while( !itr.end() )\n    {\n       fprintf( stderr, \"_b62 next bit %lld\\n\", itr.bit() );\n       itr.next_set_bit();\n    }\n  }\n\n    auto tmp = _bbb.begin();\n    return 0;\n}\n"
  },
  {
    "path": "bit_index.hpp",
    "content": "#pragma once\n#include <stdint.h>\n#include <assert.h>\n#include <stdio.h>\n\n#define LZERO(X)  (__builtin_clzll((X)) )\n\ntemplate<uint64_t>\nclass bit_index;\n\ntemplate<uint64_t x>\nstruct log64;\ntemplate<>\nstruct log64<64> { enum { value = 1 }; };\ntemplate<>\nstruct log64<0> { enum { value = 0 }; };\ntemplate<uint64_t x>\nstruct log64 { enum { value = 1 + log64<x/64>::value }; };\n\n\ntemplate<uint64_t x>\nstruct pow64;\n\ntemplate<>\nstruct pow64<0> { enum ev{ value = 1 }; };\n\ntemplate<uint64_t x>\nstruct pow64 { enum ev{ value = pow64<x-1>::value*64ll }; };\n\n\n\ntemplate<>\nclass bit_index<1>\n{\n  public:\n    enum size_enum { index_size = 1 };\n    void set( uint64_t pos = 0)\n    {\n      assert( pos == 0 );\n      bit = 1;\n    }\n    bool get( uint32_t pos = 0)const { return bit; }\n    uint64_t& get_bits(uint64_t ) { return bit; }\n\n    bool clear( uint64_t pos  = 0)\n    {\n      assert( pos == 0 );\n      return !(bit = 0);  \n    }\n    void clear_all() { clear(); }\n    void set_all()   { set();   }\n\n    uint64_t first_set_bit()const { return !bit; }\n    uint64_t size()const          { return 1;    }\n\n\n    struct iterator\n    {\n       public:\n          uint64_t& get_bits()      { return _self->bit; }\n          bool     end()const       { return _bit == 1;   }\n          int64_t  bit()const       { return _bit; }\n          void     set()            { _self->set(_bit); }\n          bool     clear()          { return _self->clear(_bit); }\n          bool     operator*()const { return _self->get(_bit); }\n\n          iterator&  next_set_bit()\n          {\n              _bit = 1;\n              return *this;\n          }\n\n          iterator( bit_index* s=nullptr, uint8_t b = 64 ):_self(s),_bit(b){}\n       private:\n          bit_index* _self;\n          uint8_t     _bit;\n    };\n\n    iterator at( uint64_t p ) { return iterator(this, p); }\n\n  private:\n    uint64_t bit;\n};\n\ntemplate<>\nclass bit_index<0> : 
public bit_index<1>{};\n\ntemplate<>\nclass bit_index<64>\n{\n    public:\n      enum size_enum { index_size = 64 };\n        bit_index(uint64_t s = 0):_bits(s){}\n\n        /**\n         *  Option A: use a conditional to check for 0 and return 64.\n         */\n        uint64_t first_set_bit()const     { \n            return _bits == 0 ? 64 : LZERO(_bits); \n        }\n\n        void dump( int depth )\n        {\n           for( int i = 0; i < depth; ++i )\n              fprintf( stderr, \"    \" );\n           fprintf( stderr, \"%llx\\n\", _bits );\n        }\n\n        /**\n         *  Option B: compare + shift + clz + compare + mult + or.  While the\n         *  result of LZERO(0) is undefined, multiplying it by 0 is defined.\n         *\n         *  This code may be faster or slower depending upon the cache miss rate and\n         *  the instruction-level parallelism.  Benchmarks are required.\n         */\n        //uint64_t first_set_bit()const     { return (_bits == 0)<<6 | (LZERO(_bits) * (_bits!=0)); }\n        bool     get( uint64_t pos )const { return _bits & (1ull<<(63-pos));   }\n        void     set( uint64_t pos )    \n        { \n            assert( pos < 64 );\n            _bits |= (1ull<<(63-pos));       \n        }\n        bool     clear( uint64_t pos )  \n        { \n            _bits &= ~(1ull<<(63-pos));      \n            return _bits == 0;\n        }\n\n        uint64_t size()const  { return 64;                          }\n        uint64_t count()const { return __builtin_popcountll(_bits); }\n\n        void set_all()   { _bits = -1; }\n        void clear_all() { _bits = 0;  }\n\n        uint64_t& get_bits( uint64_t bit )\n        {\n            assert( bit < 64 );\n            
return _bits;\n        }\n\n        struct iterator\n        {\n           public:\n              uint64_t& get_bits()       { return _self->_bits; }\n              bool     end()const       { return _bit == 64;   }\n              int64_t  bit()const       { return _bit; }\n              void     set()            { _self->set(_bit); }\n              bool     clear()          { return _self->clear(_bit); }\n              bool     operator*()const { return _self->get(_bit); }\n\n              iterator&  next_set_bit()\n              {\n                  ++_bit;\n                  if( end() ) return *this;\n                  bit_index tmp( (_self->_bits << (_bit))>>(_bit) );  \n                  _bit = tmp.first_set_bit();\n                  return *this;\n              }\n\n              iterator( bit_index* s=nullptr, uint8_t b = 64 ):_self(s),_bit(b){}\n           private:\n              bit_index* _self;\n              uint8_t     _bit;\n\n        };\n\n        iterator begin()      { return iterator(this,0); }\n        iterator at(uint8_t i){ return iterator(this,i); }\n        iterator end()        { return iterator(this,64); }\n    protected:\n        friend class iterator;\n        uint64_t _bits;\n};\n\n/**\n *   A bit_index is a bitset optimized for searching for set bits.  The\n *   operations set and clear maintain higher-level indexes to optimize\n *   finding of set bits.\n *\n *   The fundamental size is 64 bit and the first set bit can be found\n *   with a single instruction. 
For indexes up to 64*64 in size, the\n *   first set bit can be found with 2 clz + 1 compare + 1 mult + 1 add.\n *\n */\ntemplate<uint64_t Size>\nclass bit_index\n{\n    public:\n      static_assert( Size >= 64, \"smaller sizes not yet supported\" );\n\n      enum size_enum { \n         index_size        = Size,\n         sub_index_size    = (Size+63) / 64,\n         sub_index_count   = Size / sub_index_size \n      };\n       static_assert( bit_index::sub_index_count > 0, \"sub_index_count must be greater than 0\" );\n       static_assert( bit_index::sub_index_count <= 64, \"sub_index_count must be no greater than 64\" );\n\n\n      void dump( int depth = 0 )\n      {\n           _base_index.dump( depth + 1 );\n           for( int i = 0; i < (int)sub_index_count; ++i )\n             _sub_index[i].dump( depth + 2 );\n      }\n\n      \n      uint64_t size()const  { return index_size; }\n      uint64_t first_set_bit()const\n      {\n          uint64_t base = _base_index.first_set_bit();\n          if( base >= sub_index_count ) \n          {\n              return Size;\n          }\n          auto subidx = _sub_index[base].first_set_bit();\n          return base * sub_index_size + subidx;\n      }\n      bool get( uint64_t bit )const\n      {\n         assert( bit < Size );\n         int64_t sub_idx     = (bit/sub_index_size);\n         int64_t sub_idx_bit = (bit%sub_index_size);\n         return _sub_index[sub_idx].get(  sub_idx_bit );\n      }\n      \n      void set( uint64_t bit )\n      {\n         assert( bit < Size );\n         int64_t sub_idx     = (bit/sub_index_size);\n         int64_t sub_idx_bit = (bit%sub_index_size);\n         _base_index.set(sub_idx);\n         return _sub_index[sub_idx].set( sub_idx_bit );\n      }\n      \n      bool clear( uint64_t bit )\n      {\n         assert( bit < Size );\n         int64_t sub_idx     = (bit/sub_index_size);\n         int64_t sub_idx_bit = (bit%sub_index_size);\n         if( _sub_index[sub_idx].clear( sub_idx_bit ) )\n            return _base_index.clear(sub_idx);\n         return false;\n      }\n      \n      void set_all()\n      {\n         _base_index.set_all();\n         for( uint64_t i = 0; i < sub_index_count; ++i )\n         {\n           _sub_index[i].set_all();\n         }\n      }\n      \n      void clear_all()\n      {\n         _base_index.clear_all();\n         for( uint64_t i = 0; i < sub_index_count; ++i )\n         {\n           _sub_index[i].clear_all();\n         }\n      }\n      \n      uint64_t count()const\n      {\n         uint64_t c = 0;\n         for( uint64_t i = 0; i < sub_index_count; ++i )\n         {\n            c+=_sub_index[i].count();\n         }\n         return c;\n      }\n\n      /**\n       *  Returns the uint64_t that contains bit\n       */\n      uint64_t& get_bits( uint64_t bit )\n      {\n         int64_t sub_idx      = (bit/sub_index_size);\n         int64_t sub_idx_bit  = (bit%sub_index_size);\n         return _sub_index[sub_idx].get_bits( sub_idx_bit );\n      }\n\n\n      struct iterator\n      {\n         public:\n            uint64_t&  get_bits()         { return sub_itr.get_bits(); }\n            bool       operator*()const   { return *sub_itr;           }\n            bool       end()const { return sub_idx >= sub_index_count; }\n            int64_t    bit()const { return pos; }\n            void       set() \n            { \n                bit_idx->_base_index.set(sub_idx); \n                sub_itr.set();\n            }\n            bool       clear() \n            { \n                if( sub_itr.clear() )\n                {\n                  return bit_idx->_base_index.clear(sub_idx); \n                }\n                return false;\n            }\n\n            /**\n             *  Find the next set bit after this one.\n             */\n        
    iterator&  next_set_bit()\n            {\n                if( end() ) return *this;\n                sub_itr.next_set_bit();\n                if( sub_itr.end() )\n                {\n                   sub_idx = bit_idx->_base_index.at(sub_idx).next_set_bit().bit();\n                   if( end() )\n                   {\n                      pos = Size;\n                      return *this;\n                   }\n                   auto fb = bit_idx->_sub_index[sub_idx].first_set_bit();\n                   sub_itr = bit_idx->_sub_index[sub_idx].at(fb);\n                }\n                pos = sub_idx * sub_index_size + sub_itr.bit();\n                return *this;\n            }\n\n            /**\n             *  Move to the next bit.\n             */\n            iterator&  operator++()\n            { \n               assert( !end() );\n               ++pos;\n               ++sub_itr;\n               if( sub_itr.end() )\n               {\n                  ++sub_idx;\n                  if( !end() )\n                  {\n                     sub_itr = bit_idx->_sub_index[sub_idx].begin();\n                  }\n                  else pos = Size;\n               }\n               return *this;\n            }\n            iterator& operator++(int) { return this->operator++(); }\n            iterator operator+(uint64_t delta) { return iterator( bit_idx, pos + delta ); }\n\n\n            iterator( bit_index* self=nullptr, int64_t bit=Size)\n            :bit_idx(self),pos(bit),sub_idx(bit/sub_index_size)\n            {\n               if( bit_idx && sub_idx < (int64_t)sub_index_count )\n                  sub_itr = bit_idx->_sub_index[sub_idx].at(bit%sub_index_size);\n            }\n            iterator& operator=(const iterator& i )\n            {\n               bit_idx = i.bit_idx;\n               pos = i.pos;\n               sub_idx = i.sub_idx;\n               sub_itr = i.sub_itr;\n               return *this;\n            }\n         private:\n            friend class bit_index;\n            bit_index*                          bit_idx;\n  
          int64_t                             pos;\n            int8_t                              sub_idx;\n            typename bit_index<sub_index_size>::iterator sub_itr;\n      };\n\n      iterator begin()            { return iterator( this, 0 );    }\n      iterator end()              { return iterator( this, Size ); }\n      iterator at(int64_t p)      { return iterator( this, p );    }\n    protected:\n      friend class iterator;\n      bit_index<64>              _base_index;\n      bit_index<sub_index_size>  _sub_index[sub_index_count];\n};\n\n\n\n\n\n"
  },
  {
    "path": "disruptor.hpp",
    "content": "#pragma once\n#include <memory>\n#include <vector>\n#include <stdint.h>\n#include <unistd.h>\n#include <atomic>\n#include <assert.h>\n#include <iostream>\n\nnamespace disruptor\n{\n\nclass eof : public std::exception\n{\n   public:\n    virtual const char* what()const noexcept { return \"eof\"; }\n};\n\n\n/**\n *  A sequence number must be padded to prevent false sharing and\n *  access to the sequence number must be protected by memory barriers.\n *\n *  In addition to tracking the sequence number, additional state associated\n *  with the sequence number is also made available.  No false sharing \n *  should occur because all 'state' is only written by one thread. This\n *  extra state includes whether or not this sequence number is 'EOF' and\n *  whether or not any alerts have been published.\n */\nclass sequence\n{\n   public:\n      sequence( int64_t v = 0 ):_sequence(v),_alert(0){}\n\n      int64_t  lazy_read()const                   { return *((volatile int64_t*)&_sequence);}// .load( std::memory_order_acquire); }\n      //volatile int64_t& lazy_write()              { return *((volatile int64_t*)&_sequence);}// .load( std::memory_order_acquire); }\n      int64_t  aquire()const                      { return _sequence.load( std::memory_order_acquire); }\n      int64_t  aquire_pending()const              { return _pending_sequence.load( std::memory_order_acquire); }\n      void     lazy_store( int64_t value )        { _sequence.store(value, std::memory_order_relaxed); }\n      void     store( int64_t value )             { _sequence.store(value, std::memory_order_release); }\n      void     store_pending( int64_t value )     { _pending_sequence.store(value, std::memory_order_release); }\n      void     set_eof()    { _alert = 1; }\n      void     set_alert()  { _alert = -1; }\n      bool     eof()const   { return _alert == 1; }\n      bool     alert()const { return _alert != 0; }\n\n      int64_t atomic_increment_and_get( uint64_t inc ) \n      { 
\n        return _sequence.fetch_add(inc, std::memory_order::memory_order_release) + inc;\n      }\n\n      int64_t increment_and_get( uint64_t inc ) \n      { \n          auto tmp = aquire() + inc;\n          store( tmp );\n          return tmp;\n      }\n\n   private:\n      std::atomic<int64_t> _sequence;\n      volatile int64_t     _alert;\n      std::atomic<int64_t> _pending_sequence;\n      int64_t              _post_pad[5];\n};\n\nclass event_cursor;\n\n/**\n *   A barrier will block until all cursors it is following\n *   have moved past a given position.  The barrier uses a\n *   progressive backoff strategy of busy waiting for 1000 \n *   tries, yielding for 1000 tries, and then usleeping in 10 ms\n *   intervals.   \n *\n *   No wait conditions or locks are used because they would\n *   be 'intrusive' to publishers, which would have to check whether\n *   or not they must 'notify'.  The progressive backoff approach\n *   uses little CPU and is a good compromise for most use cases.\n */\nclass barrier \n{\n   public:\n      void follows( const event_cursor& e );\n\n      /**\n       *  Used to check how much you can read/write without blocking.\n       *\n       *  @return the min position of every cursor this barrier follows.\n       */\n      int64_t get_min();\n\n      /*\n       *  This method will wait until every cursor this barrier follows is >= pos,\n       *  using a progressive backoff of busy wait, yield, and usleep(10*1000).\n       *\n       *  @return the minimum value of every dependency\n       */\n      int64_t wait_for( int64_t pos )const;\n   private:\n      mutable int64_t                   _last_min;\n      std::vector<const event_cursor*>  _limit_seq;\n};\n\n/**\n *  Provides an automatic index into a ring buffer with\n *  a power-of-2 size.\n */\ntemplate<typename EventType, uint64_t Size = 1024>\nclass ring_buffer\n{\n    public:\n      typedef EventType event_type;\n\n      static_assert( ((Size != 0) && ((Size & (~Size + 1)) == Size)), \n                    
 \"Ring buffer's must be a power of 2\" );\n\n      /** @return a read-only reference to the event at pos */\n      const EventType& at( int64_t pos )const \n      {\n        return _buffer[pos & (Size-1)];\n      }\n\n      /** @return a reference to the event at pos */\n      EventType& at( int64_t pos )\n      {\n        return _buffer[pos & (Size-1)];\n      }\n\n      /** useful to check for contiguous ranges when EventType is\n       *  POD and memcpy can be used.  OR if the buffer is being used\n       *  by a socket dumping raw bytes in.  In which case memcpy\n       *  would have to use to ranges instead of 1.\n       */\n      int64_t get_buffer_index( int64_t pos )const { return pos & (Size-1); }\n      int64_t get_buffer_size()const               { return Size;           }\n\n    private:\n      EventType            _buffer[Size];\n};\n\n/**\n *  A cursor is used to track the location of a publisher / subscriber within\n *  the ring buffer.  Cursors track a range of entries that are waiting\n *  to be processed.  After a cursor is 'done' with an entry it can publish\n *  that fact.  \n *\n *  There are two types of cursors, read_cursors and write cursors.  read_cursors\n *  block when they need to \n *\n *  Events between [begin,end) may be processed at will for readers.  When a reader\n *  is done they can 'publish' their progress which will move begin up to\n *  published position+1.   When begin == end, the cursor must call wait_for(end), \n *  wait_for() will return a new 'end'.  
\n *\n *  @section read_cursor_example Read Cursor Example\n *  @code\n      auto source   = std::make_shared<ring_buffer<EventType,SIZE>>();\n      auto dest     = std::make_shared<ring_buffer<EventType,SIZE>>();\n      auto p        = std::make_shared<write_cursor>(\"write\",SIZE);\n      auto a        = std::make_shared<read_cursor>(\"a\");\n\n      a->follows(p);\n      p->follows(a);\n\n      auto pos      = a->begin();\n      auto end      = a->end();\n      while( true ) \n      {\n         if( pos == end )\n         {\n             a->publish(pos-1);\n             end = a->wait_for(end);\n         }\n         dest->at(pos) = source->at(pos);\n         ++pos;\n      }\n *  @endcode\n *\n *\n *  @section write_cursor_example Write Cursor Example\n *\n *  The following code would run in the publisher thread.  The\n *  publisher can write data without 'waiting' until its pos is\n *  greater than or equal to end.  The 'initial condition' of\n *  a publisher is pos > end because the write cursor\n *  cannot 'be valid' for readers until after the first element\n *  is written.  
\n *\n    @code\n        auto pos = p->begin();\n        auto end = p->end();\n        while( !done )\n        {\n           if( pos >= end )\n           {  \n              end = p->wait_for(end);\n           }\n           source->at( pos ) = i;\n           p->publish(pos);\n           ++pos;\n        }\n        // set eof to signal any followers to stop waiting after\n        // they hit this position.\n        p->set_eof();\n    @endcode\n *\n *\n *\n */\nclass event_cursor\n{\n   public:\n      event_cursor(int64_t b=-1):_name(\"\"),_begin(b),_end(b){}\n      event_cursor(const char* n, int64_t b=0):_name(n),_begin(b),_end(b){}\n\n      /** this event processor will process every event\n       *  up to, but not including, s\n       */\n      void follows( const event_cursor& s ) { _barrier.follows(s); }\n\n      /** returns one after the last published position */\n      int64_t begin()const { return _begin; }\n\n      /** returns one after the last ready as of the last call to wait_for() */\n      int64_t end()const   { return _end;   }\n\n\n      /** makes the event at p available to those following this cursor */\n      void     publish( int64_t p )\n      {\n         check_alert();\n         _begin = p + 1;\n         _cursor.store( p );\n      }\n      void    lazy_publish( int64_t p )\n      {\n         _begin = p + 1;\n         _cursor.lazy_store(p);\n      }\n\n      /** when the cursor hits the end of a stream, it can set the eof flag */\n      void set_eof(){ _cursor.set_eof(); }\n\n      /** If an error occurs while processing data the cursor can set an \n       *  alert that will be thrown whenever another cursor attempts to wait\n       *  on this cursor.\n       */\n      void  set_alert( std::exception_ptr e ) \n      {   \n          _alert = std::move(e); \n          _cursor.set_alert(); \n      }\n\n      /** @return any alert set on this cursor */\n      const std::exception_ptr& alert()const { return _alert; }\n\n\n      /** If an alert has been set, throw! 
*/\n      inline void check_alert()const; \n\n      /** the last sequence number this processor has \n       *  completed.\n       */\n      const sequence& pos()const { return _cursor; }\n      sequence&       pos(){ return _cursor; }\n\n      /** used for debug messages */\n      const char* name()const { return _name; }\n\n    protected:\n      /** last known available, min(_limit_seq) */\n      const char*                   _name;\n      int64_t                       _begin;\n      int64_t                       _end;\n      std::exception_ptr            _alert;\n      barrier                       _barrier;\n      sequence                      _cursor;\n};\n\n/**\n *  Tracks the read position in a buffer\n */\nclass read_cursor : public event_cursor\n{\n    public:\n      read_cursor(int64_t p=0):event_cursor(p){}\n      read_cursor(const char* n, int64_t p=0):event_cursor(n,p){}\n\n      /** @return end() which is > pos */\n      int64_t wait_for( int64_t pos )\n      {\n         try {\n          return _end = _barrier.wait_for(pos) + 1;\n         }\n         catch ( const eof& ) { _cursor.set_eof(); throw; }\n         catch ( ... ) { set_alert( std::current_exception() ); throw; }\n      }\n\n      /** find the current end without blocking */\n      int64_t check_end()\n      {\n          return _end = _barrier.get_min() + 1;\n      }\n};\n\nclass shared_read_cursor : public read_cursor\n{\n    public:\n      shared_read_cursor(int64_t p=0):read_cursor(p){}\n      shared_read_cursor(const char* n, int64_t p=0):read_cursor(n,p){}\n\n      /**\n       *  This method will block until 'after_pos' is the \n       *  current pos, then it will set pos to 'pos'\n       */\n      void publish_after( int64_t pos, int64_t after_pos )\n      {\n         try {\n            assert( pos > after_pos );\n            while( _cursor.aquire() < after_pos )\n            {\n              // TODO:... this is a spinlock, ease CPU HERE... 
\n            } \n            // _barrier.wait_for(after_pos);\n            publish( pos );\n         }\n         catch ( const eof& ) { _cursor.set_eof(); throw; }\n         catch ( ... ) { set_alert( std::current_exception() ); throw; }\n      }\n\n      bool is_available( int64_t pos )\n      {\n         return pos <= _barrier.get_min(); \n      }\n\n      int64_t claim(int64_t num) \n      {  \n         auto pos = _claim_cursor.atomic_increment_and_get( num );\n         return pos - num;\n      }\n\n\n      sequence      _claim_cursor;\n};\n\ntypedef std::shared_ptr<read_cursor> read_cursor_ptr;\n\n/**\n *  Tracks the write position in a buffer.\n *\n *  Write cursors need to know the size of the buffer\n *  in order to know how much space is available. \n */\nclass write_cursor : public event_cursor\n{\n    public:\n      /** @param s - the size of the ringbuffer, \n       *  required to do proper wrap detection \n       **/\n      write_cursor(int64_t s)\n      :_size(s),_size_m1(s-1)\n      {\n        _begin = 0;\n        _end   = _size;\n        _cursor.store(-1);\n      }\n\n      /**\n       * @param n - name of the cursor for debug purposes\n       * @param s - the size of the buffer.  \n       */\n      write_cursor(const char* n, int64_t s)\n      :event_cursor(n),_size(s),_size_m1(s-1)\n      {\n         _begin = 0;\n         _end   = _size;\n         _cursor.store(-1);\n      }\n\n      /** waits for begin() to be valid and then\n       *  returns it.  
This is only safe for \n       *  single producers; multi-producers should \n       *  use claim(1) instead.\n       */\n      int64_t wait_next() \n      {\n          wait_for( _begin );\n          return _begin;\n      }\n\n      /**\n       *   Waits until there is room to write up to pos.  All readers\n       *   must have reached at least pos - _size, and the new end is\n       *   the min reader position + _size.\n       */\n      int64_t wait_for( int64_t pos )\n      {\n         try \n         {\n           // throws exception on error, returns 'short' on eof\n           return _end = _barrier.wait_for(  pos - _size ) + _size;  \n         } \n         catch ( ... ) \n         { \n            set_alert( std::current_exception() ); throw; \n         }\n      }\n      int64_t check_end()\n      {\n          return _end = _barrier.get_min() + _size;\n      }\n    private:\n      const int64_t _size;\n      const int64_t _size_m1;\n};\n\ntypedef std::shared_ptr<write_cursor> write_cursor_ptr;\n/**\n *  When there are multiple writers this cursor can\n *  be used to reserve space in the write buffer \n *  in an atomic manner.\n *\n *  @code\n *  auto start = cur->claim(slots);\n *  ... do your writes...\n *  cur->publish_after( start + slots - 1, start - 1 );\n *  @endcode\n *\n *  @todo\n *  An alternative implementation of this would involve\n *  having a sequence number for each thread.  A pre-allocated\n *  array of sequence pointers would be initialized to null.\n *  There would be a 'thread-specific' index into this array\n *  that would be allocated by an atomic inc the first time\n *  a new thread attempts to write.   Each entry\n *  would maintain two sequence numbers: published and\n *  pending.  \n *\n *  To determine the actual 'position' of the write\n *  cursor one would return the MIN( pending ) -1 or,\n *  if no sequences are in the 'pending' state, the\n *  MAX(published).  
The pending state is any time\n *  the pending > published.\n *\n *  The consequence of this approach is that readers\n *  would have to perform more work to determine the end\n *  (reading from all thread positions), the benefit is\n *  that the producers would never have to 'wait' on\n *  each other.  \n *\n *  A variation on this would be to have a fixed \n *  set of producers instead of a dynamic set.  This \n *  fixed set would be configured at the start.\n *\n *  If there is low write-contention then this approach\n *  would probably be poor.\n */\nclass shared_write_cursor : public write_cursor \n{\n   public:\n      /** @param s - the size of the ringbuffer, \n       *  required to do proper wrap detection \n       **/\n      shared_write_cursor(int64_t s)\n      :write_cursor(s){}\n\n      /**\n       * @param n - name of the cursor for debug purposes\n       * @param s - the size of the buffer.  \n       */\n      shared_write_cursor(const char* n, int64_t s)\n      :write_cursor(n,s){}\n\n      /** When there are multiple writers they cannot both\n       *  assume the right to write to begin() to end(), \n       *  instead they must first claim some slots in an\n       *  atomic manner.\n       *\n       *\n       *  After pos().aquire() == claim( slots ) -1 the claimer\n       *  is free to call publish up to start + slots -1 \n       *\n       *  @return the first slot the caller may write to.\n       */   \n      int64_t claim( size_t num_slots )\n      {\n           auto pos = _claim_cursor.atomic_increment_and_get( num_slots );\n      //     std::cerr<<\"  shared_write: publish \"<<pos<<\" after \" << (pos-1) << \" current pos: \"<<_cursor.aquire()<<\"\\n\";\n           // make sure there is enough space to write\n           wait_for( pos -1 ); // TODO: -1????\n           return pos - num_slots;\n      }\n\n      /**\n       *  This method will block until 'after_pos' is the \n       *  current pos, then it will set pos to 'pos'\n       */\n      
void publish_after( int64_t pos, int64_t after_pos )\n      {\n         try {\n            assert( pos > after_pos );\n          //  std::cerr<<\"publish \"<<pos<<\" after \" << after_pos << \" current pos: \"<<_cursor.aquire()<<\"\\n\";\n            while( _cursor.aquire() != after_pos )\n            {\n              // TODO:... this is a spinlock, ease CPU HERE... \n              usleep(0);\n            } \n            // _barrier.wait_for(after_pos);\n            publish( pos );\n         }\n         catch ( const eof& ) { _cursor.set_eof(); throw; }\n         catch ( ... ) { set_alert( std::current_exception() ); throw; }\n      }\n    private:\n      sequence      _claim_cursor;\n};\n\n\n\ntypedef std::shared_ptr<shared_write_cursor> shared_write_cursor_ptr;\n\n\n\ninline void barrier::follows( const event_cursor& e )\n{\n    _limit_seq.push_back( &e );\n}\n\ninline int64_t barrier::get_min()\n{\n   int64_t min_pos = 0x7fffffffffffffff;\n   for( auto itr = _limit_seq.begin(); itr != _limit_seq.end(); ++itr )\n   {\n      auto itr_pos = (*itr)->pos().aquire();\n      if( itr_pos < min_pos ) min_pos = itr_pos;\n   }\n   return _last_min = min_pos;\n}\n\ninline int64_t barrier::wait_for( int64_t pos )const\n{\n   if( _last_min > pos ) \n      return _last_min;\n\n   int64_t min_pos = 0x7fffffffffffffff;\n   for( auto itr = _limit_seq.begin(); itr != _limit_seq.end(); ++itr )\n   {\n      int64_t itr_pos = 0;\n      itr_pos = (*itr)->pos().aquire();\n      // spin for a bit \n      for( int i = 0; itr_pos < pos && i < 10000; ++i  )\n      {\n         itr_pos = (*itr)->pos().aquire();\n         if( (*itr)->pos().alert() ) break;\n      }\n      // yield for a while, queue slowing down\n      for( int y = 0; itr_pos < pos && y < 10000; ++y )\n      {\n         usleep(0);\n         itr_pos = (*itr)->pos().aquire();\n         if( (*itr)->pos().alert() ) break;\n      }\n\n      // queue stalled, don't peg the CPU but don't wait\n      // too long either...\n      
while( itr_pos < pos )\n      {\n         usleep( 10*1000 );\n         itr_pos = (*itr)->pos().aquire();\n         if( (*itr)->pos().alert() ) break;\n      }\n\n      if( (*itr)->pos().alert() )\n      {\n         (*itr)->check_alert();\n         if( itr_pos > pos ) \n            return itr_pos -1; // process everything up to itr_pos\n         throw eof();\n      }\n\n\n      if( itr_pos < min_pos ) \n          min_pos = itr_pos; \n   }\n   //assert( min_pos != 0x7fffffffffffffff );\n   return _last_min = min_pos;\n}\n\ninline void event_cursor::check_alert()const\n{\n    if( _alert != std::exception_ptr() ) std::rethrow_exception( _alert );\n}\n\n\n} // namespace disruptor\n"
  },
  {
    "path": "fast_rand.cpp",
    "content": "#include <stdint.h>\n#include <memory.h>\n#include <stdlib.h>\n#include <iostream>\n#include <vector>\n#include <assert.h>\n#include <unistd.h>\n#ifdef _MSC_VER\n#pragma intrinsic(__rdtsc)\nuint64_t get_cc_time () {\n    return __rdtsc();\n}\n#else\n/* define this somewhere */\n#ifdef __i386\n__inline__ uint64_t rdtsc() {\n     uint64_t x;\n       __asm__ volatile (\"rdtsc\" : \"=A\" (x));\n         return x;\n}\n#elif __amd64\n__inline__ uint64_t rdtsc() {\n     uint64_t a, d;\n       __asm__ volatile (\"rdtsc\" : \"=a\" (a), \"=d\" (d));\n         return a; //(d<<32) | a;\n}\n#endif\n\n\nuint64_t get_cc_time () {\n   return rdtsc();\n}\n#endif\n\n\n// Some primes between 2^63 and 2^64 for various uses.\n// source: CityHash\nstatic const uint64_t k0 = 0xc3a5c85c97cb3127ULL;\nstatic const uint64_t k1 = 0xb492b66fbe98f273ULL;\nstatic const uint64_t k2 = 0x9ae16a3b2f90404fULL;\n\ninline uint64_t ShiftMix(uint64_t val) { return val ^ (val >> 47); }\n\nuint64_t fast_rand()\n{\n  int64_t now = rdtsc(); //get_cc_time();\n  char*   s = (char*)&now; // note first 4 bits are 'LSB' on intel... \n                           // on bigendian machine we want to add 4\n                           // LSB is most rand, the higher-order bits\n                           // will not change much if at all between\n                           // calls...\n\n  const uint8_t a = s[0];\n  const uint8_t b = s[4 >> 1];\n  const uint8_t c = s[4 - 1];\n  const uint32_t y = static_cast<uint32_t>(a) + (static_cast<uint32_t>(b) << 8);\n  const uint32_t z = 4 + (static_cast<uint32_t>(c) << 2);\n  return ShiftMix(y * k2 ^ z * k0) * k2;\n}\n"
  },
  {
    "path": "fc_heap.hpp",
    "content": "#pragma once\n#include \"mmap_alloc.hpp\"\n#include <iostream>\n#include <sstream>\n#include <assert.h>\n#include <string.h>\n#include <vector>\n#include <unordered_set>\n\n\n#define CHECK_SIZE( x ) assert(((x) != 0) && !((x) & ((x) - 1)))\n#define PAGE_SIZE (2*1024*1024)\n#define LOG2(X) ((unsigned) (8*sizeof (unsigned long long) - __builtin_clzll((X)) - 1))\n#define LZERO(X)  (__builtin_clzll((X)) )\n#define NUM_BINS 32 // log2(PAGE_SIZE)\n\nclass block_header\n{\n  public: \n      block_header()\n      :_prev_size(0),_size(-PAGE_SIZE),_flags(0)\n      {\n          //fprintf( stderr, \"constructor... size: %d\\n\", _size );\n          //memset( data(), 0, size() - 8 );\n          assert( page_size() == PAGE_SIZE );\n      }\n\n      void* operator new (size_t s) { return malloc(PAGE_SIZE);/*mmap_alloc( PAGE_SIZE );*/ }\n      void operator delete( void* p ) { free(p); /*mmap_free( p, PAGE_SIZE );*/ }\n\n      void dump( const char* label )\n      {\n         fprintf( stderr, \"%s ]  _prev_size: %d  _size: %d\\n\", label, _prev_size, _size);//, int(_flags) );\n      }\n\n      /** size of the block header including the header, data size is size()-8 */\n      uint32_t      size()const { return abs(_size);                                            } \n      char*         data()      { return reinterpret_cast<char*>(((char*)this)+8);                       }\n\n      block_header* next()const \n      { \n        return _size <= 0 ? nullptr : reinterpret_cast<block_header*>(((char*)this)+size());\n      }\n\n      block_header* prev()const      \n      { \n        return _prev_size <= 0 ? 
nullptr : reinterpret_cast<block_header*>(((char*)this)-_prev_size); \n      }\n\n      /** \n       *  creates a new block of size S at the end of this block.\n       *\n       *  @pre size is a power of 2\n       *  @return a pointer to the new block, or null if no split was possible\n       */ \n      block_header* split( uint32_t sz )\n      {\n         assert( sz >= 32 );\n         assert( size() >= 32 );\n         assert( sz <= (size() - 32) );\n         assert( page_size() == PAGE_SIZE );\n         assert( _size != 0xbad );\n         CHECK_SIZE(sz);\n\n         int32_t old_size      = _size;\n         block_header* old_nxt = next(); \n\n         _size = size() - sz;\n         assert( _size != 0 );\n         block_header* nxt = next();\n         assert( nxt != 0 );\n\n         nxt->_prev_size   = _size;\n         nxt->_size        = old_size < 0 ? -sz : sz;\n         assert( _size != 0 );\n\n         if( old_nxt ) old_nxt->_prev_size = nxt->_size;\n\n         //memset( data(), 0, size()-8 );\n\n         assert( size() + nxt->size() == uint32_t(abs(old_size)) );\n         assert( nxt->next() == old_nxt );\n         assert( nxt->prev() == this );\n         assert( next() == nxt );\n         assert( page_size() == PAGE_SIZE );\n         assert( nxt->page_size() == PAGE_SIZE );\n         assert( nxt != this );\n         nxt->_flags = 0;\n         return nxt;\n      }\n\n      /**\n       *   @return the merged node, if any\n       */\n      block_header* merge_next()\n      {\n         assert( _size != 0xbad );\n         block_header* cur_next = next();\n         if( !cur_next ) return this;\n         assert( cur_next->_size != 0xbad );\n         assert( cur_next->size() > 0 );\n\n       //  if( !cur_next->is_idle() ) return this;\n\n         auto s = size();\n\n         assert( _size > 0 );\n         _size += cur_next->size();\n         assert( _size != 0 );\n\n         if( cur_next->_size > 0 ) \n         {\n            block_header* new_next = next();\n        
    new_next->_prev_size = size();\n         }\n         else\n         {\n            _size = -_size; // we are at the end.\n            assert( _size != 0 );\n         }\n#ifndef NDEBUG\n         cur_next->_size = 0xbad; // poison the merged-away header; other asserts check for 0xbad\n#endif\n\n\n        // memset( data(), 0, size()-8 );\n         assert( size() > s );\n         if( next() )\n         {\n          assert( size()/8 == next() - this );\n          assert( next()->_prev_size == size() );\n          assert( page_size() == PAGE_SIZE );\n         }\n         return this;\n      }\n\n      /**\n       *   @return the merged node, or this.\n       */\n      block_header* merge_prev()\n      {\n         assert( page_size() == PAGE_SIZE );\n         block_header* pre = prev();\n         if( !pre ) return this;\n         return prev()->merge_next();\n      }\n\n      block_header* head()\n      {\n         if( !prev() ) return this;\n         return prev()->head();\n      }\n      block_header* tail()\n      {\n         if( !next() ) return this;\n         return next()->tail();\n      }\n\n      size_t        page_size()\n      {\n         auto t = tail();\n         auto h = head();\n         return ((char*)t-(char*)h) + t->size();\n      }\n\n      struct queue_state // the block is serving as a linked-list node\n      {\n          block_header*    qnext;\n          block_header*    qprev;\n          block_header**   head;\n          block_header**   tail;\n      };\n\n      enum flag_enum \n      { \n        queued = 1, \n        idle   = 2,\n        active = 4\n      };\n\n      bool         is_idle()const { return _flags & idle;  }\n      bool         is_active()const { return _flags & active; }\n      bool         is_queued()const { return _flags & queued;  }\n\n      void         set_active( bool s )\n      {\n        if( s ) _flags |= active;\n        else    _flags &= ~active;\n      }\n      void         set_queued( bool s ) \n      {\n        if( s ) _flags |= queued;\n        else    _flags &= ~queued;\n\n        // 
anytime the queued state changes, the links should be reset.\n        if( is_queued() )\n        {\n          as_queue().qnext = nullptr;\n          as_queue().qprev = nullptr;\n        }\n      }\n\n      /** removes this node from any queue it is in */\n      void dequeue()\n      {\n         block_header* pre = as_queue().qprev; \n         block_header* nxt = as_queue().qnext; \n         if( pre ) pre->as_queue().qnext = nxt;\n         if( nxt ) nxt->as_queue().qprev = pre;\n         set_queued(false);\n      }\n\n      void         set_idle( bool s ) \n      {\n        if( s ) _flags |= idle;\n        else    _flags &= ~idle;\n        assert( is_idle() == s );\n      }\n      queue_state& as_queue()  \n      { \n    //    assert( is_queued() );\n        return *reinterpret_cast<queue_state*>(data()); \n      }\n\n//  private:\n      int32_t   _prev_size; // size of previous header.\n      int32_t   _size:24; // offset to next; negative indicates tail.  24 bits allows 8 MB max\n      int32_t   _flags:8; // queued / idle / active flag bits\n};\nstatic_assert( sizeof(block_header) == 8, \"Compiler is not packing data\" );\n\ntypedef block_header* block_header_ptr;\n\nstruct block_stack\n{\n    public:\n      block_stack():_head(nullptr){}\n\n      void push( block_header* h )\n      {\n         h->as_queue().qnext = _head;\n         if( _head ) _head->as_queue().qprev = h;\n         _head = h;\n         //_head.push_back(h);\n      }\n      void push_all( block_header* h )\n      {\n         assert( h->is_queued() );\n         assert( _head == nullptr );\n         _head = h;\n      }\n\n      /*\n      bool pop( block_header* h )\n      {\n         if( _head == nullptr ) return null;\n         return _head.erase(h) != 0;\n      }\n      */\n\n      /** returns all blocks */\n      block_header* pop_all()\n      {\n        block_header* h = _head;\n        _head = nullptr;\n        return h;\n      }\n\n      block_header* pop()\n      {\n         if( _head )\n         {\n 
           auto tmp = _head;\n            _head = _head->as_queue().qnext;\n            if( _head )\n            _head->as_queue().qprev = nullptr;\n            return tmp;\n         }\n         return nullptr;\n         /*\n         if( _head.size() == 0 ) return nullptr;\n         auto f = _head.begin();\n         auto h = *f;\n         _head.erase(f);\n         return h;\n         */\n      }\n\n      block_header* head(){ return _head; }\n\n      //int size() { return int(_head.size()); }\n    \n    private:\n      //std::unordered_set<block_header*> _head;\n      block_header* _head;\n};\n\n/**\n *  Single threaded heap implementation, foundation\n *  for multi-threaded version;\n */\nclass fc_heap \n{\n   public:\n      block_header* alloc( size_t s );\n      void          free( block_header* h );\n\n      fc_heap()\n      {\n        memset(_bins, 0, sizeof(_bins) ); \n        _free_32_data = mmap_alloc( PAGE_SIZE );\n        _free_64_data = mmap_alloc( PAGE_SIZE );\n\n        _free_32_data_end = _free_32_data + PAGE_SIZE;\n        _free_64_data_end = _free_64_data + PAGE_SIZE;\n\n        _free_32_scan_end = &_free_32_state[PAGE_SIZE/32/64];\n        _free_64_scan_end = &_free_64_state[PAGE_SIZE/64/64];\n\n        _free_32_scan_pos = _free_32_state;\n        _free_64_scan_pos = _free_64_state;\n\n        memset( _free_32_state, 0xff, sizeof(_free_32_state ) );\n        memset( _free_64_state, 0xff, sizeof(_free_64_state ) );\n      }\n      ~fc_heap()\n      {\n        mmap_free( _free_64_data, PAGE_SIZE );\n        mmap_free( _free_32_data, PAGE_SIZE );\n      }\n\n //  private:\n      char* alloc32()\n      {\n         uint32_t c = 0;\n         while( 0 == *_free_32_scan_pos )\n         {\n            ++_free_32_scan_pos;\n            if( _free_32_scan_pos == _free_32_scan_end )\n            {\n                _free_32_scan_pos = _free_32_state;\n            }\n            if( ++c == sizeof(_free_32_state)/sizeof(int64_t) )\n            {\n              
return alloc64();\n            }\n         }\n         int bit = LZERO(*_free_32_scan_pos);\n         int offset = (_free_32_scan_pos - _free_32_state)*64;\n\n         *_free_32_scan_pos ^= (1ll<<(63-bit)); // flip the bit\n        // fprintf( stderr, \"alloc offset: %d bit %d  pos %d\\n\", offset,bit,(offset+bit) );\n\n         return _free_32_data + (offset+bit)*32;\n      }\n\n      char* alloc64()\n      {\n         uint32_t c = 0;\n         while( 0 == *_free_64_scan_pos )\n         {\n            ++_free_64_scan_pos;\n            if( _free_64_scan_pos == _free_64_scan_end )\n            {\n                _free_64_scan_pos = _free_64_state;\n            }\n            if( ++c == sizeof(_free_64_state)/sizeof(int64_t) )\n            {\n              return nullptr;\n            }\n         }\n         int bit = LZERO(*_free_64_scan_pos);\n         int offset = (_free_64_scan_pos - _free_64_state)*64;\n\n         *_free_64_scan_pos ^= (1ll<<(63-bit)); // flip the bit\n\n         return _free_64_data + (offset+bit)*64;\n      }\n\n      bool free32( char* p )\n      {\n         if( p >= _free_32_data &&\n              _free_32_data_end > p )\n         {\n            uint32_t offset = (p - _free_32_data)/32;\n            uint32_t bit = offset & (64-1);\n            uint32_t idx = offset/64;\n            \n            _free_32_state[idx] ^= (1ll<<((63-bit))); \n            return true;\n         }\n         return false;\n      }\n      bool free64( char* p )\n      {\n         if( p >= _free_64_data &&\n              _free_64_data_end > p )\n         {\n          uint32_t offset = (p - _free_64_data)/64;\n          uint32_t bit = offset & (64-1);\n          uint32_t idx = offset/64;\n\n          _free_64_state[idx] ^= (1ll<<((63-bit))); \n          return true;\n         }\n         return false;\n      }\n\n      char*                       _free_32_data;\n      char*                       _free_64_data;\n      char*                       _free_32_data_end;\n    
  char*                       _free_64_data_end;\n      uint64_t*                   _free_32_scan_pos;\n      uint64_t*                   _free_64_scan_pos;\n      uint64_t*                   _free_32_scan_end;\n      uint64_t*                   _free_64_scan_end;\n      uint64_t                    _free_32_state[PAGE_SIZE/32/64];\n      uint64_t                    _free_64_state[PAGE_SIZE/64/64];\n      block_stack _bins[NUM_BINS]; // anything less than 1024 bytes\n};\n\n\n/**\n *  Return a block of size s or greater\n *  @pre size >= 32\n *  @pre size is power of 2\n */\nblock_header* fc_heap::alloc( size_t s )\n{\n   assert( s >= 32 );\n   CHECK_SIZE( s ); // make sure it is a power of 2\n   uint32_t min_bin = LOG2(s); // find the min bin for it.\n   while( min_bin < 32 )\n   {\n      block_header* h = _bins[min_bin].pop();\n      if( h )\n      {\n          assert( h->_size != 0 );\n          assert( h->_size != 0xbad );\n          assert( h->is_queued() );\n          h->set_queued(false);\n          if( h->size() - 32 < s  )\n          {\n            h->set_active(true);\n            return h;\n          }\n          block_header* tail = h->split(s); \n          assert( h->_size != 0 );\n\n          h->set_active(true);\n          this->free(h);\n\n          tail->set_active(true);\n          return tail;\n      }\n      ++min_bin;\n   }\n   // mmap a new page\n   block_header* h = new block_header();\n   block_header* t = h->split(s);\n\n   h->set_active(true);\n   free(h);\n\n   t->set_active(true);\n   return t;\n}\n\nvoid fc_heap::free( block_header* h )\n{\n    assert( h != nullptr );\n    assert( h->is_active() );\n    assert( h->_size != 0 );\n    assert( h->size() < PAGE_SIZE );\n\n    auto pre = h->prev();\n    auto nxt = h->next();\n\n    if( nxt && !nxt->is_active() && nxt->is_queued() )\n    {\n        auto nxt_bin = LOG2(nxt->size());\n        if( _bins[nxt_bin].head() == nxt )\n        {\n          _bins[nxt_bin].pop();\n          
nxt->set_queued(false);\n        }\n        else\n        {\n          nxt->dequeue();\n        }\n        h = h->merge_next();\n    }\n\n    if( pre && !pre->is_active() && pre->is_queued() )\n    {\n        auto pre_bin = LOG2(pre->size());\n        if( _bins[pre_bin].head() == pre )\n        {\n          _bins[pre_bin].pop();\n          pre->set_queued(false);\n        }\n        else\n        {\n          pre->dequeue();\n        }\n        h = pre->merge_next();\n    }\n\n    if( h->size() == PAGE_SIZE )\n    {\n      delete h;\n      return;\n    }\n\n    h->set_active(false);\n    h->set_queued(true );\n    auto hbin = LOG2(h->size());\n    _bins[hbin].push(h);\n}\n\nclass thread_heap;\n\nclass garbage_thread\n{\n   public:\n      static garbage_thread& get();\n      uint64_t               avail( int bin );\n      int64_t                claim( int bin, int64_t num );\n      block_header*          get_claim( int bin, int64_t pos );\n\n   protected:\n      void   register_thread_heap( thread_heap* h );\n\n      friend class thread_heap;\n      static void run();\n};\n\n\nclass thread_heap\n{\n  public:\n    static thread_heap& get();\n\n    block_header* allocate( size_t s )\n    {\n       if( s >= PAGE_SIZE )\n       {\n          // TODO: allocate special mmap region...\n       }\n\n       uint32_t min_bin = LOG2(s); // find the min bin for it.\n       while( min_bin < NUM_BINS )\n       {\n          block_header* h = cache_alloc(min_bin, s);\n          if( h ) return h;\n\n          garbage_thread& gc = garbage_thread::get();\n          if( auto av = gc.avail( min_bin ) )\n          {\n             int64_t claim_num = std::min<int64_t>(4,av);\n             int64_t claim = gc.claim( min_bin, claim_num );\n             int64_t end = claim + claim_num;\n             while( claim < end )\n             {\n                block_header* h = gc.get_claim(min_bin,claim);\n                if( h )\n                {\n                   cache(h);\n                }\n    
            ++claim;\n             }\n             h = cache_alloc(min_bin, s);\n             if( h ) return h; // else... we actually didn't get our claim\n          }\n          ++min_bin;\n       }\n       block_header* h = new block_header();\n       h->set_active(true);\n       if( s <= PAGE_SIZE - 32 )\n       {\n          block_header* t = h->split(s);\n          t->set_active(true);\n          cache( h );\n          return t;\n       }\n       return h;\n    }\n\n    block_header* cache_alloc( int bin, size_t s )\n    {\n       block_header* c = pop_cache(bin);\n       if( c && (c->size() - 32) > s )\n       {\n           block_header* t = c->split(s);\n           c->set_active(true);\n           if( !cache( c ) )\n           {\n             this->free(c);\n           }\n           t->set_active(true);\n           return t;\n       }\n       return nullptr;\n    }\n\n    bool          cache( block_header* h )\n    {\n       uint32_t b = LOG2( h->size() );\n       if( _cache_size[b] < 4 ) \n       {\n         h->set_queued(true);\n         _cache[b].push(h);\n         _cache_size[b]++;\n         return true;\n       }\n       return false;\n    }\n\n    block_header* pop_cache( int bin )\n    {\n        block_header* h = _cache[bin].pop();\n        if( h ) \n        { \n          _cache_size[bin]--; \n          h->set_queued(false);\n          return h;\n        }\n        return nullptr;\n    }\n\n    void free( block_header* h )\n    {\n       h->set_queued(true);\n       _gc_on_deck.push( h );\n       if( !_gc_at_bat.head() )\n         _gc_at_bat.push_all( _gc_on_deck.pop_all() );\n    }\n  private:\n    thread_heap();\n\n    friend garbage_thread;\n    block_stack _gc_at_bat; // waiting for gc to empty\n    block_stack _gc_on_deck; // caching until gc pickups at bat\n    block_stack _cache[NUM_BINS];\n    int16_t     _cache_size[NUM_BINS];\n\n};\n\n\n\n\n\n\n\n\n\n\n\n\nstatic fc_heap static_heap;\n\nvoid* fc_malloc( size_t s )\n{\n  if( s <= 64 ) \n  
{\n    if( s <= 32 )\n        return static_heap.alloc32();\n    else\n        return static_heap.alloc64();\n  }\n  s += 8; // room for the 8-byte block header\n  if( s < 32 ) s = 32; // minimum block size\n  s = (1<<(LOG2(s-1)+1)); // round up to the nearest power of 2\n\n  block_header* h = static_heap.alloc( s );\n  assert( h->is_active() );\n//  h->set_idle(false); \n//  assert( h->page_size() == PAGE_SIZE );\n  return h->data();\n}\n\nvoid fc_free( void* f )\n{\n  if( static_heap.free32((char*)f) || static_heap.free64((char*)f) ) return;\n  block_header* bh = (block_header*)(((char*)f)-8);\n // fprintf( stderr, \"fc_free(block: %p)\\n\", bh );\n//  assert( bh->is_active() );\n  //assert( bh->page_size() == PAGE_SIZE );\n  static_heap.free(bh);\n}\n"
  },
  {
    "path": "fc_malloc.cpp",
    "content": "#include <stdint.h>\n#include <stdlib.h>\n\n/*\npool<24>   p24;\npool<58>   p58;\npool<120>  p120;\npool<248>  p248;\npool<504>  p504;\npool<1016> p1016;\npool<2040> p2040;\npool<4088> p4088;\n*/\n\n\nvoid* fc_malloc( size_t s )\n{\n// dispatch to the smallest pool whose slot size can hold the request;\n// the pool<> template is defined in hheap.cpp\n#define TRY_POOL(I,X,S)   if( s < X ) return pool<I,X,S>::alloc();\n    TRY_POOL(1,24,256);\n    TRY_POOL(2,58,256);\n    TRY_POOL(3,120,256);\n    TRY_POOL(4,248,128);\n    TRY_POOL(5,504,128);\n    TRY_POOL(6,1016,128);\n    TRY_POOL(7,2040,64);\n    TRY_POOL(8,4088,64);\n    TRY_POOL(9,8184,64);\n#undef TRY_POOL\n\n    // TODO: pools for the larger size classes (up to 1MB) are not\n    // implemented yet; until then fall back to the system allocator,\n    // with an 8-byte header marking the block as unpooled.\n    uint64_t* m = (uint64_t*)malloc( s + 8 );\n    *m = (uint64_t)-1;\n    return m + 1;\n}\n\nvoid fc_free( void* f )\n{\n    // TODO: recover the owning pool from the slot header; blocks whose\n    // 8-byte header is (uint64_t)-1 came from the malloc fallback above.\n}\n"
  },
  {
    "path": "fc_malloc.h",
    "content": "#pragma once\n#include <stddef.h>\n\nvoid* fc_malloc( size_t s );\nvoid  fc_free( void* f );\n"
  },
  {
    "path": "fixed_pool.hpp",
    "content": "#include <thread>\n#include <atomic>\n#include <cstdio>\n#include <cstdint>\n#include <cassert>\n#include \"mmap_alloc.hpp\"\n#include \"bit_index.hpp\"\n\n#define GB (1024LL*1024LL*1024LL)\n#define MB (1024LL*1024LL)\n#define LOG2(X) ((unsigned) (8*sizeof (unsigned long long) - __builtin_clzll((X)) - 1))\n\nclass basic_page\n{\n  public:\n    basic_page():_next_page(nullptr){}\n    virtual ~basic_page(){}\n    virtual void  release() = 0;\n    virtual void* alloc() = 0;\n    virtual void  free( void* ) = 0;\n    virtual int   get_page_pos() = 0;\n    virtual int   get_pool() = 0;\n    virtual int64_t   get_available()const = 0;\n    basic_page* _next_page;\n //   virtual void  item_size()const = 0;\n};\n\ntypedef basic_page* basic_page_ptr;\nclass basic_pool\n{\n  public:\n    virtual ~basic_pool(){}\n    virtual basic_page* claim_page() = 0;\n    virtual bool  gc_free(void*) = 0;\n    virtual void gc_release( basic_page_ptr p ) = 0;\n};\ntypedef basic_pool* basic_pool_ptr;\n\n\nstruct free_node\n{\n  free_node* next;\n};\n\ntemplate<uint64_t ItemSize, uint64_t PageSize = 1*MB>\nclass fixed_pool : public basic_pool\n{\n  public:\n    \n    class page : public basic_page\n    {\n       public:\n          page( int64_t claim_pos )\n          {\n              fprintf( stderr, \"CLAIM POS %lld\\n\", claim_pos );\n              _data = (char*)mmap_alloc( PageSize, (void*)((ItemSize << 32) + claim_pos * PageSize) );\n              fprintf( stderr, \" PAGE DATA: %p\\n\", _data );\n              assert( (int64_t(_data) >> 32) == ItemSize );\n              _next_data       = _data;\n              _page_end        = _data + PageSize;\n              _alloc_free      = nullptr;\n              _gc_free_at_bat  = nullptr;\n              _gc_free_on_deck = nullptr;\n              _claim_pos = claim_pos;\n              _claim     = 0; // the page starts out unclaimed\n              _alloc = 0;\n              _free  = 0;\n          }\n\n          int _claim_pos;\n          virtual int   get_page_pos() { return _claim_pos; }\n\n          int get_pool() { return LOG2(ItemSize)-4; }\n\n  
        ~page()\n          {\n            mmap_free( _data, PageSize );\n          }\n\n          void* alloc()\n          {\n              if( _gc_free_at_bat )\n              {\n                  fprintf( stderr, \"%p   _gc_free_at_bat   page pos %d\\n\", this, _claim_pos );\n                 free_node* gc = _gc_free_at_bat;\n                 _gc_free_at_bat = nullptr;\n\n                 while( gc )\n                 {\n                    free_node* n = gc->next;\n                    gc->next = _alloc_free;\n                    _alloc_free = gc;\n                    gc = n;\n                 }\n              }\n              if( _alloc_free )\n              {\n                 free_node* n = _alloc_free;\n                 _alloc_free = n->next;\n                 ++_alloc;\n                 return n;\n              }\n              else if( _next_data != _page_end )\n              {\n                char* n = _next_data;\n                _next_data += ItemSize;\n                assert( n < _page_end );\n                ++_alloc;\n                return n;\n              }\n              else\n              {\n                fprintf( stderr, \"_next_data == _page_end\\n\" );\n                return nullptr;\n              }\n          }\n\n          int64_t get_available()const\n          {\n              return PageSize/ItemSize - _alloc + _free; //_avail;\n          }\n\n          void free( void* c )\n          {\n              assert( c > _data && c < _page_end );\n              free_node* n = (free_node*)c;\n              n->next = _alloc_free;\n              _alloc_free = n;\n          }\n\n          void gc_free( void* c )\n          {\n              //fprintf( stderr, \"gc_free(%p)   _data %p   _end %p\\n\", c, _data, _page_end );\n              assert( c >= _data && c < _page_end );\n              free_node* n = (free_node*)c;\n              n->next = _gc_free_on_deck;\n              _gc_free_on_deck = n;\n\n              if( !_gc_free_at_bat )\n        
      {\n                _gc_free_at_bat = _gc_free_on_deck;\n                _gc_free_on_deck = nullptr;\n              }\n              ++_free;\n          }\n\n          bool is_claimed()const\n          {\n            return 0 != _claim.load(std::memory_order_relaxed);\n          }\n\n          bool claim()\n          {\n            return 0 == _claim.fetch_add(1);\n          }\n\n          void release() \n          {\n            _claim.store(0);\n          }\n       protected:\n          friend class thread_local_heap;\n          friend class fixed_pool;\n\n          int64_t             _alloc; // count managed by alloc thread\n          int64_t             _free;  // count managed by the gc thread\n\n          std::atomic<int>    _claim; // when 0 no one owns this page, first person to inc owns the page.\n          \n          free_node*          _alloc_free; // free list managed by alloc thread\n                              \n          free_node*          _gc_free_at_bat; \n          free_node*          _gc_free_on_deck;\n          char*               _data;\n          char*               _page_end;\n          char*               _next_data;\n\n    }; // class page\n\n\n    /**\n     *  Grab the next page with free space or allocate on\n     *  if necessary.  
This method may be called from any\n     *  thread.\n     */\n    virtual basic_page* claim_page()\n    {\n        auto rp = _pending_read_pos.load( std::memory_order_relaxed );\n        auto wp = _pending_write_pos.load( std::memory_order_relaxed );\n        if( rp <= wp )\n        {\n          int64_t claim = _pending_read_pos.fetch_add(1);\n          if( claim <= wp )\n          {\n             basic_page* p = _pending_pages[claim%32];\n             _pending_pages[claim%32] = 0;\n             if( p )\n             {\n              fprintf( stderr, \"claiming pending page %p  \\n\", p);//, p->get_page_pos() );\n              return p;\n             }\n             else\n             {\n              fprintf( stderr, \"pending pages[claim] == null\\n\" );\n             }\n          }\n        }\n        \n        int64_t claim = _next_page.fetch_add(1);\n        page* p = new page(claim);\n        fprintf( stderr, \"alloc new page pending page %p  %d\\n\", p, p->get_page_pos() );\n        //p->claim();\n        _pages[claim] = p;\n        return p;\n    }\n\n    virtual bool gc_free( void* v )\n    {\n        int64_t byte_pos      = (int64_t(v)<<32)>>32;\n        int64_t page_num      = byte_pos/(PageSize);\n        auto pg = _pages[page_num];\n        fprintf( stderr, \"page_num %lld  %p\\n\", page_num, v );\n        assert( pg );\n        if( pg  )\n        {\n          pg->gc_free(v);\n          return true;\n        }\n        return false;\n    }\n    virtual void gc_release( basic_page_ptr p )\n    {\n       _free_pages.set( p->get_page_pos() );\n       auto rp = _pending_read_pos.load(std::memory_order_relaxed);\n       auto wp = _pending_write_pos.load(std::memory_order_relaxed);\n       while( rp > wp - 31 )\n       {\n          ++wp;\n          auto pos = wp%32;\n          if( _pending_pages[pos] == nullptr )\n          {\n            int b = _free_pages.first_set_bit();\n            if( _pages[b] && _pages[b]->get_available() )\n            {\n          
    _free_pages.clear(b);\n              fprintf( stderr, \"pending_pages[%lld] = %p\\n\", pos, _pages[b] );\n              _pending_pages[ pos ] = _pages[b];\n            }\n            if( !_pages[b] ){ --wp; break; }\n         }\n       }\n       _pending_write_pos.store(wp);\n    }\n\n    fixed_pool()\n    :_pending_read_pos(0),_pending_write_pos(-1)\n    {\n       _free_pages.set_all();\n       memset( _pages, 0, sizeof(_pages) );\n       memset( _pending_pages, 0, sizeof(_pending_pages) );\n    }\n\n    typedef page*        page_ptr;\n    std::atomic<int>     _next_page; // inc to allocate a new page.\n\n    std::atomic<int64_t> _pending_read_pos;\n    std::atomic<int64_t> _pending_write_pos;\n    page_ptr             _pending_pages[32];\n\n    // updated by gc thread... 'unclaimed pages' with free data.\n    bit_index<64*64/*2*GB/PageSize*/>  _free_pages;\n    page_ptr                  _pages[2*GB/PageSize];\n};\n\nclass thread_local_heap;\n\nclass garbage_collector\n{\n  public:\n    garbage_collector()\n    :_done(false),\n      _tlheaps(nullptr),\n     _gc_thread(&garbage_collector::run){}\n    ~garbage_collector()\n    {\n      _done.store(true);\n      _gc_thread.join();\n    }\n\n    void register_thread_local_heap( thread_local_heap* t );\n\n    static garbage_collector& get()\n    {\n      static garbage_collector gc;\n      return gc;\n    }\n\n    static void run();\n\n  private:\n    std::atomic<bool>               _done;\n    std::atomic<thread_local_heap*> _tlheaps;\n    std::thread                     _gc_thread;\n};\n\nstatic basic_pool_ptr get_pool( int p )\n{\n  if( !(p >= 0 && p < 16 ) )\n      fprintf( stderr, \"%d\", p );\n  assert( (p >= 0 && p < 16 ) );\n  static basic_pool_ptr _pools[16];\n  static bool           _init = [&]()->bool{\n     // allocate the pools for all size classes\n     _pools[0]  = new fixed_pool<16>();\n     _pools[1]  = new fixed_pool<32>();\n     _pools[2]  = new fixed_pool<64>();\n     _pools[3]  = new 
fixed_pool<128>();\n     _pools[4]  = new fixed_pool<256>();\n     _pools[5]  = new fixed_pool<512>();\n     _pools[6]  = new fixed_pool<1024>();\n     _pools[7]  = new fixed_pool<2*1024>();\n     _pools[8]  = new fixed_pool<4*1024>();\n     _pools[9]  = new fixed_pool<8*1024>();\n     _pools[10] = new fixed_pool<16*1024>();\n     _pools[11] = new fixed_pool<32*1024>();\n     _pools[12] = new fixed_pool<64*1024>();\n     _pools[13] = new fixed_pool<128*1024>();\n     _pools[14] = new fixed_pool<256*1024>();\n     _pools[15] = new fixed_pool<512*1024>();\n     return true;\n  }();\n  (void)_init; // unused warning\n  return _pools[p];\n}\n\n\nclass thread_local_heap\n{\n   public:\n      thread_local_heap()\n      :_gc_at_bat(nullptr),\n       _release_at_bat(nullptr),\n       _gc_on_deck(nullptr),\n       _release_on_deck(nullptr)\n      {\n        garbage_collector::get().register_thread_local_heap(this);\n      }\n\n      ~thread_local_heap()\n      {\n      }\n\n      static thread_local_heap& get()\n      {\n        static __thread thread_local_heap* tlh = nullptr;\n        if( !tlh ) tlh = new thread_local_heap();\n        return *tlh;\n      }\n\n      void* alloc( size_t s )\n      {\n          int32_t pool  = LOG2(s-1) + 1 - 4;\n   //       fprintf( stderr, \"pool %d  for size %d\\n\", pool, int(s) );\n\n          if( !_pages[pool] )\n          {\n              basic_page_ptr p = get_pool(pool)->claim_page();\n              fprintf( stderr, \"claim pool! %p\\n\", p );\n              assert(p);\n              _pages[pool] = p;\n              auto r = p->alloc();\n              assert(r);\n              return r;\n          }\n          void* a = _pages[pool]->alloc();\n\n          if( !a )  // the page must be full... 
release it and get a new one\n          {\n              fprintf( stderr, \"release pool %d  %p\\n\", pool, _pages[pool] );\n              basic_page_ptr p = get_pool(pool)->claim_page();\n              assert( p );\n              fprintf( stderr, \"new page %p   avail: %lld\\n\", p, p->get_available() );\n\n              _pages[pool]->_next_page = _release_on_deck;\n              _release_on_deck = _pages[pool];\n\n              if( _release_at_bat == nullptr )\n              {\n                _release_at_bat = _release_on_deck;\n                _release_on_deck = nullptr;\n              }\n              _pages[pool] = p;\n              assert(p);\n              auto r = p->alloc();\n              assert(r);\n              return r;\n          }\n          assert( a );\n          return a;\n      }\n\n      void  free( void* v )\n      {\n          assert( v != nullptr );\n\n     //     fprintf( stderr, \"free %p      tld: %p\\n\", v, this );\n          \n         // size_t   s     = int64_t(v)>>32;\n         // int32_t  pool  = LOG2(s) - 4;\n\n  //        fprintf( stderr, \"Free size: %llu  on pool %d\\n\", s, pool );\n\n          // try local free first.\n         //  if( _pages[pool] && _pages[pool]->free(v) )\n         //     return;\n\n          free_node* fv = (free_node*)v;\n          assert( fv != _gc_on_deck );\n\n          fv->next = _gc_on_deck;\n          _gc_on_deck = fv;\n\n          if( _gc_at_bat == nullptr )\n          {\n            _gc_at_bat = _gc_on_deck;\n            _gc_on_deck = nullptr;\n          }\n      }\n\n   private:\n      friend class garbage_collector;\n\n      free_node*         _gc_at_bat;\n      basic_page_ptr     _release_at_bat;\n      uint64_t           _gc_pad[7];\n      free_node*         _gc_on_deck;\n      basic_page_ptr     _release_on_deck;\n                         \n      // current page for this thread...\n      basic_page_ptr     _pages[32]; // sized every power of 2 up to 1MB\n      thread_local_heap* 
_next;\n};\n\n\nvoid garbage_collector::register_thread_local_heap( thread_local_heap* t )\n{\n   auto* stale_head = _tlheaps.load(std::memory_order_relaxed);\n   do { t->_next = stale_head;\n   }while( !_tlheaps.compare_exchange_weak( stale_head, t, std::memory_order_release ) );\n}\n\nvoid garbage_collector::run()\n{\n  garbage_collector& gc = garbage_collector::get();\n  while( true )\n  {\n    bool found_work = false;\n    thread_local_heap* cur = gc._tlheaps.load( std::memory_order_relaxed );\n    while( cur )\n    {\n        free_node* n = cur->_gc_at_bat;\n        if( n )\n        {\n          cur->_gc_at_bat = nullptr;\n          found_work = true;\n        }\n        while( n )\n        {\n          auto next = n->next;\n          // TODO: free N\n          int pool = LOG2( int64_t(n) >> 32 ) - 4;\n       //   fprintf( stderr, \"pool %d  gc_free %p\\n\", pool, n );\n          get_pool( pool )->gc_free(n);\n          //fprintf( stderr, \".\" );\n          assert( n != next );\n          n = next;\n        }\n        if( cur->_release_at_bat != nullptr )\n        {\n           basic_page_ptr p = cur->_release_at_bat;\n           cur->_release_at_bat = nullptr;\n\n           while( p )\n           {\n              p->release();\n              int pool = p->get_pool(); //LOG2( int64_t(p) >> 32 ) - 4;\n              get_pool( pool )->gc_release(p);\n              p = p->_next_page;\n           }\n        }\n        assert( cur != cur->_next );\n        cur = cur->_next;\n    }\n    if( !found_work )\n    {\n       // TODO: replace with something better..\n       ::usleep( 100 );\n       if( gc._done.load() ) return;\n    }\n  }\n}\n\n\n\nvoid* fp_malloc( size_t s )\n{\n  return thread_local_heap::get().alloc(s);\n}\n\nvoid fp_free( void* v )\n{\n  thread_local_heap::get().free(v);\n}\n"
  },
  {
    "path": "garbage_collector.hpp",
    "content": "\n"
  },
  {
    "path": "hheap.cpp",
    "content": "#include <atomic>\n#include <stdint.h>\n#include <memory.h>\n#include <stdlib.h>\n#include <iostream>\n#include <vector>\n#include <assert.h>\n#include <unistd.h>\n#include <mutex>\n#include <thread>\nstd::mutex print_mutex;\n#include \"disruptor.hpp\"\n\nusing namespace disruptor;\n\n#if 0\n#define PRINT( ... )  \\\n{ std::unique_lock<std::mutex> _lock(print_mutex); \\\n    __VA_ARGS__ \\\n}\n#define NEW_PRINT( ... ) \\\n{ std::unique_lock<std::mutex> _lock(print_mutex); \\\n  __VA_ARGS__ \\\n}\n#define PAGE_FREE_PRINT( ... ) \\\n{ std::unique_lock<std::mutex> _lock(print_mutex); \\\n  __VA_ARGS__ \\\n}\n#else\n  #define PRINT(...)\n  #define NEW_PRINT(...)\n  #define PAGE_FREE_PRINT(...)\n#endif\n\n\n\nint64_t fast_rand();\n\nstruct slot_header\n{ \n    int32_t page_id;     // used by free to find the page in the pool\n    int16_t pool_id;     // used by free to find the pool\n    uint8_t page_slot;   // the slot in the page in the pool\n    uint8_t alignment;   // 8 if reserved, 0 if free... byte _data[alignment-1] = alignment.\n};\n\n\n\ntemplate<uint32_t Size, uint32_t NumSlots>\nstruct page\n{\n  public:\n    struct slot \n    { \n        int32_t page_id;     // used by free to find the page in the pool\n        int16_t pool_id;     // used by free to find the pool\n        uint8_t page_slot;   // the slot in the page in the pool\n        uint8_t alignment;   // 8 if reserved, 0 if free... byte _data[alignment-1] = alignment.\n        char    _data[Size]; // alignment helps us find the page_id/pool_id when allocated aligned objects.\n    };\n\n    page(int16_t page_id, int16_t pool_id)\n    :_free_write_cursor(NumSlots)\n    {\n       _pool_id = pool_id;\n       _page_id = page_id;\n       _posted  = false;\n\n     // ... 
\n       _free_write_cursor.follows( _free_read_cursor );\n       _free_read_cursor.follows( _free_write_cursor );\n\n       for( int i = 0; i < NumSlots; ++i )\n       {\n          slot& s = _slot[i];\n          s.page_id = page_id;\n          s.pool_id = pool_id;\n          s.page_slot = i;\n          s.alignment = 8; // free expects this \n          this->free(i); // increment the free write cursor\n       }\n       _release_free_pos = 0;\n\n       assert( free_estimate() == NumSlots );\n       assert( can_alloc() );\n    }\n\n    int32_t free_estimate() \n    { \n       if( _release_free_pos < 0 ) return 0;\n       return _free_write_cursor.begin() - _release_free_pos;\n    }\n\n    bool can_alloc()\n    {\n       if( _free_read_cursor.begin() == _free_read_cursor.end() &&\n           _free_read_cursor.begin() == _free_read_cursor.check_end() )\n       { \n    //      std::cerr<<\"    CAN ALLOC? page: \"<<_page_id<<\" free read cursor begin: \"<<_free_read_cursor.begin()<<\"   end: \"<<_free_read_cursor.end()<<\"\\n\";\n          return false; \n       }\n       return true;\n    }\n\n    char*  alloc(uint8_t align = 8)\n    {\n       if( !can_alloc() ) return nullptr; \n      \n       auto    pos       = _free_read_cursor.begin();\n       int64_t free_slot = _free_list.at(pos);\n       _free_read_cursor.publish( pos );\n        \n     //  std::cerr<<\"page: \"<<_page_id<<\" alloc slot: \"<<int(free_slot)<<\"  alignment: \"<<int(align)<<\"  free list pos: \"<<pos<<\"\\n\";\n       assert( free_slot < NumSlots);\n       assert( _slot[free_slot].alignment == 0 ); // make the spot as used and take its alignment\n\n       //assert( _slot[free_slot].alignment == 0); // make the spot as used and take its alignment\n       _slot[free_slot].alignment = align; // make the spot as used and take its alignment\n       return _slot[free_slot]._data;\n     //  uint8_t* rtn = (uint8_t*)_slot[free_slot]._data + align - 8; // TODO: adjust for alignment..\n     //  rtn[-1] = 
align;\n     //  return (char*)rtn;\n    } \n    \n    /** return the number of slots freed since this page was 'released' */\n    uint64_t    free( uint8_t slot )\n    {\n    //   std::cerr<<\"free slot: \"<<int(slot)<<\"  alignment: \"<<int(_slot[slot].alignment)<<\"\\n\";\n\n       assert( slot < NumSlots );\n       assert( _slot[slot].alignment >= 8 );\n       assert( _slot[slot].pool_id == _pool_id );\n       \n       _slot[slot].alignment = 0; // last thing we do is set alignment.\n\n       auto cl = _free_write_cursor.claim(1);\n       _free_list.at(cl) = slot;\n       //_free_write_cursor.publish_after( cl, cl - 1 );\n       _free_write_cursor.publish( cl );//, cl - 1 );\n\n       return free_estimate();\n       return 0;\n    }\n\n    /** called to save the free cursor position so we can track how many\n     *  slots have been freed since this thread gave up control \n     */\n    void  release()\n    {\n        _posted = false;\n        _free_claim.store(0,std::memory_order_relaxed);\n        _release_free_pos = _free_write_cursor.begin();\n    }\n\n    void  claim()\n    {\n        _release_free_pos = -1;\n    }\n    bool  claim_free()\n    {\n       if( !_posted && 0 == _free_claim.fetch_add(1, std::memory_order_release ) )\n       {\n         return  _posted = true;\n       }\n       return false;\n    }\n    bool  is_posted_to_free_list(){ return _posted; }\n\n   private:\n    slot                                         _slot[NumSlots]; // actual data storage\n\n    /** the position of the free_write_cursor at the time this page was 'released' \n     *  by the last allocator thread.\n     **/\n    int64_t                                      _release_free_pos;\n\n    ring_buffer<uint16_t,2*NumSlots>             _free_list;\n    shared_write_cursor                          _free_write_cursor;\n    read_cursor                                  _free_read_cursor;\n    uint32_t                                     _pool_id;\n    uint32_t                    
                 _page_id;\n    bool                                         _posted; \n    std::atomic<int>                             _free_claim;\n};\n\n/**\n *    A pool is a collection of 'pages' that threads can claim to use\n *    for allocation.  \n *\n*/\ntemplate<uint16_t PoolId, uint32_t Size,uint32_t SlotsPerPage,uint32_t MaxPages=1024*32>\nstruct pool\n{\n   typedef page<Size,SlotsPerPage>  page_type;\n   typedef page_type*               page_ptr;\n   typedef typename page_type::slot slot_type;\n   typedef slot_type*               slot_ptr;\n\n   struct thread_local_data \n   {\n      thread_local_data()\n      :current_page_num(-1),\n       current_page(nullptr){}\n\n      int32_t    current_page_num;\n      page_ptr   current_page;\n   };\n\n   ring_buffer<uint32_t,MaxPages>   _free_pages; // indexes into _alloc_pages\n   shared_write_cursor              _free_page_write_cursor;\n   shared_read_cursor               _free_page_read_cursor;\n\n   ring_buffer<page_ptr,MaxPages>   _alloc_pages; // pages allocated (fixed index)\n   shared_write_cursor              _page_alloc_cursor;\n   const read_cursor                _page_alloc_begin; // used to prevent alloc_cursor from wrapping\n\n   pool()\n   :_free_page_write_cursor( MaxPages ),\n    _free_page_read_cursor( MaxPages ),\n    _page_alloc_cursor( MaxPages )\n   {\n      _free_page_write_cursor.follows( _free_page_read_cursor );\n      _free_page_read_cursor.follows( _free_page_write_cursor );\n     // _page_alloc_cursor.follows( _page_alloc_begin );\n      //_page_alloc_begin.follows( _page_alloc_cursor ); // begin shouldn't move\n   }\n\n   static pool& instance() \n   { \n      static pool _p;\n      return _p;\n   }\n\n   static thread_local_data*& local_pool()\n   {\n      static thread_local thread_local_data*  _current = nullptr;\n      return _current;\n   }\n\n   thread_local_data&  get_local_pool()\n   {\n      thread_local_data*& cur = local_pool();\n      if( cur == nullptr )\n      {\n  
       cur = new thread_local_data();\n      }\n      return *cur;\n   }\n\n   char* do_alloc( uint16_t align = 8 )\n   {\n       thread_local_data& tld = get_local_pool(); //get thread local data\n\n       if( tld.current_page_num == -1 )  // we need to claim a page\n       {\n          claim_page(tld);\n          assert( tld.current_page_num != -1 );\n          assert( tld.current_page );\n       }\n       char* c = tld.current_page->alloc(align);\n\n       while( !c )  // no space available, claim a new page\n       { \n         claim_page(tld); \n         c = tld.current_page->alloc(align);\n         if( !c ) \n         {\n          std::cerr<<\"!!?? NULL??\\n\";\n         }\n       }\n       return c;\n   }\n\n   void do_free( char* c )\n   {\n      uint8_t* s = reinterpret_cast<uint8_t*>(c);\n      assert( c != nullptr );\n      assert( s[-1] == 8 ); \n      uint8_t* slot_pos = (uint8_t*)c-8;//s + s[-1]-16; // s-1 == alignment, default 8 byte\n\n      slot_ptr sl = reinterpret_cast<slot_ptr>(slot_pos);\n      assert( sl->pool_id == PoolId        ); \n      assert( sl->page_slot < SlotsPerPage );\n      assert( sl->page_id < MaxPages       );\n     \n      auto p = _alloc_pages.at(sl->page_id);\n      if( p->free(sl->page_slot) > SlotsPerPage/4 )\n      {\n          if( !p->claim_free() ) return; // do I get to post this.. 
or does someone else..\n          // move page into free queue\n          auto claim = _free_page_write_cursor.claim(1);\n          _free_pages.at(claim) = sl->page_id;\n\n\n          PAGE_FREE_PRINT(std::cerr<<\"PAGE AVAILABLE: \"<<sl->page_id<<\"\\n\";\n          std::cerr<<\"    sl->pool_id: \"<<int(sl->pool_id)<<\"  slot: \"<<int(sl->page_slot)<<\"  id: \"<<int(sl->page_id)<<\" SlotsPerPage: \"<<SlotsPerPage<<\"   available_slots: \"<<p->free_estimate()<<\" \\n\";\n          std::cerr<<\"    free_page_write claim: \"<<claim<<\"\\n\";\n          )\n\n          _free_page_write_cursor.publish_after( claim, claim -1 );\n      }\n   }\n\n   void claim_page( thread_local_data& tld )\n   {\n       if( tld.current_page ) tld.current_page->release(); \n\n       auto read_claim =  _free_page_read_cursor.claim(1);\n       if(  !_free_page_read_cursor.is_available( read_claim ) )\n       { \n          NEW_PRINT(std::cerr<<\"NEW PAGE:    free_read_claim_idx: \"<<read_claim<<\"\\n\";)\n          auto free_write_idx = _free_page_write_cursor.claim(1); // claim a place to store the 'free' allocated page\n          NEW_PRINT(std::cerr<<\"             free_write_idx: \"<<free_write_idx<<\"\\n\";)\n\n       // the read position is after the next write position... allocate\n          // allocate and publish page_idx ... to both free page cursors\n          auto alloc_idx  = _page_alloc_cursor.claim(1); // claim a place to allocate.. 
\n          NEW_PRINT(std::cerr<<\"             alloc_write_idx: \"<<alloc_idx<<\" READ  \"<<read_claim<<\"\\n\";)\n\n          _alloc_pages.at(alloc_idx) = new page_type( alloc_idx, PoolId ); // TODO: replace with mmap\n          _page_alloc_cursor.publish_after( alloc_idx, alloc_idx-1 ); // publish the allocated buffer\n          NEW_PRINT(std::cerr<<\"                 alloc published: \"<<alloc_idx<<\"  READ \"<<read_claim<<\" \\n\";)\n\n          _free_pages.at(free_write_idx) = alloc_idx;           \n          //_free_page_write_cursor.publish_after(free_write_idx,free_write_idx-1); // publish the new 'free' buffer\n          _free_page_write_cursor.publish(free_write_idx);//,free_write_idx-1); // publish the new 'free' buffer\n          NEW_PRINT(std::cerr<<\"                 free write idx published: \"<<free_write_idx<<\"  value: \"<<_free_pages.at(free_write_idx)<<\"\\n\";)\n\n          NEW_PRINT( std::cerr<<\"                READ CLAIM: \"<<read_claim<<\"\\n\";);\n         // _free_page_read_cursor.wait_for( read_claim );\n          auto ridx = _free_pages.at(read_claim);\n          NEW_PRINT( std::cerr<<\"                 free_page read publish: \"<<read_claim<<\"  value: \"<<ridx<<\"\\n\";)\n          //_free_page_read_cursor.publish_after(read_claim,read_claim-1);\n          _free_page_read_cursor.publish(read_claim);//,read_claim-1);\n\n          tld.current_page_num = ridx;\n          tld.current_page     = _alloc_pages.at(tld.current_page_num);\n       }\n       else\n       {\n        NEW_PRINT( std::cerr<<\"RECLAIM PAGE:  free_read_claim_idx: \"<<read_claim<<\"  page: \"<<_free_pages.at(read_claim)<<\"\\n\";)\n          tld.current_page_num = _free_pages.at(read_claim);\n          //_free_page_read_cursor.publish_after(read_claim,read_claim-1);\n          _free_page_read_cursor.publish( read_claim );\n          tld.current_page     = _alloc_pages.at(tld.current_page_num);\n        NEW_PRINT( std::cerr<<\"               published 
free_read_claim_idx: \"<<read_claim<<\"\\n\"; )\n        NEW_PRINT( std::cerr<<\"               available: \"<< tld.current_page->free_estimate()<<\"\\n\"; )\n       }\n       tld.current_page->claim();\n   }\n   \n   static void   free( char* c )             { instance().do_free(c);             };\n   static char*  alloc( uint16_t align = 8 ) { return instance().do_alloc(align); };\n};\n\n\n#define BENCH_SIZE ( (1024*256) )\n#define ROUNDS 100 \n//#define BENCH_SIZE ( (512) )\n//#define ROUNDS 5 \n\n\n\n#include <thread>\nvoid malloc_bench( int tid )\n{\n  std::vector<char*> a(BENCH_SIZE);\n  memset( a.data(), 0, a.size() * sizeof(char*));\n  for( int x = 0; x < ROUNDS; ++x )\n  {\n    for( int i = 0; i < BENCH_SIZE; ++i )\n    {\n      int pos = rand() & 1;\n      if( a[i] && pos )\n      {\n          free(a[i]); \n          a[i]=0;\n      }\n      else if( !a[i] && pos )\n      {\n          a[i] = (char*)malloc(64);\n      }\n    }\n  }\n}\nvoid bench(int tid)\n{\n  std::vector<char*> a(BENCH_SIZE);\n  memset( a.data(), 0, a.size() * sizeof(char*));\n  for( int x = 0; x < ROUNDS; ++x )\n  {\n    for( int i = 0; i < BENCH_SIZE; ++i )\n    {\n      int pos = rand() & 1;\n      if( a[i] && pos )\n      {\n          pool<1,64,256>::free(a[i]); \n          a[i] = 0;//free(a[i]); \n      }\n      else if( !a[i] && pos )\n      {\n         a[i] = pool<1,64,256>::alloc();\n      }\n    }\n  }\n}\n\nstd::vector<char*>  buffers[16];\n\n\nvoid pc_bench_worker( int pro, int con, char* (*do_alloc)(int s), void (*do_free)(char*)  )\n{\n  for( int r = 0; r < ROUNDS; ++r )\n  {\n      for( int x = 0; x < buffers[pro].size()/2 ; ++x )\n      {\n         int p = fast_rand() % buffers[pro].size();\n         if( !buffers[pro][p] )\n         {\n           auto si = 60; //fast_rand() % (1<<15);\n           auto r = do_alloc( si );\n\n           slot_header* sh = (slot_header*)(r-8);// TODO: handle alignment\n           //assert( sh->alignment == 8 );\n           //assert( sh->pool_id 
> 3 );\n\n           if( r == nullptr )\n           {\n            std::cerr<<\"size: \"<<si<<\"  returned null\\n\";\n           }\n           assert( r != nullptr );\n           assert( r[0] != 99 ); \n           r[0] = 99; \n           buffers[pro][p] = r;\n         }\n      }\n      for( int x = 0; x < buffers[con].size()/2 ; ++x )\n      {\n         int p = fast_rand() % buffers[con].size();\n         if( buffers[con][p] ) \n         { \n           //assert( buffers[con][p][0] == 99 ); \n           buffers[con][p][0] = 0; \n           do_free(buffers[con][p]);\n           buffers[con][p] = 0;\n         }\n      }\n  }\n}\n#if 0\nvoid pc_bench_worker( int pro, int con, char* (*do_alloc)(), void (*do_free)(char*)  )\n{\n  for( int r = 0; r < ROUNDS; ++r )\n  {\n     // produce some\n     for( int i = 0; i < buffers[pro].size(); ++i )\n     {\n        // don't wrap...\n      //  while( buffers[pro][i] ) usleep(0);\n        buffers[pro][i] = do_alloc();\n     }\n     for( int i = 0; i < BENCH_SIZE*2; ++i )\n     {\n        rand() % buffers[con].size()\n     }\n\n     usleep( 100 );\n     for( int i = 0; i < buffers[pro].size(); ++i )\n     {\n     //   while( !buffers[con][i] ) usleep(0);\n        if( buffers[con][i] )\n        {\n           do_free(buffers[con][i]);\n           buffers[con][i] = 0;\n        }\n     }\n  }\n}\n#endif\n\nvoid pc_bench(char* (*do_alloc)(int s), void (*do_free)(char*)  )\n{\n  for( int i = 0; i < 16; ++i )\n  {\n    buffers[i].resize( BENCH_SIZE );\n    memset( buffers[i].data(), 0, 8 * BENCH_SIZE );\n  }\n  std::thread a( [=](){ pc_bench_worker( 1, 2, do_alloc, do_free ); } );\n  std::thread b( [=](){ pc_bench_worker( 2, 3, do_alloc, do_free ); } );\n  std::thread c( [=](){ pc_bench_worker( 3, 4, do_alloc, do_free ); } );\n  std::thread d( [=](){ pc_bench_worker( 4, 5, do_alloc, do_free ); } );\n  std::thread e( [=](){ pc_bench_worker( 5, 6, do_alloc, do_free ); } );\n  std::thread f( [=](){ pc_bench_worker( 6, 7, do_alloc, do_free 
); } );\n  std::thread g( [=](){ pc_bench_worker( 7, 8, do_alloc, do_free ); } );\n  std::thread h( [=](){ pc_bench_worker( 8, 9, do_alloc, do_free ); } );\n  std::thread i( [=](){ pc_bench_worker( 9, 10, do_alloc, do_free ); } );\n  std::thread j( [=](){ pc_bench_worker( 10, 1, do_alloc, do_free ); } );\n\n  a.join();\n  b.join();\n  c.join();\n  d.join();\n  e.join();\n  f.join();\n  g.join();\n  h.join();\n  i.join();\n  j.join();\n}\nvoid pc_bench_st(char* (*do_alloc)(int s), void (*do_free)(char*)  )\n{\n  for( int i = 0; i < 16; ++i )\n  {\n    buffers[i].resize( BENCH_SIZE );\n    memset( buffers[i].data(), 0, 8 * BENCH_SIZE );\n  }\n  int i = 0;\n  std::thread a( [=](){ pc_bench_worker( 1, 1, do_alloc, do_free ); } );\n  /*\n  std::thread b( [=](){ pc_bench_worker( 2, 2, do_alloc, do_free ); } );\n  std::thread c( [=](){ pc_bench_worker( 3, 3, do_alloc, do_free ); } );\n  std::thread d( [=](){ pc_bench_worker( 4, 4, do_alloc, do_free ); } );\n  std::thread e( [=](){ pc_bench_worker( 5, 5, do_alloc, do_free ); } );\n  std::thread f( [=](){ pc_bench_worker( 6, 6, do_alloc, do_free ); } );\n  std::thread g( [=](){ pc_bench_worker( 7, 7, do_alloc, do_free ); } );\n  std::thread h( [=](){ pc_bench_worker( 8, 8, do_alloc, do_free ); } );\n  */\n\n  a.join();\n  /*\n  b.join();\n  c.join();\n  d.join();\n  e.join();\n  f.join();\n  g.join();\n  h.join();\n  */\n}\n\nchar* do_malloc(int s){ return (char*)malloc(s); }\nvoid  do_malloc_free(char* c){ free(c); }\nchar* do_hash_malloc(int s)\n{ \n    #define LOG2(X) ((unsigned) (8*sizeof (unsigned long long) - __builtin_clzll((X)) - 1))\n    switch( LOG2(s)+1 )\n    {\n       case 64:\n            assert(\"!dont malloc yet..\" );\n            return (char*)malloc(s);\n       case 16:\n            return pool<16,1<<16,8>::alloc(); \n       case 15:\n            return pool<15,1<<15,16>::alloc(); \n       case 14:\n            return pool<14,1<<14,32>::alloc(); \n       case 13:\n            return 
pool<13,1<<13,64>::alloc(); \n       case 12:\n            return pool<12,1<<12,64>::alloc(); \n       case 11:\n            return pool<11,1<<11,64>::alloc(); \n       case 10:\n            return pool<10,1<<10,128>::alloc(); \n       case 9:\n            return pool<9,1<<9,128>::alloc(); \n       case 8:\n            return pool<8,1<<8,128>::alloc(); \n       case 7:\n            return pool<7,1<<7,256>::alloc(); \n       case 6:\n            return pool<6,1<<6,256>::alloc(); \n       case 5:\n       default:\n            return pool<5,1<<5,256>::alloc(); \n    }\n    assert( !\"we shouldn't get here!\" );\n}\n\n\nvoid  do_hash_free(char* c)\n{ \n    assert( c != nullptr );\n    uint8_t a = *(c-1); // alignment\n    slot_header* sh = (slot_header*)(c-8);// TODO: handle alignment\n    assert( a == 8 );\n    if( !(sh->pool_id >=5 && sh->pool_id <= 16 ) )\n    {\n      PRINT( std::cerr<< \"ERROR: pool_id: \"<<sh->pool_id<<\"\\n\"; \n          std::cerr.flush();\n          assert( sh->pool_id >=5 && sh->pool_id <= 16 );\n      );\n    }\n    switch( sh->pool_id )\n    {\n       case 16:\n            pool<16,1<<16,8>::free(c); \n            return;\n       case 15:\n            pool<15,1<<15,16>::free(c); \n            return;\n       case 14:\n            pool<14,1<<14,32>::free(c); \n            return;\n       case 13:\n            pool<13,1<<13,64>::free(c); \n            return;\n       case 12:\n            pool<12,1<<12,64>::free(c); \n            return;\n       case 11:\n            pool<11,1<<11,64>::free(c); \n            return;\n       case 10:\n            pool<10,1<<10,128>::free(c); \n            return;\n       case 9:\n            pool<9,1<<9,128>::free(c); \n            return;\n       case 8:\n            pool<8,1<<8,128>::free(c); \n            return;\n       case 7:\n            pool<7,1<<7,256>::free(c); \n            return;\n       case 6:\n            pool<6,1<<6,256>::free(c); \n            return;\n       case 5:\n       default:\n         
   pool<5,1<<5,256>::free(c); \n            return;\n    }\n    assert( !\"we shouldn't get here!\" );\n}\n\n\nint main( int argc, char** argv )\n{\n  if( argc > 1 && argv[1][0] == 'm' )\n  {\n    std::cerr<<\"malloc multi\\n\";\n    pc_bench( do_malloc, do_malloc_free );\n  }\n  if( argc > 1 && argv[1][0] == 'M' )\n  {\n    std::cerr<<\"hash malloc multi\\n\";\n    pc_bench( do_hash_malloc, do_hash_free );\n  }\n  if( argc > 1 && argv[1][0] == 's' )\n  {\n    std::cerr<<\"malloc single\\n\";\n    pc_bench_st( do_malloc, do_malloc_free );\n  }\n  if( argc > 1 && argv[1][0] == 'S' )\n  {\n    std::cerr<<\"hash malloc single\\n\";\n    pc_bench_st( do_hash_malloc, do_hash_free );\n  }\n  return 0;\n}\n\n\n"
  },
  {
    "path": "ideas.txt",
    "content": "Global ready queue of 256 entries per size class, combined with 16 entries per thread per size class (assuming 16 threads), means that\nin the 'idle state' we have\n\nSize allocations are not 'random', but usually fall into predictable patterns.\nThe 'ideal' buffer size is one that is never full and never empty... if it ever empties then the next time\nyou fill it you should fill it 'fuller' than the last time... and attempt to keep it there.  \n\nIf the buffer is 'full' when you check then you can start reclaiming data from that buffer.\n\n\nGC Thread:\n\nFor each size class... maintain a hash 'set' of the free chunks in that class.\n\nWhen a new chunk comes in, look for its 'prev' in its hash set, if found remove it, merge the two... then look for the 'next' if found merge the two...\nthen store the result back in the new hash table after checking to see if the queue for that size class is waiting for data.\n\n\nGC Thread Loop:\n{\n  foreach thread_garbage_bin\n     pull all chunks, insert them into merge set, then merge them if possible\n\n  foreach size class\n     refill the queue\n        if queue was empty... grow the queue by 4\n        if queue was full... increment full count\n            if full count > N then reclaim 25% and reset full count.\n        pull chunks from proper size heap... \n          - if not enough are available then divide up chunks from the \n             next size up.\n        if a chunk reaches the 'page size' and the 'page size' block queue is\n        empty then we can release it back to the OS.\n\n   when there is no merging / reclaiming to do... set a flag and\n   wait on a mutex... next person to call free will wake me up when\n   they see the flag set.\n\n   When choosing empty chunks to place in the queue... pick the chunk from the\n   block with the 'oldest' creation time.  This optimization requires more\n   expensive 'sorting'; we can skip this step whenever there is demand for\n   'all chunks' of a particular class size, but when there is only demand for\n   a fraction of the available chunks, then, because we are scanning the\n   hash table linearly... \n\n   Each node in the hash table points to prev/next pairs... when a hash is\n   'inserted' its memory location is based on its hash value, but its prev/next\n   is based upon order of arrival.   Thus you can quickly find a node, then\n   extract it like a doubly-linked list.  \n}\n\nMerge Cost:\n  2 hash lookups + 1 hash set and perhaps 2 hash clears\n  3 total calls to city hash...\n\nThe 'free queue' can be a linked list of the 'freed chunks'.\n  Each thread has its 'ready bin' which it will set 'if null', and\n  its pending bin which it will fill if the ready bin is not null.\n  the memory space in the block is converted into a 'next' pointer.\n  no large per-thread 'free queues'.\n\n   queues will adjust in length until they can handle the 'burst' processing rate\n   of the GC thread.\n\n  When the GC thread cannot keep the queues full, then threads fall back on\n  directly allocating their own chunks.\n\n\nOverhead per block: 8 byte header + 4 bytes in free table or 8 bytes in queue.\nQueue sizes adjust \n\nHeader:\nprev + next offsets.\nstart of mmap chunk sets prev to 0\nend of mmap chunk is a header with next = 0.\n"
  },
  {
    "path": "malloc2.cpp",
    "content": "/**\n *   Each thread has its own 'arena' where it can allocate 'new' blocks of whatever size it needs (buckets). After\n *   a thread is done with memory it places it in a garbage collection queue.\n *\n *   The garbage collector follows each thread's trash bin and moves the blocks into a recycled list that\n *   all other threads can pull from.\n *\n *   The garbage collector can grow these queues as necessary and shrink them as time progresses.\n */\n\n#include <vector>\n//#include \"mmap_alloc.hpp\"\n#include \"disruptor.hpp\"\n#include <thread>\n#include \"fast_rand.cpp\"\n\nusing namespace disruptor;\n\n#define PAGE_SIZE (4*1024*1024)\n#define BENCH_SIZE ( (2024) )\n#define ROUNDS 200000 \n#define LOG2(X) ((unsigned) (8*sizeof (unsigned long long) - __builtin_clzll((X)) - 1))\n\nstruct block_header\n{\n   uint32_t   _page_pos; // how far from start of page\n   uint32_t   _prev;\n   uint32_t   _next;\n   uint32_t   _timestamp;// creation time... we want to use 'old blocks' first\n                         // because they are most likely to contain long-lived objects\n   size_t calc_size(){ return _next - _page_pos;   }\n   int calc_bin_num(){ return LOG2(calc_size())+1; }\n};\nblock_header* allocate_block_page();\n\n/**\n *  4 MB chunk of memory that gets divided up\n *  'on request', rounded to the nearest multiple\n *  of 128 bytes so that it can be binned/cached\n *  effectively.\n */\nstruct page\n{\n  block_header   data[PAGE_SIZE/sizeof(block_header)]; \n};\n\nclass thread_allocator\n{\n  public:\n    void    free( char* c )\n    {\n      block_header* b = reinterpret_cast<block_header*>(c) - 1;\n      int bin = b->calc_bin_num();\n      if( _cache_pos[bin] > _cache_end[bin] - 32 )\n      {\n         _cache[bin].at(_cache_end[bin]++) = c;\n         return;\n      }\n      \n      auto pos = _gc_read_end_buffer;\n      _garbage_bin.at(pos) = c;\n      _gc_read_end_buffer = pos + 1;\n      /*\n      _gc_read_end_buffer = pos + 1;\n      */\n  
    if( _gc_read_end_buffer - _gc_read_end_last_write > 10 )\n      {\n        _gc_read_end = _gc_read_end_last_write = _gc_read_end_buffer;\n      }\n    }\n\n    char*   alloc( size_t s );\n\n    static thread_allocator& get()\n    {\n        static __thread thread_allocator* tld = nullptr;\n        if( !tld )  // new is not an option\n        { \n            tld = reinterpret_cast<thread_allocator*>( malloc(sizeof(thread_allocator))/*mmap_alloc( sizeof(thread_allocator)*/ );\n            tld = new (tld) thread_allocator(); // inplace construction\n\n            // TODO: allocate  pthread_threadlocal var, attach a destructor /clean up callback\n            //       to that variable... \n        }\n        return *tld;\n    }\n\n  protected:\n    char*  split_chunk( char* c, size_t l );\n\n    thread_allocator();\n    ~thread_allocator();\n\n    friend class garbage_collector;\n    \n    int64_t           _gc_begin;               // how far has gc processed\n    int64_t           _pad[7];                 // save the cache lines/prevent false sharing\n    int64_t           _gc_read_end;            // how far can gc read\n    int64_t           _pad2[7];                // save the cache lines/prevent false sharing\n    int64_t           _gc_read_end_buffer;     // cache writes to gc_read_end to every 10 writes\n    int64_t           _gc_read_end_last_write; // cache writes to gc_read_end to every 10 writes\n    int64_t           _cache_pos[32];\n    int64_t           _cache_end[32];\n\n    char*   get_garbage( int64_t pos ) // grab a pointer previously claimed.\n    {\n      // we may have to dynamically reallocate our gbin\n      return _garbage_bin.at(pos);\n    }\n    block_header*               _next_block;\n    ring_buffer<char*,1024*8>   _garbage_bin;\n    ring_buffer<char*,4>        _cache[32];\n};\n\n\ntypedef thread_allocator* thread_alloc_ptr;\n\n\n/**\n *   Polls all threads for freed items.\n *   Upon receiving a freed item, it will look\n *   at its size 
and move it to the proper recycle\n *   bin for other threads to consume.\n *\n *   When there is less work to do, the garbage collector\n *   will attempt to combine blocks into larger blocks\n *   and move them to larger cache sizes until it\n *   ultimately 'completes a page' and returns it to\n *   the system.  \n *\n *   From the perspective of the 'system' an alloc\n *   involves a single atomic fetch_add.\n *\n *   A free involves a non-atomic store.\n *\n *   No other sync is necessary.\n */\nclass garbage_collector\n{\n  public:\n    garbage_collector();\n    ~garbage_collector();\n    /**\n     *  Handles objects of the same size.\n     */\n    class recycle_bin\n    {\n       public:\n          recycle_bin(int num = 0)\n          :_next_write(0),_write_pos(0),_read_pos(0),_bin_num(num)\n          {\n          }\n          void sync_write_pos()\n          {\n     //       ((std::atomic<int64_t>*)&_write_pos)->load();\n          }\n\n          int64_t                       _next_write;\n          int64_t                       _pad0[7];\n          int64_t                       _write_pos;\n          int64_t                       _pad[7];\n          std::atomic<int64_t>          _read_pos;\n          int64_t                       _pad2[7];\n          ring_buffer<char*,1024*256>   _free_bin;\n          int                           _bin_num;\n    };\n\n    std::atomic<int64_t>  _sync;\n\n    int get_bin_num( size_t s )\n    {\n      return LOG2(s)+1;\n    }\n\n    recycle_bin&  get_bin( size_t bin_num ) \n    { \n        assert( bin_num < 32 );\n        return _bins[bin_num];\n    }\n\n    void register_allocator( thread_alloc_ptr ta );\n    void unregister_allocator( thread_alloc_ptr ta );\n\n    static garbage_collector& get()\n    {\n        static garbage_collector gc;\n        return gc;\n    }\n  private:\n    static void  run();\n    void  recycle( char* c );\n\n    std::thread                _thread;\n    recycle_bin                _bins[32];\n    
std::atomic<uint32_t>      _next_talloc;\n    thread_alloc_ptr           _tallocs[128];\n    static std::atomic<bool>   _done;\n};\nstd::atomic<bool> garbage_collector::_done(false);\n\ngarbage_collector::garbage_collector()\n:_thread( &garbage_collector::run )\n{\n  memset( _tallocs, 0, sizeof(_tallocs) );\n}\ngarbage_collector::~garbage_collector()\n{\n  _done.store(true, std::memory_order_release );\n  _thread.join();\n}\n\nvoid garbage_collector::register_allocator( thread_alloc_ptr ta )\n{\n  printf( \"registering thread allocator %p\\n\", ta );\n  // TODO: just lock here... \n  auto pos = _next_talloc.fetch_add(1);\n  _tallocs[pos] = ta;\n}\nvoid garbage_collector::unregister_allocator( thread_alloc_ptr ta )\n{\n  for( int i = 0; i < 128; ++i )\n  {\n    if( _tallocs[i] == ta ) \n    {\n      _tallocs[i] = nullptr;\n    }\n  }\n}\n\nvoid  garbage_collector::run()\n{\n    garbage_collector& self = garbage_collector::get();\n    while( true )\n    {\n        bool found_work = false;\n        for( int i = 0; i < 128; i++ )\n        {\n             // TODO: not safe assumption, threads can come/go at will\n             // leaving holes... 
thread cleanup code needs locks around it\n             // to prevent holes..\n            if( self._tallocs[i] != nullptr ) \n            {\n                auto b = self._tallocs[i]->_gc_begin;\n                auto e = self._tallocs[i]->_gc_read_end;\n\n                if( b != e ) found_work = true;\n                for( auto p = b; p < e; ++p )\n                {\n                    char* c = self._tallocs[i]->get_garbage(p);\n\n\n                    self.recycle( c);\n                }\n                self._tallocs[i]->_gc_begin = e; \n            }\n        }\n        if( !found_work ) \n        {\n        //  usleep(0);\n            if( _done.load( std::memory_order_acquire ) ) return;\n        }\n    }\n}\n\nvoid garbage_collector::recycle( char* c )\n{\n   block_header* h = ((block_header*)c)-1;\n   assert( h->_next - h->_page_pos > 0 );\n   recycle_bin& b = get_bin( get_bin_num(h->_next - h->_page_pos)  );\n   auto p = b._next_write++;\n   while( b._free_bin.at(p) != nullptr )\n   {\n//      fprintf( stderr, \"opps.. someone left something behind...\\n\" );\n      p = b._next_write++;\n   }\n   b._free_bin.at(p) = c;\n   b._write_pos = p;\n//   if( b._write_pos % 256 == 128 ) \n //     b.sync_write_pos();\n}\n\nblock_header* allocate_block_page()\n{\n    fprintf( stderr, \"#\" );\n    auto limit = malloc(PAGE_SIZE);//mmap_alloc( PAGE_SIZE );\n\n    block_header* _next_block = reinterpret_cast<block_header*>(limit);\n    _next_block->_page_pos = 0;\n    _next_block->_prev = 0;\n    _next_block->_next = PAGE_SIZE; // next block always goes to end...; \n    _next_block->_timestamp = 0; // TODO... 
\n    return _next_block;\n}\n\n\nthread_allocator::thread_allocator()\n{\n  _gc_begin = 0;\n  _gc_read_end = 0;\n  _gc_read_end_buffer = 0;\n  _gc_read_end_last_write = 0;\n  _next_block = allocate_block_page();\n  memset( _cache_pos, 0, sizeof(_cache_pos) );\n  memset( _cache_end, 0, sizeof(_cache_end) );\n\n  garbage_collector::get().register_allocator(this);\n}\n\nthread_allocator::~thread_allocator()\n{\n  // give the rest of our allocated chunks to the gc thread\n  free( reinterpret_cast<char*>(_next_block+1) ); \n  garbage_collector::get().unregister_allocator(this);\n\n  // GARBAGE COLLECTOR must do the mmap free because we don't know\n  // when it will notice this thread going away... \n  // TODO: post a message to GC to track thread cleanup.\n  \n  // mmap_free( this, sizeof(*this) );\n}\n\n/**\n *  returns len bytes starting at s, potentially freeing \n *  anything after s+len.\n */\nchar* thread_allocator::split_chunk( char* s, size_t len )\n{\n  return s; \n}\n\nchar* thread_allocator::alloc( size_t s )\n{\n    assert( s > 0 );\n    s = 64*((s + 63)/64); // multiples of 64 bytes\n\n    if( s+sizeof(block_header) >= PAGE_SIZE  )\n    {\n       assert( false );\n       // do direct mmap \n      return nullptr;\n    }\n    int bin_num = garbage_collector::get().get_bin_num( s );\n\n    int limit = std::min<int>(bin_num + 4,32);\n    for( int i = bin_num; i < limit; ++i )\n    {\n      if( _cache_pos[i] < _cache_end[i] )\n      {\n         char* c = _cache[i].at(_cache_pos[i]);\n         ++_cache_pos[i];\n\n         return split_chunk( c, s );\n      }\n    }\n    static int64_t hit = 0;\n    static int64_t miss = 0;\n    static int64_t sync_count = 0;\n    ++sync_count;\n //   if( sync_count % 64  == 63 ) \n //       rb->sync_write_pos();\n\n\n    int end_bin = bin_num+1;// + 4;\n    for( ; bin_num < end_bin; ++ bin_num )\n    {\n       garbage_collector::recycle_bin* rb = &garbage_collector::get().get_bin( bin_num );\n       while( rb )\n       {\n       
   // TODO: ATOMIC ... switch to non-atomic check\n          auto write_pos = rb->_write_pos;\n         // printf( \"recyclebin wirte_pos: %d  read_cur.begin %d\\n\", write_pos, rb->_read_cur.pos().aquire()  );\n       \n          auto avail = write_pos - *((int64_t*)&rb->_read_pos);\n          if(  avail > 16 )// /*.load( std::memory_order_relaxed )*/ < write_pos )\n          {\n             // ATOMIC CLAIM FROM SHARED POOL... MOST EXPENSIVE OP WE HAVE...\n             //auto pos = rb->_read_cur.pos().atomic_increment_and_get(1)-1;\n             //auto pos = rb->_read_pos.fetch_add(4,std::memory_order_relaxed);\n             auto pos = rb->_read_pos.fetch_add(8);//,std::memory_order_acquire);\n             auto e = pos + 8;\n             while( pos < e )\n             {\n                char* b = rb->_free_bin.at(pos);\n                if( b )\n                {\n                   _cache[bin_num].at(_cache_end[bin_num]++) = b;\n                   rb->_free_bin.at(pos) = nullptr;\n                } \n                else\n                {\n         //         fprintf( stderr, \"read too much..\\n\" );\n                }\n                ++pos;\n             }\n       \n             if( _cache_pos[bin_num] < _cache_end[bin_num] )\n             {\n                char* c = _cache[bin_num].at(_cache_pos[bin_num]);\n                ++_cache_pos[bin_num];\n                ++hit;\n                return c;\n             }\n          } // else there are no blocks our size... 
go up a size or two?..\n          break;\n       }\n       ++miss;\n   //    if( miss % 10000 == 0 ) fprintf( stderr, \"\\nHit: %lld    Miss: %lld          \\r\", hit, miss );\n    }\n    // we already checked the 'best fit' bin and failed to find \n    // anything that size ready, so we can allocate it from our \n    // thread local block\n\n //   printf( \"allocating new chunk from thread local page\\n\" );\n\n    // make sure the thread local block has enough space...\n    if( _next_block->_page_pos + s + sizeof(block_header) >= PAGE_SIZE )\n    {\n        // not enough space left in current block.. free it... if it has any space at all.\n        if( _next_block->_page_pos != PAGE_SIZE )\n        {\n            free( (char*)(_next_block+1) );\n        }\n\n        _next_block = allocate_block_page();\n        assert( _next_block != nullptr );\n    }\n   // fprintf( stderr, \"alloc %d   at block pos %d\\n\", s+1, _next_block->_page_pos );\n\n    block_header* new_b   = _next_block;\n    _next_block = new_b + 1 + s/sizeof(block_header);\n\n    _next_block->_page_pos  = new_b->_page_pos + sizeof(block_header) + s;\n    _next_block->_prev      = new_b->_page_pos; \n    _next_block->_next      = PAGE_SIZE; // next block always goes to end...\n    _next_block->_timestamp = new_b->_timestamp; // TODO...\n\n    new_b->_next            = _next_block->_page_pos;\n    \n    // our work here is done give them the newly allocated block (pointing after the header\n    return reinterpret_cast<char*>(new_b+1);\n}\n\nchar* malloc2( int s )\n{\n  return thread_allocator::get().alloc(s);\n}\n\nvoid  free2( char* s )\n{\n  return thread_allocator::get().free(s);\n}\n\n\n/*  SEQUENTIAL BENCH\nint main( int argc, char** argv )\n{\n  if( argc == 2 && argv[1][0] == 'S' )\n  {\n     printf( \"malloc2\\n\");\n     for( int i = 0; i < 50000000; ++i )\n     {\n        char* test = malloc2( 128 );\n        assert( test != nullptr );\n        test[0] = 1;\n        free2( test );\n     }\n  
}\n  if( argc == 2 && argv[1][0] == 's' )\n  {\n     printf( \"malloc\\n\");\n     for( int i = 0; i < 50000000; ++i )\n     {\n        char* test = (char*)malloc( 128 );\n        assert( test != nullptr );\n        test[0] = 1;\n        free( test );\n     }\n  }\n  fprintf( stderr, \"done\\n\");\n // sleep(5);\n  return 0;\n}\n*/\n\n/* RANDOM BENCH */\nstd::vector<char*>  buffers[16];\nvoid pc_bench_worker( int pro, int con, char* (*do_alloc)(int s), void (*do_free)(char*)  )\n{\n  for( int r = 0; r < ROUNDS; ++r )\n  {\n      for( int x = 0; x < buffers[pro].size()/2 ; ++x )\n      {\n         uint32_t p = fast_rand() % buffers[pro].size();\n         if( !buffers[pro][p] )\n         {\n           uint64_t si = 32 + fast_rand()%(8096*16); //4000;//32 + fast_rand() % (1<<16);\n           auto r = do_alloc( si );\n           assert( r != nullptr );\n         //  assert( r[0] != 99 ); \n         //  r[0] = 99; \n           buffers[pro][p] = r;\n         }\n      }\n      for( int x = 0; x < buffers[con].size()/2 ; ++x )\n      {\n         uint32_t p = fast_rand() % buffers[con].size();\n         assert( p < buffers[con].size() );\n         assert( con < 16 );\n         assert( con >= 0 );\n         if( buffers[con][p] ) \n         { \n           //assert( buffers[con][p][0] == 99 ); \n          // buffers[con][p][0] = 0; \n           do_free(buffers[con][p]);\n           buffers[con][p] = 0;\n         }\n      }\n  }\n}\n\n\nvoid pc_bench(int n, char* (*do_alloc)(int s), void (*do_free)(char*)  )\n{\n  for( int i = 0; i < 16; ++i )\n  {\n    buffers[i].resize( BENCH_SIZE );\n    memset( buffers[i].data(), 0, 8 * BENCH_SIZE );\n  }\n\n  std::thread* a = nullptr;\n  std::thread* b = nullptr;\n  std::thread* c = nullptr;\n  std::thread* d = nullptr;\n  std::thread* e = nullptr;\n  std::thread* f = nullptr;\n  std::thread* g = nullptr;\n  std::thread* h = nullptr;\n  std::thread* i = nullptr;\n  std::thread* j = nullptr;\n\n\n int s = 1;\n  switch( n )\n  {\n     case 
10:\n     a = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n     n--;\n     s++;\n     case 9:\n      b = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n     n--;\n     s++;\n     case 8:\n      c = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n     n--;\n     s++;\n     case 7:\n      d = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n     n--;\n     s++;\n     case 6:\n     e = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n     n--;\n     s++;\n     case 5:\n     f = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n     n--;\n     s++;\n     case 4:\n      g = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n     n--;\n     s++;\n     case 3:\n      h = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n     n--;\n     s++;\n     case 2:\n      i = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n     n--;\n     s++;\n     case 1:\n      j = new std::thread( [=](){ pc_bench_worker( n, s, do_alloc, do_free ); } );\n  }\n  if(a)\n  a->join();\n  if(b)\n  b->join();\n  if(c)\n  c->join();\n  if(d)\n  d->join();\n  if(e)\n  e->join();\n  if(f)\n  f->join();\n  if(g)\n  g->join();\n  if(h)\n  h->join();\n  if(i)\n  i->join();\n  if(j)\n  j->join();\n\n}\nvoid pc_bench_st(char* (*do_alloc)(int s), void (*do_free)(char*)  )\n{\n  for( int i = 0; i < 16; ++i )\n  {\n    buffers[i].resize( BENCH_SIZE );\n    memset( buffers[i].data(), 0, 8 * BENCH_SIZE );\n  }\n  int i = 0;\n  std::thread a( [=](){ pc_bench_worker( 1, 1, do_alloc, do_free ); } );\n  a.join();\n}\n#include <tbb/scalable_allocator.h>\n\nchar* do_malloc(int s)\n{ \n//    return (char*)::malloc(s); \n   return (char*)scalable_malloc(s);\n}\nvoid  do_malloc_free(char* c)\n{ \n    scalable_free(c);\n  // ::free(c); \n}\n\nint main( int argc, char** argv )\n{\n  if( argc > 2 && 
argv[1][0] == 'm' )\n  {\n    std::cerr<<\"malloc multi\\n\";\n    pc_bench( atoi(argv[2]), do_malloc, do_malloc_free );\n  }\n  if( argc > 2 && argv[1][0] == 'M' )\n  {\n    std::cerr<<\"hash malloc multi\\n\";\n    pc_bench( atoi(argv[2]), malloc2, free2 );\n  }\n  if( argc > 1 && argv[1][0] == 's' )\n  {\n    std::cerr<<\"malloc single\\n\";\n    pc_bench_st( do_malloc, do_malloc_free );\n  }\n  if( argc > 1 && argv[1][0] == 'S' )\n  {\n    std::cerr<<\"hash malloc single\\n\";\n    pc_bench_st( malloc2, free2 );\n  }\n  return 0;\n}\n\n\n\n\n\n\n\n"
  },
  {
    "path": "malloc2.hpp",
    "content": "\n\n\n\n"
  },
  {
    "path": "malloc3.cpp",
    "content": "/**\n *   Each thread has its own 'arena' where it can allocate 'new' blocks of whatever size it needs (buckets). After\n *   a thread is done with memory it places it in a garbage collection queue.\n *\n *   The garbage collector follows each thread's trash bin and moves the blocks into a recycled list that\n *   all other threads can pull from.\n *\n *   The garbage collector can grow these queues as necessary and shrink them as time progresses.\n */\n\n#include <vector>\n#include <unordered_set>\n\n#include \"mmap_alloc.hpp\"\n#include \"disruptor.hpp\"\n#include <thread>\n#include <stdint.h>\n#include <memory.h>\n#include <stdlib.h>\n#include <iostream>\n#include <assert.h>\n#include <unistd.h>\n#include <sstream>\n//#include \"rand.cpp\"\n\n\nusing namespace disruptor;\n\n#define PAGE_SIZE (4*1024*1024)\n#define BENCH_SIZE ( (1024) )\n#define ROUNDS 200000\n#define LOG2(X) ((unsigned) (8*sizeof (unsigned long long) - __builtin_clzll((X)) - 1))\n#define NUM_BINS 32 // log2(PAGE_SIZE)\n\nclass block_header\n{\n   public:\n      block_header* next()\n      { \n         assert(this);\n         if( _size > 0 ) return reinterpret_cast<block_header*>(data()+_size); \n         else return nullptr;\n      }\n      block_header* prev()\n      { \n         assert(this);\n         if( _prev_size <= 0 ) return nullptr;\n         return reinterpret_cast<block_header*>(reinterpret_cast<char*>(this) - _prev_size - 8);\n      }\n\n      enum flags_enum\n      {\n         unknown  = 0,\n         idle     = 1, // in storage, mergable\n         queued   = 2, // in waiting queue...\n         cached   = 4, // cached in thread\n         active   = 8, // in use by app\n         mergable = 16 // track this or will false sharing kill me?\n      };\n\n      struct queue_state // the block is serving as a linked-list node\n      {\n          block_header* next;\n          block_header* prev;\n      };\n\n      void set_state( 
flags_enum e )\n      {\n         _flags = e;\n      }\n      flags_enum get_state() { return (flags_enum)_flags; }\n\n      queue_state& as_queue_node()\n      {\n         return *reinterpret_cast<queue_state*>(data());\n      }\n\n      queue_state& init_as_queue_node()\n      {\n         // _flags |= queued;\n         queue_state& s = as_queue_node();\n         s.next = nullptr;\n         s.prev = nullptr;\n         return s;\n      }\n\n\n      void init( int s )\n      {\n         _prev_size = 0;\n         _size = - (s-8);\n      }\n\n      char*         data()      { return ((char*)this)+8; }\n      int           size()const { return abs(_size); }\n\n      int raw_size()const { return _size; }\n      int raw_prev_size()const { return _prev_size; }\n\n\n      int        calc_forward_extent()\n      {\n         // fprintf( stderr, \"pos %p + %d  -> \", this, _size );\n          int s = size() + 8;\n          auto n = next();\n          if( n ) s += n->calc_forward_extent();\n          return s;\n      }\n\n      int       page_size()\n      {\n          auto h = head();\n          assert(h);\n          return head()->calc_forward_extent(); \n      }\n      block_header*       head()\n      {\n          auto pre = prev();\n          if( !pre ) return this;\n          do {\n            auto next_prev = pre->prev();\n            if( !next_prev ) return pre;\n            pre = next_prev;\n          } while ( true );\n      }\n\n      /** create a new block at p and return it */\n      block_header* split_after( int s )\n      {\n         assert( s >= 32 );\n//         fprintf( stderr, \"prev_size %d  _size %d  Initial Error: %d\\n\", _prev_size, _size, int(PAGE_SIZE - this->page_size()) );\n         assert( PAGE_SIZE == page_size() );\n         \n         if(  (size() - 8 -32) < s ) return nullptr;// no point in splitting to less than 32 bytes\n\n         block_header* n = reinterpret_cast<block_header*>(data()+s);\n         n->_prev_size   = s;\n         n->_size  
      = size() -s -8;\n         \n         if( _size < 0 ) \n            n->_size = -n->_size; // we just split the tail\n\n         _size = s; // this node now has size s\n         assert( size() >= s );\n         assert( PAGE_SIZE == n->page_size() );\n         assert( PAGE_SIZE == page_size() );\n         return n;\n      }\n\n      // merge this block with next, return head of new block.\n      block_header* merge_next()\n      {\n         assert( PAGE_SIZE == page_size() );\n         assert( _flags == block_header::idle );\n         auto nxt = next();\n         if( !nxt ) return this;\n         assert( nxt->page_size() == PAGE_SIZE );\n\n         // next must be in the idle state\n         if( nxt->_flags != idle ) return this;\n\n         // extract node from the double link list it is in.\n         queue_state& qs = nxt->as_queue_node();\n         if( qs.next )\n         {\n      //      assert( qs.next->as_queue_node().prev == nxt );\n            qs.next->as_queue_node().prev = qs.prev;\n         }\n\n         if( qs.prev )\n         {\n       //     assert( qs.prev->as_queue_node().next == nxt );\n            qs.prev->as_queue_node().next = qs.next;\n         }\n\n         // now we are free to merge the memory\n         _size += nxt->size() + 8;\n         fprintf( stderr, \"merged to size %d\\n\", _size );\n         if( nxt->_size < 0 ) _size = -_size;\n\n         nxt = next(); // find the new next.\n         if( nxt )\n         {\n           nxt->_prev_size = size();\n         }\n         assert( PAGE_SIZE == page_size() );\n         if( next() ) assert( PAGE_SIZE == next()->page_size() );\n         if( prev() ) assert( PAGE_SIZE == prev()->page_size() );\n         return this;\n      }\n\n      // merge this block with the prev, return the head of new block\n      block_header* merge_prev() \n      {\n         _flags = idle; // mark myself as idle/mergable\n         auto p = prev();\n         if( !p ) return this;\n         if( p->_flags != idle ) 
return this;\n         return p->merge_next();\n      }\n\n   private:\n      int32_t   _prev_size; // size of previous header.\n      int32_t   _size:24; // offset to next, negative indicates tail, 8 MB max, may be negative\n      int32_t   _flags:8; // block state flags (see flags_enum)\n};\nstatic_assert( sizeof(block_header) == 8, \"Compiler is not packing data\" );\n\n/** returns a new block page allocated via mmap \n *  The page has 2 block headers (head+tail) defined\n *  and head is returned.\n **/\nblock_header* allocate_block_page();\n\nstruct block_list_node\n{\n    block_list_node():next(nullptr){};\n    block_list_node* next;\n\n    block_header*    header()\n    {\n      return  reinterpret_cast<block_header*>(reinterpret_cast<char*>(this)-8);\n    }\n\n    int count()\n    {\n      int count = 1;\n      auto n = next;\n      while( n )\n      {\n        ++count;\n        assert( count < 1000 );\n        n = n->next;\n      }\n      return count;\n    }\n\n    block_list_node* find_end() \n    {\n       block_list_node* n = this;\n       while( n->next )\n       {\n          n = n->next;\n       }\n       return n;\n    }\n};\n\n\nclass thread_allocator\n{\n  public:\n    char*   alloc( size_t s );\n\n    void    free( char* c )\n    {\n        auto node = reinterpret_cast<block_header*>(c-8); // recover the header just before the data\n        node->init_as_queue_node().next = _gc_on_deck;\n        if( !_gc_at_bat )\n        {\n           _gc_at_bat = node;\n           _gc_on_deck = nullptr;\n        }\n        else\n        {\n           _gc_on_deck = node;\n        }\n    }\n\n    static thread_allocator& get()\n    {\n        static __thread thread_allocator* tld = nullptr;\n        if( !tld )  // new is not an option\n        { \n            tld = reinterpret_cast<thread_allocator*>( mmap_alloc( sizeof(thread_allocator) ) );\n            tld = new (tld) thread_allocator(); // inplace construction\n\n            // TODO: allocate  pthread_threadlocal var, attach a 
destructor /clean up callback\n            //       to that variable... \n        }\n        return *tld;\n    }\n\n    void print_cache()\n    {\n       for( int i = 0; i < NUM_BINS; ++i )\n       {\n          fprintf( stderr, \"%d]  size %d   \\n\", i, _bin_cache_size[i] );\n       }\n    }\n\n  protected:\n\n    bool          store_cache( block_header* h )\n    {\n      assert( h->page_size() == PAGE_SIZE );\n       auto bin = LOG2( h->size() );\n      if( _bin_cache[bin] == nullptr )\n      {\n        _bin_cache[bin] = h;\n        return true;\n      }\n      return false;\n      /*\n       assert( h != nullptr );\n\n       if( _bin_cache_size[bin] < 4 )\n       {\n          if( _bin_cache_size[bin] == 0 ) assert( nullptr == _bin_cache[bin] );\n\n          block_list_node* bln = reinterpret_cast<block_list_node*>(h->data() );\n          bln->next = _bin_cache[bin];\n          _bin_cache[bin] = bln;\n          _bin_cache_size[bin]++;\n          assert( _bin_cache_size[bin] == _bin_cache[bin]->count() );\n          return true;\n       }\n       fprintf( stderr, \"cache full bin %d size %d\", bin, _bin_cache_size[bin] );\n       assert( _bin_cache[bin] != nullptr );\n       return false;\n       */\n    }\n\n    block_header* fetch_cache( int bin )\n    {\n       if( _bin_cache[bin] )\n       {\n         block_header* b = _bin_cache[bin];\n         assert( b->page_size() == PAGE_SIZE );\n         _bin_cache[bin] = nullptr;\n         return b;\n       }\n       return nullptr;\n      /*\n       if( _bin_cache_size[bin] > 0 )\n       {\n          assert( _bin_cache_size[bin] == _bin_cache[bin]->count() );\n          assert( _bin_cache[bin] );\n          auto h = _bin_cache[bin];\n          _bin_cache[bin] = h->next;\n          _bin_cache_size[bin]--;\n          auto head = h->header();\n          assert( head->page_size() == PAGE_SIZE );\n          assert( LOG2(head->size()) >= bin );\n          assert( LOG2(head->size()) == bin );\n          return head;\n       
}\n       assert( !_bin_cache[bin] );\n       */\n       return nullptr;\n    }\n\n\n\n    block_header* fetch_block_from_bin( int bin );\n\n    thread_allocator();\n    ~thread_allocator();\n\n    friend class garbage_collector;\n    bool                         _done;       // cleanup and remove from list.\n    std::atomic<block_header*>   _gc_at_bat;  // where the gc pulls from.\n    uint64_t                     _gc_pad[7];  // gc thread and this thread should not false-share these values\n    block_header*                _gc_on_deck; // where we save frees while waiting on gc to bat.\n\n    /** \n     * called by gc thread and pops the at-bat free list\n     */\n    block_header*  get_garbage() // grab a pointer previously claimed.\n    {\n      if( block_header* gar = _gc_at_bat.load() )\n      {\n         _gc_at_bat.store(nullptr);// = nullptr;\n         return gar;\n      }\n      return nullptr;\n    }\n    block_header*               _bin_cache[NUM_BINS];      // head of cache for specific bin\n    int16_t                     _bin_cache_size[NUM_BINS]; // track num of nodes in cache\n\n    thread_allocator*           _next; // used by gc to link thread_allocs together\n};\n\n\ntypedef thread_allocator* thread_alloc_ptr;\n\n\n/**\n *   Polls all threads for freed items.\n *   Upon receiving a freed item, it will look\n *   at its size and move it to the proper recycle\n *   bin for other threads to consume.\n *\n *   When there is less work to do, the garbage collector\n *   will attempt to combine blocks into larger blocks\n *   and move them to larger cache sizes until it\n *   ultimately 'completes a page' and returns it to\n *   the system.  
\n *\n *   From the perspective of the 'system' an alloc\n *   involves a single atomic fetch_add.\n *\n *   A free involves a non-atomic store.\n *\n *   No other sync is necessary.\n */\nclass garbage_collector\n{\n  public:\n    garbage_collector();\n    ~garbage_collector();\n\n    class recycle_bin\n    {\n       public:\n          recycle_bin()\n          :_read_pos(0),_full_count(0),_full(2),_write_pos(0)\n          {\n             memset( &_free_queue, 0, sizeof(_free_queue) );\n             _free_list = nullptr;\n          }\n\n          // read the _read_pos without any atomic sync, we only care about an estimate\n          int64_t available()                            { return _write_pos - *((int64_t*)&_read_pos); }\n          // reserve right to read the next num spots from buffer\n          int64_t claim( int64_t num )                   { return _read_pos.fetch_add(num); }\n          block_header* get_block( int64_t claim_pos )   { return _free_queue.at(claim_pos); }\n          void          clear_block( int64_t claim_pos ) { _free_queue.at(claim_pos) = nullptr; }\n\n          // determines how many chunks should be required to consider this bin full.\n          // TODO: this method needs to be tweaked to factor in 'time'... 
as it stands\n          // now the GC loop will be very aggressive at shrinking the queue size\n          int64_t       check_status()\n          {\n              return 8 - available();\n            /*\n              auto av = available();\n              int consumed = _last_fill - av;\n              if( consumed > _last_fill/2 ) ++_full;\n\n              if( av <= 0 )\n              {\n                 // apparently there is high demand, the consumers cleaned us out.\n                 _full *= 2; // exponential growth..\n                 _full = std::min( _full+4, _free_queue.get_buffer_size() -1 );\n                 fprintf( stderr, \"%d  blocks available,   _full %d\\n\", int(av), int(_full) );\n              }\n              else if( av == _full )\n              {\n                 // apparently no one wanted any... we should shrink what we consider full\n                 _full -= 4; // fast back off\n                 if( _full < 2 ) _full = 2;\n              }\n              else // av < _full\n              {\n                 // some, but not all have been consumed... \n                 // if less than half have been consumed... reduce size,\n                 // else keep the size the same.\n                 if( av > _full/2 )\n                 {\n                     _full--; // reduce full size, slow back off\n                     if( _full < 2 ) _full = 2;\n                     return  _full - av; \n                 }\n                 else // more than half consumed... 
keep full size the same, refill\n                 {\n                 }\n              }\n               fprintf( stderr, \"%d  blocks available,   _full %d  post %d\\n\", int(av), int(_full), int(_full-av) );\n              return _full - av; \n              */\n          }\n\n\n\n          ring_buffer<block_header*,128>         _free_queue; \n          std::atomic<int64_t>                  _read_pos; //written to by read threads\n          int64_t _pad[7];     // below this point is written to by gc thread\n          int64_t _full_count; // how many times gc thread checked and found the queue full\n          int64_t _full;       // limit the number of blocks kept in queue\n          int64_t _write_pos;  // read by consumers to know the last valid entry.\n          int64_t _last_fill;  // status of the buffer at the last check.\n\n          void push( block_header* h )\n          {\n             h->set_state( block_header::idle );\n             block_header::queue_state& qs = h->init_as_queue_node(); \n             qs.next = _free_list;\n             if( _free_list ) \n             {\n                _free_list->as_queue_node().prev = h;\n             }\n             _free_list = h;\n          }\n\n          block_header* pop()\n          {\n              auto tmp = _free_list;\n              if( _free_list ) \n              {\n                 auto n = _free_list->as_queue_node().next;\n                 if( n ) \n                    n->as_queue_node().prev = nullptr;\n                 _free_list = n;\n                 assert( tmp->get_state() == block_header::idle );\n                 tmp->set_state( block_header::unknown ); // TODO: only if DEBUG\n              }\n              return tmp;\n          }\n\n          // blocks are stored as a double-linked list\n          block_header* _free_list;\n    };\n\n    recycle_bin& find_cache_bin_for( block_header* h ) \n    { \n      assert(h!=nullptr);\n      int bn = get_bin_num(h->size());\n  //    fprintf( stderr,  
\"block header size %d  is cached in bin %d holding sizes %d\\n\", (int)h->size(), bn, (1<<(bn)) );\n      return get_bin(get_bin_num( h->size() )); \n    }\n\n    int get_bin_num( size_t s )\n    {\n      return LOG2(s);\n    }\n\n    recycle_bin&  get_bin( size_t bin_num ) \n    { \n        assert( bin_num < NUM_BINS );\n        return _bins[bin_num];\n    }\n\n    void register_allocator( thread_alloc_ptr ta );\n\n    static garbage_collector& get()\n    {\n        static garbage_collector gc;\n        return gc;\n    }\n  private:\n    static void  run();\n    // threads that we are actively looping on\n    std::atomic<thread_alloc_ptr> _thread_head;\n\n    std::thread                _thread; // gc thread.. doing the hard work\n    recycle_bin                _bins[NUM_BINS];\n\n\n    static std::atomic<bool>   _done;\n};\nstd::atomic<bool> garbage_collector::_done(false);\n\ngarbage_collector::garbage_collector()\n:_thread_head(nullptr),_thread( &garbage_collector::run )\n{\n   fprintf( stderr, \"allocating garbage collector\\n\" );\n}\ngarbage_collector::~garbage_collector()\n{\n  _done.store(true, std::memory_order_release );\n  _thread.join();\n}\n\nvoid garbage_collector::register_allocator( thread_alloc_ptr ta )\n{\n  printf( \"registering thread allocator %p\\n\", ta );\n\n  auto* stale_head = _thread_head.load(std::memory_order_relaxed);\n  do { ta->_next = stale_head;\n  }while( !_thread_head.compare_exchange_weak( stale_head, ta, std::memory_order_release ) );\n}\n\nvoid  garbage_collector::run()\n{\n    fprintf( stderr, \"Starting GC loop\\n\");\n    try\n    {\n      garbage_collector& self = garbage_collector::get();\n      while( true )\n      {\n          thread_alloc_ptr cur_al = *((thread_alloc_ptr*)&self._thread_head);\n          bool found_work = false;\n      \n          // for each thread, grab all of the free chunks and move them into\n          // the proper free set bin, but save the list for a follow-up merge\n          // that takes 
into consideration all free chunks.\n          while( cur_al )\n          {\n              auto cur = cur_al->get_garbage();\n              \n              if( cur )\n              {\n                assert( cur->page_size() == PAGE_SIZE );\n                found_work = true; \n              }\n              \n              while( cur )\n              {\n                  assert( cur->page_size() == PAGE_SIZE );\n                  block_header* nxt = cur->as_queue_node().next;\n                  assert( nxt != cur );\n                  if( nxt ) assert( nxt->page_size() == PAGE_SIZE );\n\n                  assert( cur->page_size() == PAGE_SIZE );\n                  auto before = cur->size();\n                //  fprintf( stderr, \"found free block of size: %d\\n\", cur->size() );\n                  cur->init_as_queue_node();\n                  assert( cur->page_size() == PAGE_SIZE );\n                  cur->set_state( block_header::idle );\n                  assert( cur->page_size() == PAGE_SIZE );\n\n                  cur = cur->merge_next();\n               //   cur = cur->merge_prev();\n                  if( before != cur->size() )\n                  fprintf( stderr, \"free block size after merges: %d\\n\", cur->size() );\n              \n                  assert( cur->page_size() == PAGE_SIZE );\n                  recycle_bin& c_bin = self.find_cache_bin_for(cur);\n                  assert( cur->page_size() == PAGE_SIZE );\n              //    fprintf( stderr, \"pushing into bin\\n\" );\n                  c_bin.push(cur); \n                  assert( cur->page_size() == PAGE_SIZE );\n              \n                  cur = nxt;\n                  if( cur ) assert( cur->page_size() == PAGE_SIZE ); // nxt may be null at the end of the list\n              }\n\n              assert( cur_al != cur_al->_next );\n              // get the next thread.\n              cur_al = cur_al->_next;\n          }\n      \n          // for each recycle bin, check the queue to see if it\n          // is getting low and if so, 
put some chunks in play\n          for( int i = 0; i < NUM_BINS; ++i )\n          {\n              garbage_collector::recycle_bin& bin = self._bins[i];\n              auto needed = bin.check_status(); // returns the number of chunks needed\n              if( needed > 0 )\n              {\n                  int64_t next_write_pos = bin._write_pos;\n                  block_header* next = bin.pop();\n\n                  while( next && needed > 0 )\n                  {\n                   //   fprintf( stderr, \"popping block from bin %d and pushing into queue\\n\", i );\n                      found_work = true;\n                      ++next_write_pos;\n                      if( bin._free_queue.at(next_write_pos) )\n                      {\n                          // someone left something behind... \n                      }\n                      else\n                      {\n                          bin._free_queue.at(next_write_pos) = next;\n                          next = bin.pop();\n                      }\n                      --needed;\n                  }\n                  if( next ) bin.push(next); // leftover... \n                  bin._write_pos = next_write_pos;\n              }\n              else if( needed < 0 )\n              {\n                // apparently no one is checking this size class anymore, we can reclaim some nodes.\n                // TODO:  perhaps we only do this if there is no other work found as work implies\n                // that the user is still allocating / freeing objects and thus we don't want to\n                // compete to start freeing cache yet... \n              }\n          }\n          if( !found_work ) usleep( 1000 );\n      \n          if( _done.load( std::memory_order_acquire ) ) return;\n          if( !found_work ) \n          {\n              // reclaim cache\n              // sort... and optimize....\n          }\n      }\n    }\n    catch ( ... 
)\n    {\n        fprintf( stderr, \"gc caught exception\\n\" );\n    }\n    fprintf( stderr, \"exiting gc loop\\n\" );\n}\n\n\nblock_header* allocate_block_page()\n{\n    fprintf( stderr, \"\\n\\n                                                                               ALLOCATING NEW PAGE\\n\\n\" );\n    auto limit = mmap_alloc( PAGE_SIZE );\n\n    block_header* bl = reinterpret_cast<block_header*>(limit);\n    bl->init( PAGE_SIZE );\n    \n    return bl;\n}\n\nthread_allocator::thread_allocator()\n{\n  _done            = false;\n  _next            = nullptr;\n  //_gc_at_bat       = nullptr;\n  _gc_on_deck      = nullptr;\n\n  memset( _bin_cache, 0, sizeof(_bin_cache) );\n  memset( _bin_cache_size, 0, sizeof(_bin_cache_size) );\n  garbage_collector::get().register_allocator(this);\n}\n\nthread_allocator::~thread_allocator()\n{\n  // give the rest of our allocated chunks to the gc thread\n  // free all cache, free _alloc_block\n  _done = true;\n}\n\nint get_min_bin( size_t s )\n{\n  return LOG2(s)+1;\n}\n\nchar* thread_allocator::alloc( size_t s )\n{\n //   fprintf( stderr, \"    alloc %d\\n\", (int)s );\n    if( s == 0 ) return nullptr;\n    size_t data_size = s;\n\n    // we need 8 bytes for the header, then round to the nearest\n    // power of 2.\n    int min_bin = LOG2(s+7)+1; // this is the bin size.\n    s = (1<<min_bin)-8; // the data size is bin size - 8\n    assert( s >= data_size );\n    \n    for( int bin = min_bin; bin < NUM_BINS; ++bin )\n    {\n        block_header* b = fetch_block_from_bin(bin);\n        if( b )\n        {\n           fprintf( stderr, \"found cache in bin %d\\r\", bin );\n           assert( b->page_size() == PAGE_SIZE );\n           block_header* tail = b->split_after( s );\n           assert( b->page_size() == PAGE_SIZE );\n           if( tail ) assert( tail->page_size() == PAGE_SIZE );\n           assert( b->size() >= s );\n           if( tail && !store_cache( tail ) ) \n           {\n              fprintf( stderr, \"unable 
to cache tail, free it\\n\" );\n              this->free( tail->data() );\n           }\n           assert( b->size() >= s );\n           return b->data();\n        }\n    }\n\n\n    block_header* new_page = allocate_block_page();\n    //printf( \"      alloc new block page   %p  _size  %d _prev_size %d  next %p  prev %p\\n\",\n      //    new_page, new_page->_size, new_page->_prev_size, new_page->next(), new_page->prev() );\n    block_header* tail = new_page->split_after(s);\n//    printf( \"      alloc free tail  %p  _size  %d _prev_size %d  next %p  prev %p  tail %p\\n\",\n //         tail, tail->_size, tail->_prev_size, tail->next(), tail->prev(), tail );\n    \n    if( tail && !store_cache( tail ) )\n    {\n       this->free( tail->data() );\n    }\n\n    assert( new_page->size() >= s-8 );\n    return new_page->data();\n}\n\n/**\n *  Checks our local bin first, then checks the global bin.\n *\n *  @return null if no block found in cache.\n */\nblock_header* thread_allocator::fetch_block_from_bin( int bin )\n{\n//    fprintf( stderr, \"fetch cache %d  has %d items remaining\\n\", bin, int(_bin_cache_size[bin]) );    \n    auto lo = fetch_cache(bin);\n    if( lo ) return lo;\n    assert( _bin_cache_size[bin] == 0 );\n\n    garbage_collector& gc              = garbage_collector::get();\n    garbage_collector::recycle_bin& rb = gc.get_bin( bin );\n\n    if( auto avail = rb.available()  )\n    {\n        // claim up to half of the available, just in case 2\n        // threads try to claim at once, they both can, but\n        // don't hold a cache of more than 4 items\n        auto claim_num = 2;//std::min<int64_t>( avail/2, 1 ); \n        // claim_num could now be 0 to 3\n        //claim_num++; // claim at least 1 and at most 4\n\n        // this is our one and only atomic 'sync' operation... 
\n        auto claim_pos = rb.claim( claim_num );\n        auto claim_end = claim_pos + claim_num;\n        bool found = false;\n        while( claim_pos != claim_end )\n        {\n           block_header* h = rb.get_block(claim_pos);\n           if( h )\n           {\n              found = true;\n              rb.clear_block(claim_pos); // let gc know we took it. \n              ++claim_pos;\n              if( claim_pos == claim_end )\n              {\n                  return h;\n              }\n              else if( !store_cache(h ) )\n              {\n                assert( !\"unable to cache something we asked for!\"  );\n              }\n           }\n           else // oops... I guess 3 tried to claim at once...\n           {\n              ++claim_pos;\n              // drop it on the floor and let the\n              // gc thread pick it up next time through the\n              // ring buffer.\n           }\n        }\n        if( found ) \n        {\n           fprintf( stderr, \"apparently we overdrew the queue...\\n\" );\n           return fetch_cache(bin); // grab it from the cache this time.\n        }\n    }\n    return nullptr;\n}\n\nchar* malloc2( int s )\n{\n  return thread_allocator::get().alloc(s);\n}\n\nvoid  free2( char* s )\n{\n  return thread_allocator::get().free(s);\n}\n\n\n#include \"bench.cpp\"\n"
  },
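  {
    "path": "example_usage.cpp",
    "content": "// Hypothetical usage sketch, not part of the original sources: it shows the\n// public entry points malloc2/free2 defined at the bottom of the allocator.\n// Each call forwards to the calling thread's thread_allocator instance, so\n// neither path takes a lock.\n#include <cstring>\n\nchar* malloc2( int s );\nvoid  free2( char* s );\n\nint main()\n{\n   char* p = malloc2( 100 );  // rounded up internally to the 128-byte bin\n   std::memset( p, 0, 100 );  // the block is ours until free2 is called\n   free2( p );                // queued on the per-thread free list for the gc thread\n   return 0;\n}\n"
  },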
  {
    "path": "mmap_alloc.hpp",
    "content": "#pragma once\n#include <algorithm>\n#include <cassert>\n#include <cstddef>\n#include <new> // std::bad_alloc\nextern \"C\" {\n#include <fcntl.h>\n#include <sys/mman.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <unistd.h>\n}\n\n\nsize_t pagesize()\n{\n  return ::getpagesize();\n}\n\n// round a byte count up to a whole number of pages; integer math avoids\n// the precision loss of a float-based ceil for large sizes\nsize_t page_count( size_t s )\n{\n    return ( s + pagesize() - 1 ) / pagesize();\n}\n\nchar* mmap_alloc( size_t s, void* loc = 0 )\n{\n   //fprintf( stderr, \"mmap_alloc %llu   %p\\n\", s, loc );\n   const std::size_t pages( page_count(s) );\n   std::size_t size_ = pages * pagesize();\n\n   // only request a fixed address when the caller supplied one;\n   // MAP_FIXED with a null hint would ask for a mapping at address 0\n   const int flags = MAP_PRIVATE | ( loc ? MAP_FIXED : 0 );\n   # if defined(macintosh) || defined(__APPLE__) || defined(__APPLE_CC__)\n    void* limit = ::mmap( loc, size_, PROT_READ | PROT_WRITE, flags | MAP_ANON, -1, 0);\n   # else\n    const int fd( ::open(\"/dev/zero\", O_RDONLY) );\n    assert( -1 != fd);\n    void* limit = ::mmap( loc, size_, PROT_READ | PROT_WRITE, flags, fd, 0);\n    ::close(fd); // the mapping holds its own reference to /dev/zero\n   # endif\n   if( limit == MAP_FAILED ) throw std::bad_alloc();\n   return static_cast<char*>(limit);\n}\n\nvoid mmap_free( void* pos, size_t s )\n{\n   const std::size_t pages( page_count( s) );\n   std::size_t size_ = pages * pagesize();\n   ::munmap( pos, size_);\n}\n"
  }
]