[
  {
    "path": ".gitignore",
    "content": "*~\n*.o\n*.os\n*.lo\n*.so\n*.dylib\n*.la\n*.lai\n*.libs\n*.sw*\nMakefile\nMakefile.fragments\nMakefile.global\nMakefile.objects\nbuild\nconfig.*\nconfigure\nconfigure.in\ndoc\nCVS\n.sconf_temp\n.sconsign.dblite\n.deps\nac*.m4\ninstall-sh\nlibtool\nltmain.sh\nmissing\nmkinstalldirs\nrun-tests.php\nautom4te.cache\nnbproject/\n.project\ntest2.php\nrecompile.sh\n\n"
  },
  {
    "path": ".travis.yml",
    "content": "language: c\nsudo: required\ncompiler:\n    - gcc\nos:\n    - linux\n\nbefore_install:\n    - sudo apt-get update -qq\n    - sudo apt-get install -y php5-dev php5-cli\nscript: ./travis.sh\n"
  },
  {
    "path": "CREDITS",
    "content": "phpkafka\nAleksandar Babic\nPatrick Reilly\nElias Van Ootegem\n"
  },
  {
    "path": "EXPERIMENTAL",
    "content": ""
  },
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\r\n\r\nCopyright (c) 2014 Aleksandar Babic\r\n\r\nPermission is hereby granted, free of charge, to any person obtaining a copy of\r\nthis software and associated documentation files (the \"Software\"), to deal in\r\nthe Software without restriction, including without limitation the rights to\r\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of\r\nthe Software, and to permit persons to whom the Software is furnished to do so,\r\nsubject to the following conditions:\r\n\r\nThe above copyright notice and this permission notice shall be included in all\r\ncopies or substantial portions of the Software.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\r\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS\r\nFOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR\r\nCOPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER\r\nIN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN\r\nCONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\r\n"
  },
  {
    "path": "README.md",
    "content": "Master build: [![Build Status](https://travis-ci.org/EVODelavega/phpkafka.svg?branch=master)](https://travis-ci.org/EVODelavega/phpkafka)\n\nDev build: [![Build Status](https://travis-ci.org/EVODelavega/phpkafka.svg?branch=consume-with-meta)](https://travis-ci.org/EVODelavega/phpkafka)\n\nSRP build: [![Build Status](https://travis-ci.org/EVODelavega/phpkafka.svg?branch=feature%2FSRP)](https://travis-ci.org/EVODelavega/phpkafka)\n\n## Common issues:\n\nHere's a short list of common issues people run into when installing this extension (only one issue so far).\n\n#### _\"Unable to load dynamic library '/usr/lib64/php/modules/kafka.so' - librdkafka.so.1\"_\n\nWhat this basically means is that PHP can't find the shared object (librdkafka) anywhere. Thankfully, the fix is trivial.\nFirst, make sure you've actually compiled and installed librdkafka. Then run these commands:\n\n```bash\nsudo updatedb\nlocate librdkafka.so.1 # locate might not exist on some systems, like Slackware, which uses slocate\n```\n\nThe output should show a full path to the `librdkafka.so.1` file, probably _\"/usr/local/lib/librdkafka.so.1\"_. Edit `/etc/ld.so.conf` to make sure _\"/usr/local/lib\"_ is included when searching for libraries. Either add it directly to the aforementioned file, or, if your system uses a /etc/ld.so.conf.d/ directory, create a new .conf file there:\n\n```bash\nsudo touch /etc/ld.so.conf.d/librd.conf\necho \"/usr/local/lib\" >> /etc/ld.so.conf.d/librd.conf\n```\n\nOr simply type `vim /etc/ld.so.conf.d/librd.conf`; when the editor opens, tap _\":\"_ (colon) and run the command `read !locate librdkafka.so.1`, then delete the filename from the path (move your cursor to the last `/` of the line that just appeared in the file and type `d$` to delete until the end of the line). Save and close the file (`:wq`).\n\nOnce updated, run the following:\n\n```bash\nsudo ldconfig\n```\n\n\n_Note:_\n\nWhatever gets merged into the master branch should work just fine. 
The main dev build is where small tweaks, bugfixes and minor improvements are tested (i.e. a sort-of beta branch).\n\nThe SRP build is a long-term dev branch, where I'm currently in the process of separating the monolithic `Kafka` class into various logical sub-classes (a `KafkaTopic` class, perhaps a `KafkaMeta` object, `KafkaConfig` is another candidate...) to make this extension as intuitive as I can.\n\n# This fork is still being actively developed\n\nGiven that the original repo is no longer supported by the author, I've decided to keep working on this PHP-Kafka extension instead.\nThe branch where most of the work is being done is the `consume-with-meta` branch.\n\nChanges that have happened thus far:\n\n* Timeout when disconnecting is reduced (significantly)\n* Connections can be closed as you go\n* The librdkafka meta API is used\n* New methods added (`Kafka::getTopics` and `Kafka::getPartitionsFor($topic)` being the most notable additions)\n* `Kafka::set_partition` is deprecated, in favour of the more PSR-compliant `Kafka::setPartition` method\n* A PHP stub was added for IDE code-completion\n* Argument checks were added, and exceptions are thrown in some places\n* Class constants for an easier API (`Kafka::OFFSET_*`)\n* The extension logged everything in `/var/etc/syslog`; this is still the default behaviour (as this extension is under development), but it can be turned off (`Kafka::setLogLevel(Kafka::LOG_OFF)`)\n* Exceptions (`KafkaException`) in case of errors (still a work in progress, though)\n* Thread-safe Kafka connections\n* Easy configuration: pass an array of options to the constructor, `setBrokers` or `setOptions` method, like you would with `PDO`\n* Compression support added (when producing messages, a compressed message is returned _as-is_)\n* Each instance holds 2 distinct connections (at most): a producer and a consumer\n* CI (travis), though there is a lot of work to be done putting together useful tests\n\nChanges that are on the _TODO_ list include:\n\n* 
Separating Kafka meta information out into separate classes (`KafkaTopic` and `KafkaMessage`)\n* Allow PHP to determine what the timeouts should be (mainly when disconnecting, or producing messages) (do we still need this?)\n* Add custom exceptions (partially done)\n* Overall API improvements (!!)\n* Performance - it's what you make of it (test results varied from 2 messages/sec to 2.5 million messages per second - see examples below)\n* Adding tests to the build (very much a work in progress)\n* PHP7 support\n\nAll help is welcome, of course...\n\n\nPHP extension for **Apache Kafka 0.8**. It's built on top of the Kafka C driver ([librdkafka](https://github.com/edenhill/librdkafka/)).\nThis extension requires librdkafka version 0.8.6 (Ubuntu's librdkafka packages won't do, as they do not implement the meta API yet).\n\nIMPORTANT: this library is in heavy development and some features are not implemented yet.\n\nRequirements:\n-------------\nDownload and install [librdkafka](https://github.com/edenhill/librdkafka/). Run `sudo ldconfig` to update the shared libraries.\n\nInstalling the PHP extension:\n----------\n```bash\nphpize\n./configure --enable-kafka\nmake\nsudo make install\nsudo sh -c 'echo \"extension=kafka.so\" >> /etc/php5/conf.d/kafka.ini'\n#For CLI mode:\nsudo sh -c 'echo \"extension=kafka.so\" >> /etc/php5/cli/conf.d/20-kafka.ini'\n```\n\nExamples:\n--------\n```php\n// Produce a message\n$kafka = new Kafka(\"localhost:9092\");\n$kafka->produce(\"topic_name\", \"message content\");\n//get all the available partitions\n$partitions = $kafka->getPartitionsForTopic('topic_name');\n//use it to OPTIONALLY specify a partition to consume from\n//if not, consuming IS slower. 
To set the partition:\n$kafka->setPartition($partitions[0]);//set to first partition\n//then consume, for example, starting with the first offset, consume 20 messages\n$msg = $kafka->consume(\"topic_name\", Kafka::OFFSET_BEGIN, 20);\nvar_dump($msg);//dumps array of messages\n```\n\nA more complete example of how to use this extension if performance is what you're after:\n\n```php\n$kafka = new Kafka(\n    'broker-1:9092,broker-2:9092',\n    [\n        Kafka::LOGLEVEL         => Kafka::LOG_OFF,//while in dev, default is Kafka::LOG_ON\n        Kafka::CONFIRM_DELIVERY => Kafka::CONFIRM_OFF,//default is Kafka::CONFIRM_BASIC\n        Kafka::RETRY_COUNT      => 1,//default is 3\n        Kafka::RETRY_INTERVAL   => 25,//default is 100\n    ]\n);\n$fh = fopen('big_data_file.csv', 'r');\nif (!$fh)\n    exit(1);\n$count = 0;\n$lines = [];\nwhile ($line = fgets($fh, 2048))\n{\n    $lines[] = trim($line);\n    ++$count;\n    if ($count >= 200)\n    {\n        $kafka->produceBatch('my_topic', $lines);\n        $lines = [];\n        $count = 0;\n        //in theory, the next bit is optional, but Kafka::disconnect\n        //waits for the out queue to be empty before closing connections\n        //it's a way to sort-of ensure messages are delivered, even though Kafka::CONFIRM_DELIVERY\n        //was set to Kafka::CONFIRM_OFF... 
This approach can be used to speed up your code\n        $kafka->disconnect(Kafka::MODE_PRODUCER);//disconnect the producer\n    }\n}\nif ($count)\n{\n    $kafka->produceBatch('my_topic', $lines);\n}\n$kafka->disconnect();//disconnects all opened connections; in this case, only a producer connection will exist, though\n```\n\nI've used code very similar to the above to produce ~3 million messages, and got an average throughput rate of 2000 messages/second.\nRemoving the disconnect call, or increasing the batch size, will change the rate at which messages get produced.\n\nNot disconnecting at all yielded the best performance (by far): 2.5 million messages in just over 1 second (though depending on the output buffer, and how Kafka is set up to handle a full produce queue, this is not recommended!).\n"
  },
  {
    "path": "kafka.c",
    "content": "/**\n *  Copyright 2015 Elias Van Ootegem.\n *\n *  Licensed under the Apache License, Version 2.0 (the \"License\");\n *  you may not use this file except in compliance with the License.\n *  You may obtain a copy of the License at\n *\n *  http://www.apache.org/licenses/LICENSE-2.0\n *\n *  Unless required by applicable law or agreed to in writing, software\n *  distributed under the License is distributed on an \"AS IS\" BASIS,\n *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n *  See the License for the specific language governing permissions and\n *  limitations under the License.\n *\n * Special thanks to Patrick Reilly and Aleksandar Babic for their work,\n * on which this extension was actually built.\n */\n#include <php.h>\n#include \"kafka.h\"\n#include <php_kafka.h>\n#include <inttypes.h>\n#include <ctype.h>\n#include <string.h>\n#include <unistd.h>\n#include <stdlib.h>\n#include <syslog.h>\n#include <sys/time.h>\n#include <errno.h>\n#include <time.h>\n#include \"librdkafka/rdkafka.h\"\n\nstruct consume_cb_params {\n    int read_count;\n    zval *return_value;\n    union {\n        int *partition_ends;\n        long *partition_offset;\n    };\n    int error_count;\n    int eop;\n    int auto_commit;\n};\n\nstruct produce_cb_params {\n    int msg_count;\n    int err_count;\n    int offset;\n    int partition;\n    int errmsg_len;\n    char *err_msg;\n};\n\nstatic int log_level = 1;\nstatic rd_kafka_t *rk = NULL;\nstatic rd_kafka_type_t rk_type;\nchar *brokers = \"localhost:9092\";\nint partition = RD_KAFKA_PARTITION_UA;\n\nvoid kafka_connect(char *brokers)\n{\n    kafka_setup(brokers);\n}\n\nvoid kafka_set_log_level( int ll )\n{\n    log_level = ll;\n}\n\nvoid kafka_msg_delivered (rd_kafka_t *rk,\n                           void *payload, size_t len,\n                           int error_code,\n                           void *opaque, void *msg_opaque)\n{\n    if (error_code && log_level) 
{\n        openlog(\"phpkafka\", 0, LOG_USER);\n        syslog(LOG_INFO, \"phpkafka - Message delivery failed: %s\",\n                rd_kafka_err2str(error_code));\n    }\n}\n\nvoid kafka_err_cb (rd_kafka_t *rk, int err, const char *reason, void *opaque)\n{\n    if (log_level) {\n        openlog(\"phpkafka\", 0, LOG_USER);\n        syslog(LOG_INFO, \"phpkafka - ERROR CALLBACK: %s: %s: %s\\n\",\n            rd_kafka_name(rk), rd_kafka_err2str(err), reason);\n    }\n    if (rk)\n        rd_kafka_destroy(rk);\n}\n\nvoid kafka_produce_cb_simple(rd_kafka_t *rk, void *payload, size_t len, int err_code, void *opaque, void *msg_opaque)\n{\n    struct produce_cb_params *params = msg_opaque;\n    if (params)\n    {\n        params->msg_count -= 1;\n        //only count actual delivery failures\n        if (err_code)\n            params->err_count += 1;\n    }\n    if (log_level)\n    {\n        openlog(\"phpkafka\", 0, LOG_USER);\n        if (err_code)\n            syslog(LOG_ERR, \"Failed to deliver message %s: %s\", (char *) payload, rd_kafka_err2str(err_code));\n        else\n            syslog(LOG_DEBUG, \"Successfully delivered message (%zu bytes)\", len);\n    }\n}\n\nvoid kafka_produce_detailed_cb(rd_kafka_t *rk, const rd_kafka_message_t *msg, void *opaque)\n{\n    struct produce_cb_params *params = opaque;\n    if (params)\n    {\n        params->msg_count -= 1;\n    }\n    if (msg->err)\n    {\n        const char *errstr = rd_kafka_message_errstr(msg);\n        int err_len = strlen(errstr);\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"Failed to deliver message: %s\", errstr);\n        }\n        if (params)\n        {\n            int offset = params->errmsg_len;\n            params->err_count += 1;\n            params->err_msg = realloc(\n                params->err_msg,\n                offset + err_len + 2\n            );\n            if (params->err_msg == NULL)\n            {\n                
params->errmsg_len = 0;\n            }\n            else\n            {\n                strcpy(\n                    params->err_msg + offset,\n                    errstr\n                );\n                offset += err_len;//get new strlen\n                params->err_msg[offset] = '\\n';//add new line\n                ++offset;\n                params->err_msg[offset] = '\\0';//ensure zero terminated string\n            }\n        }\n        return;\n    }\n    if (params)\n    {\n        params->offset = msg->offset;\n        params->partition = msg->partition;\n    }\n}\n\nrd_kafka_t *kafka_get_connection(kafka_connection_params params, const char *brokers)\n{\n    rd_kafka_t *r = NULL;\n    char errstr[512];\n    rd_kafka_conf_t *conf = rd_kafka_conf_new();\n    //set error callback\n    rd_kafka_conf_set_error_cb(conf, kafka_err_cb);\n    if (params.type == RD_KAFKA_CONSUMER)\n    {\n        if (params.queue_buffer)\n            rd_kafka_conf_set(conf, \"queued.min.messages\", params.queue_buffer, NULL, 0);\n        r = rd_kafka_new(params.type, conf, errstr, sizeof errstr);\n        if (!r)\n        {\n            if (params.log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_ERR, \"Failed to connect to kafka: %s\", errstr);\n            }\n            //destroy config, no connection to use it...\n            rd_kafka_conf_destroy(conf);\n            return NULL;\n        }\n        if (!rd_kafka_brokers_add(r, brokers))\n        {\n            if (params.log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_ERR, \"Failed to connect to brokers %s\", brokers);\n            }\n            rd_kafka_destroy(r);\n            return NULL;\n        }\n        return r;\n    }\n    if (params.compression)\n    {\n        rd_kafka_conf_res_t result = rd_kafka_conf_set(\n            conf, \"compression.codec\",params.compression, errstr, sizeof errstr\n       
 );\n        if (result != RD_KAFKA_CONF_OK)\n        {\n            if (params.log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_ALERT, \"Failed to set compression %s: %s\", params.compression, errstr);\n            }\n            rd_kafka_conf_destroy(conf);\n            return NULL;\n        }\n    }\n    if (params.retry_count)\n    {\n        rd_kafka_conf_res_t result = rd_kafka_conf_set(\n            conf, \"message.send.max.retries\", params.retry_count, errstr, sizeof errstr\n        );\n        if (result != RD_KAFKA_CONF_OK)\n        {\n            if (params.log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_ALERT, \"Failed to set message.send.max.retries to %s: %s\", params.retry_count, errstr);\n            }\n            rd_kafka_conf_destroy(conf);\n            return NULL;\n        }\n    }\n    if (params.retry_interval)\n    {\n        rd_kafka_conf_res_t result = rd_kafka_conf_set(\n            conf, \"retry.backoff.ms\", params.retry_interval, errstr, sizeof errstr\n        );\n        if (result != RD_KAFKA_CONF_OK)\n        {\n            if (params.log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_ALERT, \"Failed to set retry.backoff.ms to %s: %s\", params.retry_interval, errstr);\n            }\n            rd_kafka_conf_destroy(conf);\n            return NULL;\n        }\n    }\n    if (params.reporting == 1)\n        rd_kafka_conf_set_dr_cb(conf, kafka_produce_cb_simple);\n    else if (params.reporting == 2)\n        rd_kafka_conf_set_dr_msg_cb(conf, kafka_produce_detailed_cb);\n    r = rd_kafka_new(params.type, conf, errstr, sizeof errstr);\n    if (!r)\n    {\n        if (params.log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"Failed to connect to kafka: %s\", errstr);\n        }\n        //destroy config, no connection to use it...\n        
rd_kafka_conf_destroy(conf);\n        return NULL;\n    }\n    if (!rd_kafka_brokers_add(r, brokers))\n    {\n        if (params.log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"Failed to connect to brokers %s\", brokers);\n        }\n        rd_kafka_destroy(r);\n        return NULL;\n    }\n    return r;\n}\n\nrd_kafka_t *kafka_set_connection(rd_kafka_type_t type, const char *b, int report_level, const char *compression)\n{\n    rd_kafka_t *r = NULL;\n    char errstr[512];\n    rd_kafka_conf_t *conf = rd_kafka_conf_new();\n    /* Configure conf first: rd_kafka_new() takes ownership of it,\n     * so setting options or callbacks afterwards has no effect. */\n    if (type == RD_KAFKA_PRODUCER)\n    {\n        if (compression && strcmp(compression, \"none\"))\n        {//silently fail on error ATM...\n            if (RD_KAFKA_CONF_OK != rd_kafka_conf_set(conf, \"compression.codec\", compression, errstr, sizeof errstr))\n            {\n                if (log_level)\n                {\n                    openlog(\"phpkafka\", 0, LOG_USER);\n                    syslog(LOG_INFO, \"Failed to set compression to %s\", compression);\n                }\n            }\n        }\n        /* Set up a message delivery report callback.\n         * It will be called once for each message, either on successful\n         * delivery to broker, or upon failure to deliver to broker. */\n        if (report_level == 1)\n            rd_kafka_conf_set_dr_cb(conf, kafka_produce_cb_simple);\n        else if (report_level == 2)\n            rd_kafka_conf_set_dr_msg_cb(conf, kafka_produce_detailed_cb);\n    }\n    rd_kafka_conf_set_error_cb(conf, kafka_err_cb);\n    if (!(r = rd_kafka_new(type, conf, errstr, sizeof(errstr)))) {\n        if (log_level) {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_INFO, \"phpkafka - failed to create new producer: %s\", errstr);\n        }\n        exit(1);\n    }\n    /* Add brokers */\n    if (rd_kafka_brokers_add(r, b) == 0) {\n        if (log_level) {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_INFO, \"php kafka - No valid brokers specified\");\n        }\n        exit(1);\n    }\n\n    if (log_level) {\n        openlog(\"phpkafka\", 0, LOG_USER);\n        syslog(LOG_INFO, \"phpkafka - using: %s\", b);\n    }\n    return r;\n}\n\nvoid kafka_set_partition(int partition_selected)\n{\n    partition = partition_selected;\n}\n\nvoid kafka_setup(char* brokers_list)\n{\n    brokers = brokers_list;\n}\n\nvoid kafka_destroy(rd_kafka_t *r, int timeout)\n{\n    if(r != NULL)\n    {\n        //poll handle status\n        rd_kafka_poll(r, 0);\n        if (rd_kafka_outq_len(r) > 0)\n        {//wait for out-queue to clear\n            while(rd_kafka_outq_len(r) > 0)\n                rd_kafka_poll(r, timeout);\n            timeout = 1;\n        }\n        rd_kafka_destroy(r);\n        //this wait is blocking PHP\n        //not calling it will yield segfault, though\n        rd_kafka_wait_destroyed(timeout);\n        r = NULL;\n    }\n}\n\n//We're no longer relying on the global rk variable (not thread-safe)\nstatic void kafka_init( rd_kafka_type_t type )\n{\n    if (rk && type != rk_type)\n    {\n        rd_kafka_destroy(rk);\n        rk = NULL;\n    }\n    if (rk == NULL)\n    {\n        char errstr[512];\n        rd_kafka_conf_t *conf = rd_kafka_conf_new();\n       
 /* Set up a message delivery report callback.\n         * It will be called once for each message, either on successful\n         * delivery to broker, or upon failure to deliver to broker.\n         * Callbacks must be configured before rd_kafka_new() consumes conf. */\n        if (type == RD_KAFKA_PRODUCER)\n            rd_kafka_conf_set_dr_cb(conf, kafka_produce_cb_simple);\n        rd_kafka_conf_set_error_cb(conf, kafka_err_cb);\n        if (!(rk = rd_kafka_new(type, conf, errstr, sizeof(errstr)))) {\n            if (log_level) {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_INFO, \"phpkafka - failed to create new producer: %s\", errstr);\n            }\n            exit(1);\n        }\n        //track the connection type for the rk_type check above\n        rk_type = type;\n        /* Add brokers */\n        if (rd_kafka_brokers_add(rk, brokers) == 0) {\n            if (log_level) {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_INFO, \"php kafka - No valid brokers specified\");\n            }\n            exit(1);\n        }\n\n        if (log_level) {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_INFO, \"phpkafka - using: %s\", brokers);\n        }\n    }\n}\n\nint kafka_produce_report(rd_kafka_t *r, const char *topic, char *msg, int msg_len, long timeout)\n{\n    char errstr[512];\n    rd_kafka_topic_t *rkt = NULL;\n    int partition = RD_KAFKA_PARTITION_UA;\n    rd_kafka_topic_conf_t *conf = NULL;\n    struct produce_cb_params pcb = {1, 0, 0, 0, 0, NULL};\n\n    if (r == NULL)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"No connection provided to produce to topic %s\", topic);\n        }\n        return -2;\n    }\n\n    /* Topic configuration */\n    conf = rd_kafka_topic_conf_new();\n\n    rd_kafka_topic_conf_set(conf, \"produce.offset.report\", \"true\", errstr, sizeof errstr );\n\n    char timeoutStr[64];\n    snprintf(timeoutStr, 64, \"%lu\", timeout);\n    if (rd_kafka_topic_conf_set(conf, \"message.timeout.ms\", timeoutStr, errstr, 
sizeof(errstr)) != RD_KAFKA_CONF_OK)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(\n                LOG_ERR,\n                \"Failed to configure topic param 'message.timeout.ms' to %lu before producing; config err was: %s\",\n                timeout,\n                errstr\n            );\n        }\n        rd_kafka_topic_conf_destroy(conf);\n        return -3;\n    }\n\n    //callback already set in kafka_set_connection\n    rkt = rd_kafka_topic_new(r, topic, conf);\n    if (!rkt)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"Failed to open topic %s\", topic);\n        }\n        rd_kafka_topic_conf_destroy(conf);\n        return -1;\n    }\n\n    //begin producing:\n    if (rd_kafka_produce(rkt, partition, RD_KAFKA_MSG_F_COPY, msg, msg_len, NULL, 0, &pcb) == -1)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"Failed to produce message: %s\", rd_kafka_err2str(rd_kafka_errno2err(errno)));\n        }\n        //handle delivery response (callback) on THIS handle, not the global one\n        rd_kafka_poll(r, 0);\n        rd_kafka_topic_destroy(rkt);\n        return -1;\n    }\n    rd_kafka_poll(r, 0);\n    while(pcb.msg_count && rd_kafka_outq_len(r) > 0)\n        rd_kafka_poll(r, 10);\n    rd_kafka_topic_destroy(rkt);\n    return 0;\n}\n\nint kafka_produce_batch(rd_kafka_t *r, char *topic, char **msg, int *msg_len, int msg_cnt, int report, long timeout)\n{\n    char errstr[512];\n    rd_kafka_topic_t *rkt;\n    struct produce_cb_params pcb = {msg_cnt, 0, 0, 0, 0, NULL};\n    void *opaque;\n    int partition = RD_KAFKA_PARTITION_UA;\n    int i,\n        err_cnt = 0;\n\n    if (report)\n        opaque = &pcb;\n    else\n        opaque = NULL;\n    rd_kafka_topic_conf_t *topic_conf;\n\n    if (r == NULL)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, 
LOG_USER);\n            syslog(LOG_ERR, \"phpkafka - no connection to produce to topic: %s\", topic);\n        }\n        return -2;\n    }\n\n    /* Topic configuration */\n    topic_conf = rd_kafka_topic_conf_new();\n\n    char timeoutStr[64];\n    snprintf(timeoutStr, 64, \"%lu\", timeout);\n    if (rd_kafka_topic_conf_set(topic_conf, \"message.timeout.ms\", timeoutStr, errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(\n                LOG_ERR,\n                \"Failed to configure topic param 'message.timeout.ms' to %lu before producing; config err was: %s\",\n                timeout,\n                errstr\n            );\n        }\n        rd_kafka_topic_conf_destroy(topic_conf);\n        return -3;\n    }\n\n    /* Create topic */\n    rkt = rd_kafka_topic_new(r, topic, topic_conf);\n\n    //do we have VLA?\n    rd_kafka_message_t *messages = calloc(sizeof *messages, msg_cnt);\n    if (messages == NULL)\n    {//fallback to individual produce calls\n        for (i=0;i<msg_cnt;++i)\n        {\n            if (rd_kafka_produce(rkt, partition, RD_KAFKA_MSG_F_COPY, msg[i], msg_len[i], NULL, 0, opaque) == -1)\n            {\n                if (log_level)\n                {\n                    openlog(\"phpkafka\", 0, LOG_USER);\n                    syslog(LOG_INFO, \"phpkafka - %% Failed to produce to topic %s \"\n                        \"partition %i: %s\",\n                        rd_kafka_topic_name(rkt), partition,\n                        rd_kafka_err2str(\n                        rd_kafka_errno2err(errno)));\n                }\n            }\n        }\n    }\n    else\n    {\n        for (i=0;i<msg_cnt;++i)\n        {\n            messages[i].payload = msg[i];\n            messages[i].len = msg_len[i];\n        }\n        i = rd_kafka_produce_batch(rkt, partition, RD_KAFKA_MSG_F_COPY, messages, msg_cnt);\n        if (i < msg_cnt)\n        {\n     
       if (log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_WARNING, \"Failed to queue full message batch, %d of %d were put in queue\", i, msg_cnt);\n            }\n        }\n        err_cnt = msg_cnt - i;\n        free(messages);\n        messages = NULL;\n    }\n    /* Poll to handle delivery reports */\n    rd_kafka_poll(r, 0);\n\n    /* Wait for messages to be delivered */\n    while (report && pcb.msg_count && rd_kafka_outq_len(r) > 0)\n        rd_kafka_poll(r, 10);\n\n    //set global to NULL again\n    rd_kafka_topic_destroy(rkt);\n    if (report)\n        err_cnt = pcb.err_count;\n    return err_cnt;\n}\n\nint kafka_produce(rd_kafka_t *r, char* topic, char* msg, int msg_len, int report, long timeout)\n{\n\n    char errstr[512];\n    rd_kafka_topic_t *rkt;\n    struct produce_cb_params pcb = {1, 0, 0, 0, 0, NULL};\n    void *opaque;\n    int partition = RD_KAFKA_PARTITION_UA;\n\n    //decide whether to pass callback params or not...\n    if (report)\n        opaque = &pcb;\n    else\n        opaque = NULL;\n\n    rd_kafka_topic_conf_t *topic_conf;\n\n    if (r == NULL)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"phpkafka - no connection to produce to topic: %s\", topic);\n        }\n        return -2;\n    }\n\n    /* Topic configuration */\n    topic_conf = rd_kafka_topic_conf_new();\n\n    char timeoutStr[64];\n    snprintf(timeoutStr, 64, \"%lu\", timeout);\n    if (rd_kafka_topic_conf_set(topic_conf, \"message.timeout.ms\", timeoutStr, errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(\n                LOG_ERR,\n                \"Failed to configure topic param 'message.timeout.ms' to %lu before producing; config err was: %s\",\n                timeout,\n                errstr\n            );\n        }\n        
rd_kafka_topic_conf_destroy(topic_conf);\n        return -3;\n    }\n\n    /* Create topic */\n    rkt = rd_kafka_topic_new(r, topic, topic_conf);\n\n    if (rd_kafka_produce(rkt, partition,\n                     RD_KAFKA_MSG_F_COPY,\n                     /* Payload and length */\n                     msg, msg_len,\n                     /* Optional key and its length */\n                     NULL, 0,\n                     /* Message opaque, provided in\n                      * delivery report callback as\n                      * msg_opaque. */\n                     opaque) == -1) {\n       if (log_level) {\n           openlog(\"phpkafka\", 0, LOG_USER);\n           syslog(LOG_INFO, \"phpkafka - %% Failed to produce to topic %s \"\n               \"partition %i: %s\",\n               rd_kafka_topic_name(rkt), partition,\n               rd_kafka_err2str(\n               rd_kafka_errno2err(errno)));\n        }\n       rd_kafka_topic_destroy(rkt);\n       return -1;\n    }\n\n    /* Poll to handle delivery reports */\n    rd_kafka_poll(r, 0);\n\n    /* Wait for messages to be delivered */\n    while (report && pcb.msg_count && rd_kafka_outq_len(r) > 0)\n      rd_kafka_poll(r, 10);\n\n    //set global to NULL again\n    rd_kafka_topic_destroy(rkt);\n    return 0;\n}\n\nstatic\nvoid offset_queue_consume(rd_kafka_message_t *message, void *opaque)\n{\n    struct consume_cb_params *params = opaque;\n    if (params->eop == 0)\n        return;\n    if (message->err)\n    {\n        params->error_count += 1;\n        if (params->auto_commit == 0)\n            rd_kafka_offset_store(\n                message->rkt,\n                message->partition,\n                message->offset == 0 ? 
0 : message->offset -1\n            );\n        if (message->err == RD_KAFKA_RESP_ERR__PARTITION_EOF)\n        {\n            if (params->partition_offset[message->partition] == -2)\n            {//no previous message read from this partition\n             //set offset value to last possible value (-1 or last existing)\n             //reduce eop count\n                params->eop -= 1;\n                params->read_count += 1;\n                params->partition_offset[message->partition] = message->offset -1;\n            }\n            if (log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_INFO,\n                    \"phpkafka - %% Consumer reached end of %s [%\"PRId32\"] \"\n                    \"message queue at offset %\"PRId64\"\\n\",\n                    rd_kafka_topic_name(message->rkt),\n                    message->partition, message->offset);\n            }\n        }\n        return;\n    }\n    if (params->partition_offset[message->partition] == -1)\n        params->eop -= 1;\n    //we have an offset, save it\n    params->partition_offset[message->partition] = message->offset;\n    //tally read_count\n    params->read_count += 1;\n    if (params->auto_commit == 0)\n        rd_kafka_offset_store(\n            message->rkt,\n            message->partition,\n            message->offset == 0 ? 
0 : message->offset -1\n        );\n}\n\nstatic\nvoid queue_consume(rd_kafka_message_t *message, void *opaque)\n{\n    struct consume_cb_params *params = opaque;\n    zval *return_value = params->return_value;\n    //all partitions EOF\n    if (params->eop < 1)\n        return;\n    //nothing more to read...\n    if (params->read_count == 0)\n        return;\n    if (message->err)\n    {\n        params->error_count += 1;\n        //if auto-commit is disabled:\n        if (params->auto_commit == 0)\n            //store offset\n            rd_kafka_offset_store(\n                message->rkt,\n                message->partition,\n                message->offset == 0 ? 0 : message->offset -1\n            );\n        if (message->err == RD_KAFKA_RESP_ERR__PARTITION_EOF)\n        {\n            if (params->partition_ends[message->partition] == 0)\n            {\n                params->eop -= 1;\n                params->partition_ends[message->partition] = 1;\n            }\n            if (log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_INFO,\n                    \"phpkafka - %% Consumer reached end of %s [%\"PRId32\"] \"\n                    \"message queue at offset %\"PRId64\"\\n\",\n                    rd_kafka_topic_name(message->rkt),\n                    message->partition, message->offset);\n            }\n            return;\n        }\n        //add_next_index_string(return_value, rd_kafka_message_errstr(message), 1);\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_INFO, \"phpkafka - %% Consume error for topic \\\"%s\\\" [%\"PRId32\"] \"\n                \"offset %\"PRId64\": %s\\n\",\n                rd_kafka_topic_name(message->rkt),\n                message->partition,\n                message->offset,\n                rd_kafka_message_errstr(message)\n            );\n        }\n        return;\n    }\n    //only count successful reads!\n 
   //-1 means read all from offset until end\n    if (params->read_count != -1)\n        params->read_count -= 1;\n    //add message to return value (perhaps add as array -> offset + msg?\n    if (message->len > 0) {\n        add_next_index_stringl(\n            return_value,\n            (char *) message->payload,\n            (int) message->len,\n            1\n        );\n    } else {\n        add_next_index_string(return_value, \"\", 1);\n    }\n\n    //store offset if autocommit is disabled\n    if (params->auto_commit == 0)\n        rd_kafka_offset_store(\n            message->rkt,\n            message->partition,\n            message->offset\n        );\n}\n\nstatic rd_kafka_message_t *msg_consume(rd_kafka_message_t *rkmessage,\n       void *opaque)\n{\n    int *run = opaque;\n    if (rkmessage->err)\n    {\n        *run = 0;\n        if (rkmessage->err == RD_KAFKA_RESP_ERR__PARTITION_EOF)\n        {\n            if (log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_INFO,\n                    \"phpkafka - %% Consumer reached end of %s [%\"PRId32\"] \"\n                    \"message queue at offset %\"PRId64\"\\n\",\n                    rd_kafka_topic_name(rkmessage->rkt),\n                    rkmessage->partition, rkmessage->offset);\n            }\n            return NULL;\n        }\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_INFO, \"phpkafka - %% Consume error for topic \\\"%s\\\" [%\"PRId32\"] \"\n                \"offset %\"PRId64\": %s\\n\",\n                rd_kafka_topic_name(rkmessage->rkt),\n                rkmessage->partition,\n                rkmessage->offset,\n                rd_kafka_message_errstr(rkmessage)\n            );\n        }\n        return NULL;\n    }\n\n    return rkmessage;\n}\n\n//get topics + partition count\nvoid kafka_get_topics(rd_kafka_t *r, zval *return_value)\n{\n    int i;\n    const struct 
rd_kafka_metadata *meta = NULL;\n    if (r == NULL)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"phpkafka - no connection to get topics\");\n        }\n        return;\n    }\n    if (RD_KAFKA_RESP_ERR_NO_ERROR == rd_kafka_metadata(r, 1, NULL, &meta, 200)) {\n        for (i=0;i<meta->topic_cnt;++i) {\n            add_assoc_long(\n               return_value,\n               meta->topics[i].topic,\n               (long) meta->topics[i].partition_cnt\n            );\n        }\n    }\n    if (meta) {\n        rd_kafka_metadata_destroy(meta);\n    }\n}\n\nstatic\nint kafka_partition_count(rd_kafka_t *r, const char *topic)\n{\n    rd_kafka_topic_t *rkt;\n    rd_kafka_topic_conf_t *conf;\n    int i;//C89 compliant\n    //connect as consumer if required\n    if (r == NULL)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"phpkafka - no connection to get partition count for topic: %s\", topic);\n        }\n        return -1;\n    }\n    /* Topic configuration */\n    conf = rd_kafka_topic_conf_new();\n\n    /* Create topic */\n    rkt = rd_kafka_topic_new(r, topic, conf);\n    //metadata API required rd_kafka_metadata_t** to be passed\n    const struct rd_kafka_metadata *meta = NULL;\n    if (RD_KAFKA_RESP_ERR_NO_ERROR == rd_kafka_metadata(r, 0, rkt, &meta, 200))\n        i = (int) meta->topics->partition_cnt;\n    else\n        i = 0;\n    if (meta) {\n        rd_kafka_metadata_destroy(meta);\n    }\n    rd_kafka_topic_destroy(rkt);\n    return i;\n}\n\n//get the available partitions for a given topic\nvoid kafka_get_partitions(rd_kafka_t *r, zval *return_value, char *topic)\n{\n    //we need a connection!\n    if (r == NULL)\n        return;\n    int i, count = kafka_partition_count(r, topic);\n    for (i=0;i<count;++i) {\n        add_next_index_long(return_value, i);\n    }\n}\n\n/**\n * @brief Get all partitions for topic 
and their beginning offsets, useful\n * if we're consuming messages without knowing the actual partition beforehand\n * @param int **partitions should be pointer to NULL, will be allocated here\n * @param const char * topic topic name\n * @return int (0 == meta error, -2: no connection, -1: allocation error, all others indicate success (nr of elems in array))\n */\nint kafka_partition_offsets(rd_kafka_t *r, long **partitions, const char *topic)\n{\n    rd_kafka_topic_t *rkt = NULL;\n    rd_kafka_topic_conf_t *conf = NULL;\n    rd_kafka_queue_t *rkqu = NULL;\n    struct consume_cb_params cb_params = {0, NULL, NULL, 0, 0, 0};\n    int i = 0;\n    //make life easier, 1 level of indirection...\n    long *values = *partitions;\n    //connect as consumer if required\n    if (r == NULL)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"phpkafka - no connection to get offsets of topic: %s\", topic);\n        }\n        return -2;\n    }\n    /* Topic configuration */\n    conf = rd_kafka_topic_conf_new();\n\n    /* Create topic */\n    rkt = rd_kafka_topic_new(r, topic, conf);\n    rkqu = rd_kafka_queue_new(r);\n    const struct rd_kafka_metadata *meta = NULL;\n    if (RD_KAFKA_RESP_ERR_NO_ERROR == rd_kafka_metadata(r, 0, rkt, &meta, 5))\n    {\n        values = realloc(values, meta->topics->partition_cnt * sizeof *values);\n        if (values == NULL)\n        {\n            *partitions = values;//possible corrupted pointer now\n            //free metadata, return error\n            rd_kafka_metadata_destroy(meta);\n            return -1;\n        }\n        //we need eop to reach 0, if there are 4 partitions, start at 3 (0, 1, 2, 3)\n        cb_params.eop = meta->topics->partition_cnt -1;\n        cb_params.partition_offset = values;\n        for (i=0;i<meta->topics->partition_cnt;++i)\n        {\n            //initialize: set to -2 for callback\n            values[i] = -2;\n            if 
(rd_kafka_consume_start_queue(rkt, meta->topics->partitions[i].id, RD_KAFKA_OFFSET_BEGINNING, rkqu))\n            {\n                if (log_level)\n                {\n                    openlog(\"phpkafka\", 0, LOG_USER);\n                    syslog(LOG_ERR,\n                        \"Failed to start consuming topic %s [%\"PRId32\"]\",\n                        topic, meta->topics->partitions[i].id\n                    );\n                }\n                continue;\n            }\n        }\n        //either eop reached 0, or the read errors >= nr of partitions\n        //either way, we've consumed a message from each partition, and therefore, we're done\n        while(cb_params.eop && cb_params.error_count < meta->topics->partition_cnt)\n            rd_kafka_consume_callback_queue(rkqu, 100, offset_queue_consume, &cb_params);\n        //stop consuming for all partitions\n        for (i=0;i<meta->topics->partition_cnt;++i)\n            rd_kafka_consume_stop(rkt, meta->topics[0].partitions[i].id);\n        rd_kafka_queue_destroy(rkqu);\n        //do we need this poll here?\n        while(rd_kafka_outq_len(r) > 0)\n            rd_kafka_poll(r, 5);\n\n        //let's be sure to pass along the correct values here...\n        *partitions = values;\n        i = meta->topics->partition_cnt;\n    }\n    if (meta)\n        rd_kafka_metadata_destroy(meta);\n    rd_kafka_topic_destroy(rkt);\n    return i;\n}\n\nvoid kafka_consume_all(rd_kafka_t *rk, zval *return_value, const char *topic, const char *offset, int item_count)\n{\n    char errstr[512];\n    rd_kafka_topic_t *rkt;\n    rd_kafka_topic_conf_t *conf;\n    const struct rd_kafka_metadata *meta = NULL;\n    rd_kafka_queue_t *rkqu = NULL;\n    int current, p, i = 0;\n    int32_t partition = 0;\n    int64_t start;\n    struct consume_cb_params cb_params = {item_count, return_value, NULL, 0, 0, 0};\n    //check for NULL pointers, all arguments are required!\n    if (rk == NULL || return_value == NULL || topic == NULL || 
offset == NULL || strlen(offset) == 0)\n        return;\n\n    if (!strcmp(offset, \"end\"))\n        start = RD_KAFKA_OFFSET_END;\n    else if (!strcmp(offset, \"beginning\"))\n        start = RD_KAFKA_OFFSET_BEGINNING;\n    else if (!strcmp(offset, \"stored\"))\n        start = RD_KAFKA_OFFSET_STORED;\n    else\n        start = strtoll(offset, NULL, 10);\n\n    /* Topic configuration */\n    conf = rd_kafka_topic_conf_new();\n\n    /* Disable autocommit, queue_consume sets offsets automatically */\n    if (rd_kafka_topic_conf_set(conf, \"auto.commit.enable\", \"false\", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(\n                LOG_WARNING,\n                \"failed to turn autocommit off consuming %d messages (start offset %\"PRId64\") from topic %s: %s\",\n                item_count,\n                start,\n                topic,\n                errstr\n            );\n        }\n        cb_params.auto_commit = 1;\n    }\n    /* Create topic */\n    rkt = rd_kafka_topic_new(rk, topic, conf);\n    if (!rkt)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_INFO, \"phpkafka - Failed to read %s from %\"PRId64\" (%s)\", topic, start, offset);\n        }\n        return;\n    }\n    rkqu = rd_kafka_queue_new(rk);\n    if (RD_KAFKA_RESP_ERR_NO_ERROR == rd_kafka_metadata(rk, 0, rkt, &meta, 5))\n    {\n        p = meta->topics->partition_cnt;\n        cb_params.partition_ends = calloc(sizeof *cb_params.partition_ends, p);\n        if (cb_params.partition_ends == NULL)\n        {\n            if (log_level)\n            {\n                openlog(\"phpkafka\", 0, LOG_USER);\n                syslog(LOG_INFO, \"phpkafka - Failed to read %s from %\"PRId64\" (%s)\", topic, start, offset);\n            }\n            rd_kafka_metadata_destroy(meta);\n            meta = NULL;\n            
rd_kafka_queue_destroy(rkqu);\n            rd_kafka_topic_destroy(rkt);\n            return;\n        }\n        cb_params.eop = p;\n        for (i=0;i<p;++i)\n        {\n            partition = meta->topics[0].partitions[i].id;\n            if (rd_kafka_consume_start_queue(rkt, partition, start, rkqu))\n            {\n                if (log_level)\n                {\n                    openlog(\"phpkafka\", 0, LOG_USER);\n                    syslog(LOG_ERR,\n                        \"Failed to start consuming topic %s [%\"PRId32\"]: %s\",\n                        topic, partition, offset\n                    );\n                }\n                continue;\n            }\n        }\n        while(cb_params.read_count && cb_params.eop)\n            rd_kafka_consume_callback_queue(rkqu, 200, queue_consume, &cb_params);\n        free(cb_params.partition_ends);\n        cb_params.partition_ends = NULL;\n        for (i=0;i<p;++i)\n        {\n            partition = meta->topics[0].partitions[i].id;\n            rd_kafka_consume_stop(rkt, partition);\n        }\n        rd_kafka_metadata_destroy(meta);\n        meta = NULL;\n        rd_kafka_queue_destroy(rkqu);\n        while(rd_kafka_outq_len(rk) > 0)\n            rd_kafka_poll(rk, 50);\n        rd_kafka_topic_destroy(rkt);\n    }\n    if (meta)\n        rd_kafka_metadata_destroy(meta);\n}\n\nint kafka_consume(rd_kafka_t *r, zval* return_value, char* topic, char* offset, int item_count, int partition)\n{\n    int64_t start_offset = 0;\n    int read_counter = 0,\n        run = 1;\n    //nothing to consume?\n    if (item_count == 0)\n        return 0;\n    if (strlen(offset) != 0)\n    {\n        if (!strcmp(offset, \"end\"))\n            start_offset = RD_KAFKA_OFFSET_END;\n        else if (!strcmp(offset, \"beginning\"))\n            start_offset = RD_KAFKA_OFFSET_BEGINNING;\n        else if (!strcmp(offset, \"stored\"))\n            start_offset = RD_KAFKA_OFFSET_STORED;\n        else\n        {\n            
start_offset = strtoll(offset, NULL, 10);\n            if (start_offset < 1)\n                return -1;\n        }\n\n    }\n    rd_kafka_topic_t *rkt;\n\n    if (r == NULL)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_ERR, \"phpkafka - no connection to consume from topic: %s\", topic);\n        }\n        return -2;\n    }\n\n    rd_kafka_topic_conf_t *topic_conf;\n\n    /* Topic configuration */\n    topic_conf = rd_kafka_topic_conf_new();\n\n    /* Create topic */\n    rkt = rd_kafka_topic_new(r, topic, topic_conf);\n    if (rkt == NULL)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(\n                LOG_ERR,\n               \"Failed to consume from topic %s: %s\",\n                topic,\n                rd_kafka_err2str(\n                    rd_kafka_errno2err(errno)\n                )\n            );\n        }\n        return -3;\n    }\n    if (log_level)\n    {\n        openlog(\"phpkafka\", 0, LOG_USER);\n        syslog(LOG_INFO, \"phpkafka - start_offset: %\"PRId64\" and offset passed: %s\", start_offset, offset);\n\n    }\n    /* Start consuming */\n    if (rd_kafka_consume_start(rkt, partition, start_offset) == -1)\n    {\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_INFO, \"phpkafka - %% Failed to start consuming: %s\",\n                rd_kafka_err2str(rd_kafka_errno2err(errno)));\n        }\n        return -4;\n    }\n\n    /**\n     * Keep reading until run == 0, or read_counter == item_count\n     */\n    for (read_counter=0;read_counter!=item_count;++read_counter)\n    {\n        if (run == 0)\n            break;\n        if (log_level)\n        {\n            openlog(\"phpkafka\", 0, LOG_USER);\n            syslog(LOG_INFO, \"Consuming, count at %d (of %d - run: %d)\",\n                read_counter,\n                item_count,\n                run\n  
          );\n        }\n        rd_kafka_message_t *rkmessage = NULL,\n            *rkmessage_return = NULL;\n\n        /* Consume single message.\n         * See rdkafka_performance.c for high speed\n         * consuming of messages. */\n        rkmessage = rd_kafka_consume(rkt, partition, 1000);\n        //timeout ONLY if error didn't cause run to be 0\n        if (!rkmessage)\n        {\n            //break on timeout, makes second call redundant\n            if (errno == ETIMEDOUT)\n            {\n                if (log_level)\n                {\n                    openlog(\"phpkafka\", 0, LOG_USER);\n                    syslog(LOG_INFO, \"Consumer timed out, count at %d (of %d) stop consuming after %d messages\",\n                        read_counter,\n                        item_count,\n                        read_counter +1\n                    );\n                }\n                break;\n            }\n            continue;\n        }\n\n        rkmessage_return = msg_consume(rkmessage, &run);\n        if (rkmessage_return != NULL)\n        {\n            if ((int) rkmessage_return->len > 0)\n            {\n                add_index_stringl(\n                    return_value,\n                    (int) rkmessage_return->offset,\n                    (char *) rkmessage_return->payload,\n                    (int) rkmessage_return->len,\n                    1\n                );\n            }\n            else\n            {\n                add_index_string(return_value, (int) rkmessage_return->offset, \"\", 1);\n            }\n        }\n        /* Return message to rdkafka */\n        rd_kafka_message_destroy(rkmessage);\n    }\n\n    /* Stop consuming */\n    rd_kafka_consume_stop(rkt, partition);\n    rd_kafka_topic_destroy(rkt);\n    return 0;\n}\n"
  },
  {
    "path": "kafka.h",
    "content": "/**\n *  Copyright 2015 Elias Van Ootegem\n *\n *  Licensed under the Apache License, Version 2.0 (the \"License\");\n *  you may not use this file except in compliance with the License.\n *  You may obtain a copy of the License at\n *\n *  http://www.apache.org/licenses/LICENSE-2.0\n *\n *  Unless required by applicable law or agreed to in writing, software\n *  distributed under the License is distributed on an \"AS IS\" BASIS,\n *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n *  See the License for the specific language governing permissions and\n *  limitations under the License.\n *\n * Special thanks to Patrick Reilly and Aleksandar Babic for their work\n * On which this extension was actually built.\n */\n\n#ifndef __KAFKA_H__\n#define __KAFKA_H__\n#include \"librdkafka/rdkafka.h\"\n\ntypedef struct connection_params_s {\n    rd_kafka_type_t type;\n    int log_level;\n    int reporting;\n    char *compression;\n    union {\n        char *retry_count;\n        char *queue_buffer;\n    };\n    char *retry_interval;\n} kafka_connection_params;\n\nvoid kafka_setup(char *brokers);\nvoid kafka_set_log_level(int ll);\nvoid kafka_set_partition(int partition);\nint kafka_produce(rd_kafka_t *r, char* topic, char* msg, int msg_len, int report, long timeout);\nint kafka_produce_report(rd_kafka_t *r, const char *topic, char *msg, int msg_len, long timeout);\nint kafka_produce_batch(rd_kafka_t *r, char *topic, char **msg, int *msg_len, int msg_cnt, int report, long timeout);\nrd_kafka_t *kafka_set_connection(rd_kafka_type_t type, const char *b, int report_level, const char *compression);\nrd_kafka_t *kafka_get_connection(kafka_connection_params params, const char *brokers);\nint kafka_consume(rd_kafka_t *r, zval* return_value, char* topic, char* offset, int item_count, int partition);\nvoid kafka_get_partitions(rd_kafka_t *r, zval *return_value, char *topic);\nint kafka_partition_offsets(rd_kafka_t *r, long **partitions, const 
char *topic);\nvoid kafka_get_topics(rd_kafka_t *r,zval *return_value);\nvoid kafka_consume_all(rd_kafka_t *rk, zval *return_value, const char *topic, const char *offset, int item_count);\nvoid kafka_destroy(rd_kafka_t *r, int timeout);\n\n#endif\n"
  },
  {
    "path": "php_kafka.c",
    "content": "/**\n *  Copyright 2015 Elias Van Ootegem.\n *\n *  Licensed under the Apache License, Version 2.0 (the \"License\");\n *  you may not use this file except in compliance with the License.\n *  You may obtain a copy of the License at\n *\n *  http://www.apache.org/licenses/LICENSE-2.0\n *\n *  Unless required by applicable law or agreed to in writing, software\n *  distributed under the License is distributed on an \"AS IS\" BASIS,\n *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n *  See the License for the specific language governing permissions and\n *  limitations under the License.\n *\n * Special thanks to Patrick Reilly and Aleksandar Babic for their work\n * On which this extension was actually built.\n */\n\n#ifdef HAVE_CONFIG_H\n# include \"config.h\"\n#endif\n\n#include <php.h>\n#include <php_kafka.h>\n#include \"kafka.h\"\n#include \"zend_exceptions.h\"\n#include \"zend_hash.h\"\n#include <zlib.h>\n#include <ctype.h>\n\n#ifdef COMPILE_DL_KAFKA\nZEND_GET_MODULE(kafka)\n#endif\n#define REGISTER_KAFKA_CLASS_CONST_STRING(ce, name, value) \\\n    zend_declare_class_constant_stringl(ce, name, sizeof(name)-1, value, sizeof(value)-1)\n#define REGISTER_KAFKA_CLASS_CONST_LONG(ce, name, value) \\\n    zend_declare_class_constant_long(ce, name, sizeof(name)-1, value)\n#define REGISTER_KAFKA_CLASS_CONST(ce, c_name, type) \\\n    REGISTER_KAFKA_CLASS_CONST_ ## type(ce, #c_name, PHP_KAFKA_ ## c_name)\n#ifndef BASE_EXCEPTION\n#if (PHP_MAJOR_VERSION < 5) || ( ( PHP_MAJOR_VERSION == 5 ) && (PHP_MINOR_VERSION < 2) )\n#define BASE_EXCEPTION zend_exception_get_default()\n#else\n#define BASE_EXCEPTION zend_exception_get_default(TSRMLS_C)\n#endif\n#endif\n\n#define GET_KAFKA_CONNECTION(varname, thisObj) \\\n    kafka_connection *varname = (kafka_connection *) zend_object_store_get_object( \\\n        thisObj TSRMLS_CC \\\n    )\n\n/* {{{ arginfo */\nZEND_BEGIN_ARG_INFO_EX(arginf_kafka__constr, 0, 0, 1)\n    ZEND_ARG_INFO(0, 
brokers)\n    ZEND_ARG_INFO(0, options)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO(arginf_kafka_set_options, 0)\n    ZEND_ARG_INFO(0, options)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO_EX(arginf_kafka_set_partition, 0, 0, 1)\n    ZEND_ARG_INFO(0, partition)\n    ZEND_ARG_INFO(0, mode)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO(arginf_kafka_set_compression, 0)\n    ZEND_ARG_INFO(0, compression)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO(arginf_kafka_set_log_level, 0)\n    ZEND_ARG_INFO(0, logLevel)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO(arginf_kafka_get_partitions_for_topic, 0)\n    ZEND_ARG_INFO(0, topic)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO(arginf_kafka_set_get_partition, 0)\n    ZEND_ARG_INFO(0, mode)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO_EX(arginf_kafka_produce, 0, 0, 2)\n    ZEND_ARG_INFO(0, topic)\n    ZEND_ARG_INFO(0, message)\n    ZEND_ARG_INFO(0, timeout)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO_EX(arginf_kafka_produce_batch, 0, 0, 2)\n    ZEND_ARG_INFO(0, topic)\n    ZEND_ARG_INFO(0, messages)\n    ZEND_ARG_INFO(0, timeout)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO_EX(arginf_kafka_consume, 0, 0, 2)\n    ZEND_ARG_INFO(0, topic)\n    ZEND_ARG_INFO(0, offset)\n    ZEND_ARG_INFO(0, messageCount)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO_EX(arginf_kafka_is_conn, 0, 0, 0)\n    ZEND_ARG_INFO(0, mode)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO(arginf_kafka_void, 0)\nZEND_END_ARG_INFO()\n\nZEND_BEGIN_ARG_INFO_EX(arginf_kafka_disconnect, 0, 0, 0)\n    ZEND_ARG_INFO(0, mode)\nZEND_END_ARG_INFO()\n\n/* }}} end arginfo */\n\n/* declare the class entries */\nzend_class_entry *kafka_ce;\nzend_class_entry *kafka_exception;\n\n/* the method table */\n/* each method can have its own parameters and visibility */\nstatic zend_function_entry kafka_functions[] = {\n    PHP_ME(Kafka, __construct, arginf_kafka__constr, ZEND_ACC_CTOR | ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, __destruct, arginf_kafka_void, ZEND_ACC_DTOR | ZEND_ACC_PUBLIC)\n    
PHP_ME(Kafka, setCompression, arginf_kafka_set_compression, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, getCompression, arginf_kafka_void, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, set_partition, arginf_kafka_set_partition, ZEND_ACC_PUBLIC|ZEND_ACC_DEPRECATED)\n    PHP_ME(Kafka, setPartition, arginf_kafka_set_partition, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, getPartition, arginf_kafka_set_get_partition, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, setLogLevel, arginf_kafka_set_log_level, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, getPartitionsForTopic, arginf_kafka_get_partitions_for_topic, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, getPartitionOffsets, arginf_kafka_get_partitions_for_topic, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, setBrokers, arginf_kafka__constr, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, setOptions, arginf_kafka_set_options, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, getTopics, arginf_kafka_void, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, disconnect, arginf_kafka_disconnect, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, isConnected, arginf_kafka_is_conn, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, produce, arginf_kafka_produce, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, produceBatch, arginf_kafka_produce_batch, ZEND_ACC_PUBLIC)\n    PHP_ME(Kafka, consume, arginf_kafka_consume, ZEND_ACC_PUBLIC)\n    {NULL,NULL,NULL} /* Marks the end of function entries */\n};\n\nzend_module_entry kafka_module_entry = {\n    STANDARD_MODULE_HEADER,\n    \"kafka\",\n    kafka_functions, /* Function entries */\n    PHP_MINIT(kafka), /* Module init */\n    PHP_MSHUTDOWN(kafka), /* Module shutdown */\n    PHP_RINIT(kafka), /* Request init */\n    PHP_RSHUTDOWN(kafka), /* Request shutdown */\n    NULL, /* Module information */\n    PHP_KAFKA_VERSION, /* Replace with version number for your extension */\n    STANDARD_MODULE_PROPERTIES\n};\n\nPHP_MINIT_FUNCTION(kafka)\n{\n    zend_class_entry ce,\n            ce_ex;\n    INIT_CLASS_ENTRY(ce, \"Kafka\", kafka_functions);\n    kafka_ce = zend_register_internal_class(&ce TSRMLS_CC);\n    INIT_CLASS_ENTRY(ce_ex, 
\"KafkaException\", NULL);\n    kafka_exception = zend_register_internal_class_ex(\n        &ce_ex,\n        BASE_EXCEPTION,\n        NULL TSRMLS_CC\n    );\n    //do not allow people to extend this class, make it final\n    kafka_ce->create_object = create_kafka_connection;\n    kafka_ce->ce_flags |= ZEND_ACC_FINAL_CLASS;\n    //offset constants (consume)\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, OFFSET_BEGIN, STRING);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, OFFSET_END, STRING);\n    //compression mode constants\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, COMPRESSION_NONE, STRING);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, COMPRESSION_GZIP, STRING);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, COMPRESSION_SNAPPY, STRING);\n    //global log-mode constants TODO: refactor to ERRMODE constants\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, LOG_ON, LONG);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, LOG_OFF, LONG);\n    //connection mode constants\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, MODE_CONSUMER, LONG);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, MODE_PRODUCER, LONG);\n    //random partition constant\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, PARTITION_RANDOM, LONG);\n    //config constants:\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, RETRY_COUNT, LONG);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, RETRY_INTERVAL, LONG);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, CONFIRM_DELIVERY, LONG);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, QUEUE_BUFFER_SIZE, LONG);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, COMPRESSION_MODE, LONG);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, LOGLEVEL, LONG);\n    //confirmation value constants\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, CONFIRM_OFF, LONG);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, CONFIRM_BASIC, LONG);\n    REGISTER_KAFKA_CLASS_CONST(kafka_ce, CONFIRM_EXTENDED, LONG);\n    return SUCCESS;\n}\n\nPHP_RSHUTDOWN_FUNCTION(kafka)\n{\n    return SUCCESS;\n}\n\nPHP_RINIT_FUNCTION(kafka)\n{\n    return SUCCESS;\n}\n\nPHP_MSHUTDOWN_FUNCTION(kafka)\n{\n    
return SUCCESS;\n}\n\nzend_object_value create_kafka_connection(zend_class_entry *class_type TSRMLS_DC)\n{\n    zend_object_value retval;\n    kafka_connection *intern;\n    zval *tmp;\n\n    // allocate the struct we're going to use\n    intern = emalloc(sizeof *intern);\n    memset(intern, 0, sizeof *intern);\n    //init partitions to random partitions\n    intern->consumer_partition = PHP_KAFKA_PARTITION_RANDOM;\n    intern->producer_partition = PHP_KAFKA_PARTITION_RANDOM;\n    //set default values\n    //basic confirmation (wait for success callback)\n    intern->delivery_confirm_mode = PHP_KAFKA_CONFIRM_BASIC;\n    //logging = default on (while in development, at least)\n    intern->log_level = PHP_KAFKA_LOG_ON;\n\n    zend_object_std_init(&intern->std, class_type TSRMLS_CC);\n    //add properties table\n#if PHP_VERSION_ID < 50399\n    zend_hash_copy(\n        intern->std.properties, &class_type->default_properties,\n        (copy_ctor_func_t)zval_add_ref,\n        (void *)&tmp,\n        sizeof tmp\n    );\n#else\n    object_properties_init(&intern->std, class_type);\n#endif\n\n    // create a destructor for this struct\n    retval.handle = zend_objects_store_put(\n        intern,\n        (zend_objects_store_dtor_t) zend_objects_destroy_object,\n        free_kafka_connection,\n        NULL TSRMLS_CC\n    );\n    retval.handlers = zend_get_std_object_handlers();\n\n    return retval;\n}\n\n//clean current connections\nvoid free_kafka_connection(void *object TSRMLS_DC)\n{\n    int interval = 1;\n    kafka_connection *connection = ((kafka_connection *) object);\n    //no confirmation, wait to close connection a bit longer, for what it's worth\n    if (connection->delivery_confirm_mode == PHP_KAFKA_CONFIRM_OFF)\n        interval = 50;\n\n    if (connection->brokers)\n        efree(connection->brokers);\n    if (connection->compression)\n        efree(connection->compression);\n    if (connection->queue_buffer)\n        efree(connection->queue_buffer);\n    if 
(connection->retry_count)\n        efree(connection->retry_count);\n    if (connection->retry_interval)\n        efree(connection->retry_interval);\n    if (connection->consumer != NULL)\n        kafka_destroy(\n            connection->consumer,\n            1\n        );\n    if (connection->producer != NULL)\n        kafka_destroy(\n            connection->producer,\n            interval\n        );\n    efree(connection);\n}\n\nstatic\nint is_number(const char *str)\n{\n    while (*str != '\\0')\n    {\n        if (!isdigit(*str))\n            return 0;\n        ++str;\n    }\n    return 1;\n}\n\n//parse connection config array, and update connection struct\nstatic int parse_options_array(zval *arr, kafka_connection **conn)\n{\n    zval **entry;\n    char *assoc_key;\n    int key_len;\n    long idx;\n    HashPosition pos;\n    //make life easier, dereference struct\n    kafka_connection *connection = *conn;\n    zend_hash_internal_pointer_reset_ex(Z_ARRVAL_P(arr), &pos);\n    while (zend_hash_get_current_data_ex(Z_ARRVAL_P(arr), (void **)&entry, &pos) == SUCCESS)\n    {\n        if (zend_hash_get_current_key_ex(Z_ARRVAL_P(arr), &assoc_key, &key_len, &idx, 0, &pos) == HASH_KEY_IS_STRING)\n        {\n            zend_throw_exception(kafka_exception, \"Invalid option key, use class constants\", 0 TSRMLS_CC);\n            return -1;\n        }\n        else\n        {\n            char tmp[128];\n            switch (idx)\n            {\n                case PHP_KAFKA_RETRY_COUNT:\n                    if (Z_TYPE_PP(entry) == IS_STRING && is_number(Z_STRVAL_PP(entry)))\n                    {\n                        if (connection->retry_count)\n                            efree(connection->retry_count);\n                        connection->retry_count = estrdup(Z_STRVAL_PP(entry));\n                    }\n                    else if (Z_TYPE_PP(entry) == IS_LONG)\n                    {\n                        if (connection->retry_count)\n                            
efree(connection->retry_count);\n                        snprintf(tmp, 128, \"%ld\", Z_LVAL_PP(entry));\n                        connection->retry_count = estrdup(tmp);\n                    }\n                    else\n                    {\n                        zend_throw_exception(kafka_exception, \"Invalid value for Kafka::RETRY_COUNT option, expected numeric value\", 0 TSRMLS_CC);\n                        return -1;\n                    }\n                    break;\n                case PHP_KAFKA_RETRY_INTERVAL:\n                    if (Z_TYPE_PP(entry) == IS_STRING && is_number(Z_STRVAL_PP(entry)))\n                    {\n                        if (connection->retry_interval)\n                            efree(connection->retry_interval);\n                        connection->retry_interval = estrdup(Z_STRVAL_PP(entry));\n                    }\n                    else if (Z_TYPE_PP(entry) == IS_LONG)\n                    {\n                        if (connection->retry_interval)\n                            efree(connection->retry_interval);\n                        snprintf(tmp, 128, \"%ld\", Z_LVAL_PP(entry));\n                        connection->retry_interval = estrdup(tmp);\n                    }\n                    else\n                    {\n                        zend_throw_exception(kafka_exception, \"Invalid value for Kafka::RETRY_INTERVAL option, expected numeric value\", 0 TSRMLS_CC);\n                        return -1;\n                    }\n                    break;\n                case PHP_KAFKA_CONFIRM_DELIVERY:\n                    if (\n                        Z_TYPE_PP(entry) != IS_LONG\n                        ||\n                        (\n                            Z_LVAL_PP(entry) != PHP_KAFKA_CONFIRM_OFF\n                            &&\n                            Z_LVAL_PP(entry) != PHP_KAFKA_CONFIRM_BASIC\n                            &&\n                            Z_LVAL_PP(entry) != PHP_KAFKA_CONFIRM_EXTENDED\n             
           )\n                    )\n                    {\n                        zend_throw_exception(kafka_exception, \"Invalid value for Kafka::CONFIRM_DELIVERY, use Kafka::CONFIRM_* constants\", 0 TSRMLS_CC);\n                        return -1;\n                    }\n                    connection->delivery_confirm_mode = Z_LVAL_PP(entry);\n                    break;\n                case PHP_KAFKA_QUEUE_BUFFER_SIZE:\n                    if (Z_TYPE_PP(entry) == IS_STRING && is_number(Z_STRVAL_PP(entry)))\n                    {\n                        if (connection->queue_buffer)\n                            efree(connection->queue_buffer);\n                        connection->queue_buffer = estrdup(Z_STRVAL_PP(entry));\n                    }\n                    else if (Z_TYPE_PP(entry) == IS_LONG)\n                    {\n                        if (connection->queue_buffer)\n                            efree(connection->queue_buffer);\n                        snprintf(tmp, 128, \"%ld\", Z_LVAL_PP(entry));\n                        connection->queue_buffer = estrdup(tmp);\n                    }\n                    else\n                    {\n                        zend_throw_exception(kafka_exception, \"Invalid value for Kafka::QUEUE_BUFFER_SIZE, expected numeric value\", 0 TSRMLS_CC);\n                        return -1;\n                    }\n                    break;\n                case PHP_KAFKA_COMPRESSION_MODE:\n                    if (Z_TYPE_PP(entry) != IS_STRING)\n                    {\n                        zend_throw_exception(kafka_exception, \"Invalid type for Kafka::COMPRESSION_MODE option, use Kafka::COMPRESSION_* constants\", 0 TSRMLS_CC);\n                        return -1;\n                    }\n                    //throw if the value matches none of the known compression modes\n                    if (\n                        strcmp(Z_STRVAL_PP(entry), PHP_KAFKA_COMPRESSION_GZIP)\n                        &&\n                        strcmp(Z_STRVAL_PP(entry), PHP_KAFKA_COMPRESSION_NONE)\n                        
&&\n                        strcmp(Z_STRVAL_PP(entry), PHP_KAFKA_COMPRESSION_SNAPPY)\n                    ) {\n                        zend_throw_exception(kafka_exception, \"Invalid value for Kafka::COMPRESSION_MODE, use Kafka::COMPRESSION_* constants\", 0 TSRMLS_CC);\n                        return -1;\n                    }\n                    if (connection->compression)\n                        efree(connection->compression);\n                    connection->compression = estrdup(Z_STRVAL_PP(entry));\n                    break;\n                case PHP_KAFKA_LOGLEVEL:\n                    if (Z_TYPE_PP(entry) != IS_LONG ||\n                        (Z_LVAL_PP(entry) != PHP_KAFKA_LOG_OFF && Z_LVAL_PP(entry) != PHP_KAFKA_LOG_ON))\n                    {\n                        zend_throw_exception(kafka_exception, \"Invalid value for Kafka::LOGLEVEL option, use Kafka::LOG_* constants\", 0 TSRMLS_CC);\n                        return -1;\n                    }\n                    connection->log_level = Z_LVAL_PP(entry);\n                    break;\n            }\n        }\n        zend_hash_move_forward_ex(Z_ARRVAL_P(arr), &pos);\n    }\n    return 0;\n}\n\n/** {{{ proto void Kafka::__construct( string $brokers [, array $options = null]);\n    Constructor, expects a comma-separated list of brokers to connect to\n*/\nPHP_METHOD(Kafka, __construct)\n{\n    zval *arr = NULL;\n    char *brokers = NULL;\n    int brokers_len = 0;\n    kafka_connection *connection = (kafka_connection *) zend_object_store_get_object(\n        getThis() TSRMLS_CC\n    );\n\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"s|a\",\n            &brokers, &brokers_len, &arr) == FAILURE) {\n        return;\n    }\n    if (arr)\n    {\n        if (parse_options_array(arr, &connection))\n            return;//we've thrown an exception\n    }\n    connection->brokers = estrdup(brokers);\n    kafka_set_log_level(connection->log_level);\n    kafka_connect(brokers);\n}\n/* }}} end 
Kafka::__construct */\n\n/* {{{ proto bool Kafka::isConnected( [ int $mode ] )\n    returns true if kafka connection is active, false if not\n    Mode defaults to current active mode\n*/\nPHP_METHOD(Kafka, isConnected)\n{\n    zval *mode = NULL,\n        *obj = getThis();\n    long tmp_val = -1;\n    rd_kafka_type_t type;\n    GET_KAFKA_CONNECTION(k, obj);\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"|z\", &mode) == FAILURE)\n        return;\n    if (mode)\n    {\n        if (Z_TYPE_P(mode) == IS_LONG)\n            tmp_val = Z_LVAL_P(mode);\n        if (tmp_val != PHP_KAFKA_MODE_CONSUMER && tmp_val != PHP_KAFKA_MODE_PRODUCER)\n        {\n            zend_throw_exception(\n                kafka_exception,\n                \"invalid argument passed to Kafka::isConnected, use Kafka::MODE_* constants\",\n                0 TSRMLS_CC\n            );\n            return;\n        }\n        if (tmp_val == PHP_KAFKA_MODE_CONSUMER)\n            type = RD_KAFKA_CONSUMER;\n        else\n            type = RD_KAFKA_PRODUCER;\n    }\n    else\n        type = k->rk_type;\n    if (type == RD_KAFKA_CONSUMER)\n    {\n        if (k->consumer != NULL)\n        {\n            RETURN_TRUE;\n        }\n        RETURN_FALSE;\n    }\n    if (k->producer != NULL)\n    {\n        RETURN_TRUE;\n    }\n    RETURN_FALSE;\n}\n/* }}} end bool Kafka::isConnected */\n\n/* {{{ proto void Kafka::__destruct( void )\n    Destructor, disconnects kafka\n*/\nPHP_METHOD(Kafka, __destruct)\n{\n    int interval = 1;\n    kafka_connection *connection = (kafka_connection *) zend_object_store_get_object(\n        getThis() TSRMLS_CC\n    );\n    if (connection->delivery_confirm_mode == PHP_KAFKA_CONFIRM_OFF)\n        interval = 25;\n    if (connection->brokers)\n        efree(connection->brokers);\n    if (connection->queue_buffer)\n        efree(connection->queue_buffer);\n    if (connection->retry_count)\n        efree(connection->retry_count);\n    if (connection->retry_interval)\n        
efree(connection->retry_interval);\n    if (connection->compression)\n        efree(connection->compression);\n    if (connection->consumer != NULL)\n        kafka_destroy(\n            connection->consumer,\n            1\n        );\n    if (connection->producer != NULL)\n        kafka_destroy(\n            connection->producer,\n            interval\n        );\n    connection->producer    = NULL;\n    connection->brokers     = NULL;\n    connection->compression = NULL;\n    connection->consumer    = NULL;\n    connection->queue_buffer = connection->retry_count = connection->retry_interval = NULL;\n    connection->delivery_confirm_mode = 0;\n    connection->consumer_partition = connection->producer_partition = PHP_KAFKA_PARTITION_RANDOM;\n}\n/* }}} end Kafka::__destruct */\n\n/* {{{ proto Kafka Kafka::set_partition( int $partition [, int $mode ] );\n    Set partition (used by consume method)\n    This method is deprecated, in favour of the more PSR-compliant\n    Kafka::setPartition\n*/\nPHP_METHOD(Kafka, set_partition)\n{\n    zval *partition,\n        *mode = NULL,\n        *obj = getThis();\n    long p_value;\n    GET_KAFKA_CONNECTION(connection, obj);\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"z|z\", &partition, &mode) == FAILURE)\n        return;\n    if (Z_TYPE_P(partition) != IS_LONG || (mode && Z_TYPE_P(mode) != IS_LONG)) {\n        zend_throw_exception(kafka_exception, \"Partition and/or mode is expected to be an int\", 0 TSRMLS_CC);\n        return;\n    }\n    if (mode)\n    {\n        if (Z_LVAL_P(mode) != PHP_KAFKA_MODE_CONSUMER && Z_LVAL_P(mode) != PHP_KAFKA_MODE_PRODUCER)\n        {\n            zend_throw_exception(\n                kafka_exception,\n                \"invalid mode argument passed to Kafka::setPartition, use Kafka::MODE_* constants\",\n                0 TSRMLS_CC\n            );\n            return;\n        }\n    }\n    p_value = Z_LVAL_P(partition);\n    if (p_value < -1)\n    {\n        zend_throw_exception(\n 
           kafka_exception,\n            \"invalid partition passed to Kafka::setPartition, partition value should be >= 0 or Kafka::PARTITION_RANDOM\",\n            0 TSRMLS_CC\n        );\n        return;\n    }\n    p_value = p_value == -1 ? PHP_KAFKA_PARTITION_RANDOM : p_value;\n    if (!mode)\n    {\n        connection->consumer_partition = p_value;\n        connection->producer_partition = p_value;\n        kafka_set_partition(p_value);\n    }\n    else\n    {\n        if (Z_LVAL_P(mode) != PHP_KAFKA_MODE_CONSUMER)\n            connection->producer_partition = p_value;\n        else\n            connection->consumer_partition = p_value;\n    }\n    //return $this\n    RETURN_ZVAL(getThis(), 1, 0);\n}\n/* }}} end Kafka::set_partition */\n\n/* {{{ proto Kafka Kafka::setLogLevel( mixed $logLevel )\n    toggle syslogging on or off, use Kafka::LOG_* constants\n*/\nPHP_METHOD(Kafka, setLogLevel)\n{\n    zval *log_level,\n        *obj = getThis();\n    GET_KAFKA_CONNECTION(connection, obj);\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"z\", &log_level) == FAILURE)\n    {\n        return;\n    }\n    if (Z_TYPE_P(log_level) != IS_LONG) {\n        zend_throw_exception(kafka_exception, \"Kafka::setLogLevel expects argument to be an int\", 0 TSRMLS_CC);\n        return;\n    }\n    if (\n        Z_LVAL_P(log_level) != PHP_KAFKA_LOG_ON\n        &&\n        Z_LVAL_P(log_level) != PHP_KAFKA_LOG_OFF\n    ) {\n        zend_throw_exception(kafka_exception, \"Invalid argument, use Kafka::LOG_* constants\", 0 TSRMLS_CC);\n        return;\n    }\n    connection->log_level = Z_LVAL_P(log_level);\n    kafka_set_log_level(connection->log_level);\n    RETURN_ZVAL(getThis(), 1, 0);\n}\n/* }}} end Kafka::setLogLevel */\n\n/* {{{ proto Kafka Kafka::setCompression( string $compression )\n * Enable compression for produced messages\n */\nPHP_METHOD(Kafka, setCompression)\n{\n    zval *obj = getThis();\n    char *arg;\n    int arg_len;\n    GET_KAFKA_CONNECTION(connection, 
obj);\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"s\", &arg, &arg_len) == FAILURE)\n    {\n        return;\n    }\n    //if valid compression constant was used...\n    if (\n        !strcmp(arg, PHP_KAFKA_COMPRESSION_GZIP)\n            ||\n        !strcmp(arg, PHP_KAFKA_COMPRESSION_NONE)\n            ||\n        !strcmp(arg, PHP_KAFKA_COMPRESSION_SNAPPY)\n    )\n    {\n        if (!connection->compression || strcmp(connection->compression, arg))\n        {\n            //close connections, if any, currently only use compression for producers\n            if (connection->producer)\n                kafka_destroy(connection->producer, 1);\n            connection->producer = NULL;\n            connection->producer_partition = PHP_KAFKA_PARTITION_RANDOM;\n            //free the previous compression setting before replacing it\n            if (connection->compression)\n                efree(connection->compression);\n            connection->compression = estrdup(arg);\n        }\n    }\n    else\n    {\n        zend_throw_exception(kafka_exception, \"Invalid argument, use Kafka::COMPRESSION_* constants\", 0 TSRMLS_CC);\n    }\n    RETURN_ZVAL(obj, 1, 0);\n}\n/* }}} end proto Kafka::setCompression */\n\n/* {{{ proto string Kafka::getCompression( void )\n * Get type of compression that is currently used\n */\nPHP_METHOD(Kafka, getCompression)\n{\n    zval *obj = getThis();\n    GET_KAFKA_CONNECTION(connection, obj);\n    if (!connection->compression)\n        RETURN_STRING(PHP_KAFKA_COMPRESSION_NONE, 1);\n    RETURN_STRING(connection->compression, 1);\n}\n/* }}} end proto Kafka::getCompression */\n\n/* {{{ proto Kafka Kafka::setPartition( int $partition [, int $mode ] );\n    Set partition to use for Kafka::consume calls\n*/\nPHP_METHOD(Kafka, setPartition)\n{\n    zval *partition,\n        *mode = NULL,\n        *obj = getThis();\n    long p_value;\n    GET_KAFKA_CONNECTION(connection, obj);\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"z|z\", &partition, &mode) == FAILURE)\n        return;\n    if (Z_TYPE_P(partition) != IS_LONG || (mode && Z_TYPE_P(mode) != IS_LONG)) {\n        
zend_throw_exception(kafka_exception, \"Partition and/or mode is expected to be an int\", 0 TSRMLS_CC);\n        return;\n    }\n    if (mode)\n    {\n        if (Z_LVAL_P(mode) != PHP_KAFKA_MODE_CONSUMER && Z_LVAL_P(mode) != PHP_KAFKA_MODE_PRODUCER)\n        {\n            zend_throw_exception(\n                kafka_exception,\n                \"invalid mode argument passed to Kafka::setPartition, use Kafka::MODE_* constants\",\n                0 TSRMLS_CC\n            );\n            return;\n        }\n    }\n    p_value = Z_LVAL_P(partition);\n    if (p_value < -1)\n    {\n        zend_throw_exception(\n            kafka_exception,\n            \"invalid partition passed to Kafka::setPartition, partition value should be >= 0 or Kafka::PARTITION_RANDOM\",\n            0 TSRMLS_CC\n        );\n        return;\n    }\n    p_value = p_value == -1 ? PHP_KAFKA_PARTITION_RANDOM : p_value;\n    if (!mode)\n    {\n        connection->consumer_partition = p_value;\n        connection->producer_partition = p_value;\n        kafka_set_partition(p_value);\n    }\n    else\n    {\n        if (Z_LVAL_P(mode) != PHP_KAFKA_MODE_CONSUMER)\n            connection->producer_partition = p_value;\n        else\n            connection->consumer_partition = p_value;\n    }\n    //return $this\n    RETURN_ZVAL(getThis(), 1, 0);\n}\n/* }}} end Kafka::setPartition */\n\n/* {{{ proto int Kafka::getPartition( int $mode )\n    Get partition for connection (consumer/producer)\n*/\nPHP_METHOD(Kafka, getPartition)\n{\n    zval *obj = getThis(),\n        *arg = NULL;\n    GET_KAFKA_CONNECTION(connection, obj);\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"z\", &arg) == FAILURE)\n        return;\n    if (Z_TYPE_P(arg) != IS_LONG || (Z_LVAL_P(arg) != PHP_KAFKA_MODE_CONSUMER && Z_LVAL_P(arg) != PHP_KAFKA_MODE_PRODUCER))\n    {\n        zend_throw_exception(kafka_exception, \"Invalid argument passed to Kafka::getPartition, use Kafka::MODE_* constants\", 0 TSRMLS_CC);\n        return;\n 
   }\n    if (Z_LVAL_P(arg) == PHP_KAFKA_MODE_CONSUMER)\n        RETURN_LONG(connection->consumer_partition);\n    RETURN_LONG(connection->producer_partition);\n}\n/* }}} end proto Kafka::getPartition */\n\n/* {{{ proto array Kafka::getTopics( void )\n    Get all existing topics\n*/\nPHP_METHOD(Kafka, getTopics)\n{\n    zval *obj = getThis();\n    GET_KAFKA_CONNECTION(connection, obj);\n    if (connection->brokers == NULL && connection->consumer == NULL)\n    {\n        zend_throw_exception(kafka_exception, \"No brokers to get topics from\", 0 TSRMLS_CC);\n        return;\n    }\n    if (connection->consumer == NULL)\n    {\n        kafka_connection_params config;\n        config.type = RD_KAFKA_CONSUMER;\n        config.log_level = connection->log_level;\n        config.queue_buffer = connection->queue_buffer;\n        config.compression = NULL;\n        connection->consumer = kafka_get_connection(config, connection->brokers);\n        if (connection->consumer == NULL)\n        {\n            zend_throw_exception(kafka_exception, \"Failed to connect to kafka\", 0 TSRMLS_CC);\n            return;\n        }\n        connection->rk_type = RD_KAFKA_CONSUMER;\n    }\n    array_init(return_value);\n    kafka_get_topics(connection->consumer, return_value);\n}\n/* }}} end Kafka::getTopics */\n\n/* {{{ proto Kafka Kafka::setBrokers ( string $brokers [, array $options = null ] )\n    Set brokers on-the-fly\n*/\nPHP_METHOD(Kafka, setBrokers)\n{\n    zval *arr = NULL,\n        *obj = getThis();\n    char *brokers;\n    int brokers_len;\n    GET_KAFKA_CONNECTION(connection, obj);\n\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"s|a\",\n            &brokers, &brokers_len, &arr) == FAILURE) {\n        return;\n    }\n    //if array is passed, parse it, return if an exception was thrown...\n    if (arr && parse_options_array(arr, &connection))\n        return;\n    if (connection->consumer)\n        kafka_destroy(connection->consumer, 1);\n    if 
(connection->producer)\n        kafka_destroy(connection->producer, 1);\n    //free previous brokers value, if any\n    if (connection->brokers)\n        efree(connection->brokers);\n    if (connection->compression)\n        efree(connection->compression);\n    //set brokers\n    connection->brokers = estrdup(\n        brokers\n    );\n    //reinit to NULL\n    connection->producer = connection->consumer = NULL;\n    connection->compression = NULL;\n    //restore partitions back to random...\n    connection->consumer_partition = connection->producer_partition = PHP_KAFKA_PARTITION_RANDOM;\n    //set brokers member to correct value\n    //we can ditch this call, I think...\n    kafka_connect(\n        connection->brokers\n    );\n    //return\n    RETURN_ZVAL(obj, 1, 0);\n}\n/* }}} end Kafka::setBrokers */\n\n/* proto Kafka Kafka::setOptions( array $options )\n * Set connection options on the \"fly\"\n */\nPHP_METHOD(Kafka, setOptions)\n{\n    zval *arr = NULL,\n        *obj = getThis();\n    GET_KAFKA_CONNECTION(connection, obj);\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"a\", &arr) == FAILURE)\n    {\n        return;\n    }\n    if (parse_options_array(arr, &connection))\n        return;\n    RETURN_ZVAL(obj, 1, 0);\n\n}\n/* end proto Kafka::setOptions */\n\n/* {{{ proto array Kafka::getPartitionsForTopic( string $topic )\n    Get an array of available partitions for a given topic\n*/\nPHP_METHOD(Kafka, getPartitionsForTopic)\n{\n    zval *obj = getThis();\n    char *topic = NULL;\n    int topic_len = 0;\n    GET_KAFKA_CONNECTION(connection, obj);\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"s\",\n            &topic, &topic_len) == FAILURE) {\n        return;\n    }\n    if (!connection->consumer)\n    {\n        kafka_connection_params config;\n        config.type = RD_KAFKA_CONSUMER;\n        config.log_level = connection->log_level;\n        config.queue_buffer = connection->queue_buffer;\n        config.compression = NULL;\n     
   connection->consumer = kafka_get_connection(config, connection->brokers);\n        if (connection->consumer == NULL)\n        {\n            zend_throw_exception(kafka_exception, \"Failed to connect to kafka\", 0 TSRMLS_CC);\n            return;\n        }\n        connection->rk_type = RD_KAFKA_CONSUMER;\n    }\n    array_init(return_value);\n    kafka_get_partitions(connection->consumer, return_value, topic);\n}\n/* }}} end Kafka::getPartitionsForTopic */\n\n/* {{{ proto Kafka::getPartitionOffsets( string $topic )\n * Get an array containing all partitions and their respective first offsets\n */\nPHP_METHOD(Kafka, getPartitionOffsets)\n{\n    char *topic = NULL;\n    int topic_len = 0,\n        kafka_r;\n    long *offsets = NULL,\n        i;\n    kafka_connection *connection = (kafka_connection *) zend_object_store_get_object(\n        getThis() TSRMLS_CC\n    );\n\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"s\",\n            &topic, &topic_len) == FAILURE) {\n        return;\n    }\n    if (!connection->consumer)\n    {\n        kafka_connection_params config;\n        config.type = RD_KAFKA_CONSUMER;\n        config.log_level = connection->log_level;\n        config.queue_buffer = connection->queue_buffer;\n        config.compression = NULL;\n        connection->consumer = kafka_get_connection(config, connection->brokers);\n        if (connection->consumer == NULL)\n        {\n            zend_throw_exception(kafka_exception, \"Failed to connect to kafka\", 0 TSRMLS_CC);\n            return;\n        }\n        connection->rk_type = RD_KAFKA_CONSUMER;\n    }\n    kafka_r = kafka_partition_offsets(\n        connection->consumer,\n        &offsets,\n        topic\n    );\n    if (kafka_r < 1) {\n        char *msg = NULL;\n        if (kafka_r)\n            msg = kafka_r == -2 ? 
\"No kafka connection\" : \"Allocation error\";\n        else\n            msg = \"Failed to get metadata for topic\";\n        zend_throw_exception(\n            kafka_exception,\n            msg,\n            0 TSRMLS_CC\n        );\n        return;\n    }\n    array_init(return_value);\n    for (i = 0; i < kafka_r; ++i) {\n        add_index_long(return_value, i, offsets[i]);\n    }\n    free(offsets);//kafka allocates this bit, free outside of zend\n} /* }}} end Kafka::getPartitionOffsets */\n\n/* {{{ proto bool Kafka::disconnect( [int $mode] );\n   If no $mode argument is passed, all connections will be closed\n    Disconnects kafka, returns false if disconnect failed\n*/\nPHP_METHOD(Kafka, disconnect)\n{\n    zval *obj = getThis(),\n        *mode = NULL;\n    long type = -1;\n    GET_KAFKA_CONNECTION(connection, obj);\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"|z\",\n            &mode) == FAILURE) {\n        return;\n    }\n    if (mode)\n    {//mode was given\n        if (Z_TYPE_P(mode) == IS_LONG)\n            type = Z_LVAL_P(mode);\n        if (type != PHP_KAFKA_MODE_CONSUMER && type != PHP_KAFKA_MODE_PRODUCER)\n        {\n            zend_throw_exception(\n                kafka_exception,\n                \"invalid argument passed to Kafka::disconnect, use Kafka::MODE_* constants\",\n                0 TSRMLS_CC\n            );\n            return;\n        }\n        if (type == PHP_KAFKA_MODE_CONSUMER)\n        {//disconnect consumer\n            if (connection->consumer)\n                kafka_destroy(connection->consumer, 1);\n            connection->consumer = NULL;\n        }\n        else\n        {\n            if (connection->producer)\n                kafka_destroy(connection->producer, 1);\n            connection->producer = NULL;\n        }\n        RETURN_TRUE;\n    }\n    if (connection->consumer)\n        kafka_destroy(connection->consumer, 1);\n    if (connection->producer)\n        kafka_destroy(connection->producer, 1);\n    
connection->producer = connection->consumer = NULL;\n    connection->consumer_partition = connection->producer_partition = PHP_KAFKA_PARTITION_RANDOM;\n    RETURN_TRUE;\n}\n/* }}} end Kafka::disconnect */\n\n/* {{{ proto Kafka Kafka::produce( string $topic, string $message [, int $timeout = 60000]);\n    Produce a message, returns instance\n    or throws KafkaException in case something went wrong\n*/\nPHP_METHOD(Kafka, produce)\n{\n    zval *object = getThis();\n    GET_KAFKA_CONNECTION(connection, object);\n    char *topic;\n    char *msg;\n    long reporting = connection->delivery_confirm_mode;\n    long timeout = 60000;\n    int topic_len,\n        msg_len,\n        status = 0;\n\n\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"ss|l\",\n            &topic, &topic_len,\n            &msg, &msg_len,\n            &timeout) == FAILURE) {\n        return;\n    }\n    if (!connection->producer)\n    {\n        kafka_connection_params config;\n        config.type = RD_KAFKA_PRODUCER;\n        config.log_level = connection->log_level;\n        config.reporting = connection->delivery_confirm_mode;\n        config.retry_count = connection->retry_count;\n        config.retry_interval = connection->retry_interval;\n        config.compression = connection->compression;\n        connection->producer = kafka_get_connection(config, connection->brokers);\n        if (connection->producer == NULL)\n        {\n            zend_throw_exception(kafka_exception, \"Failed to connect to kafka\", 0 TSRMLS_CC);\n            return;\n        }\n        connection->rk_type = RD_KAFKA_PRODUCER;\n    }\n    //this does nothing at this stage...\n    kafka_set_partition(\n        (int) connection->producer_partition\n    );\n    if (connection->delivery_confirm_mode == PHP_KAFKA_CONFIRM_EXTENDED)\n        status = kafka_produce_report(connection->producer, topic, msg, msg_len, timeout);\n    else\n        status = kafka_produce(connection->producer, topic, msg, msg_len, 
connection->delivery_confirm_mode, timeout);\n    switch (status)\n    {\n        case -1:\n            zend_throw_exception(kafka_exception, \"Failed to produce message\", 0 TSRMLS_CC);\n            return;\n        case -2:\n            zend_throw_exception(kafka_exception, \"Connection failure, cannot produce message\", 0 TSRMLS_CC);\n            return;\n        case -3:\n            zend_throw_exception(kafka_exception, \"Topic configuration error\", 0 TSRMLS_CC);\n            return;\n    }\n    RETURN_ZVAL(object, 1, 0);\n}\n/* }}} end Kafka::produce */\n\n/* {{{ proto Kafka Kafka::produceBatch( string $topic, array $messages [, int $timeout = 60000]);\n    Produce a batch of messages, returns instance\n    or throws exceptions in case of error\n*/\nPHP_METHOD(Kafka, produceBatch)\n{\n    zval *arr,\n         *object = getThis(),\n         **entry;\n    GET_KAFKA_CONNECTION(connection, object);\n    char *topic;\n    char *msg;\n    char *msg_batch[50];\n    int msg_batch_len[50] = {0};\n    long reporting = connection->delivery_confirm_mode;\n    long timeout = 60000;\n    int topic_len,\n        msg_len,\n        current_idx = 0,\n        status = 0;\n    HashPosition pos;\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"sa|l\",\n            &topic, &topic_len,\n            &arr,\n            &timeout) == FAILURE) {\n        return;\n    }\n    //get producer up and running\n    if (!connection->producer)\n    {\n        kafka_connection_params config;\n        config.type = RD_KAFKA_PRODUCER;\n        config.log_level = connection->log_level;\n        config.reporting = connection->delivery_confirm_mode;\n        config.retry_count = connection->retry_count;\n        config.compression = connection->compression;\n        config.retry_interval = connection->retry_interval;\n        connection->producer = kafka_get_connection(config, connection->brokers);\n        if (connection->producer == NULL)\n        {\n            
zend_throw_exception(kafka_exception, \"Failed to connect to kafka\", 0 TSRMLS_CC);\n            return;\n        }\n        connection->rk_type = RD_KAFKA_PRODUCER;\n    }\n    //this does nothing at this stage...\n    kafka_set_partition(\n        (int) connection->producer_partition\n    );\n    //iterate array of messages, start producing them\n    //todo: change individual produce calls to a more performant\n    //produce queue...\n    zend_hash_internal_pointer_reset_ex(Z_ARRVAL_P(arr), &pos);\n    while (zend_hash_get_current_data_ex(Z_ARRVAL_P(arr), (void **)&entry, &pos) == SUCCESS)\n    {\n        if (Z_TYPE_PP(entry) == IS_STRING)\n        {\n            msg_batch[current_idx] = Z_STRVAL_PP(entry);\n            msg_batch_len[current_idx] = Z_STRLEN_PP(entry);\n            ++current_idx;\n            if (current_idx == 50)\n            {\n                status = kafka_produce_batch(connection->producer, topic, msg_batch, msg_batch_len, current_idx, connection->delivery_confirm_mode, timeout);\n                if (status)\n                {\n                    if (status < 0)\n                        zend_throw_exception(kafka_exception, \"Failed to produce messages\", 0 TSRMLS_CC);\n                    else if (status > 0)\n                    {\n                        char err_msg[200];\n                        snprintf(err_msg, 200, \"Produced messages with %d errors\", status);\n                        zend_throw_exception(kafka_exception, err_msg, 0 TSRMLS_CC);\n                    }\n                    return;\n                }\n                current_idx = 0;//reset batch counter\n            }\n        }\n        zend_hash_move_forward_ex(Z_ARRVAL_P(arr), &pos);\n    }\n    if (current_idx)\n    {//we still have some messages to produce...\n        status = kafka_produce_batch(connection->producer, topic, msg_batch, msg_batch_len, current_idx, connection->delivery_confirm_mode, timeout);\n        if (status)\n        {\n            if (status 
< 0)\n                zend_throw_exception(kafka_exception, \"Failed to produce messages\", 0 TSRMLS_CC);\n            else if (status > 0)\n            {\n                char err_msg[200];\n                snprintf(err_msg, 200, \"Produced messages with %d errors\", status);\n                zend_throw_exception(kafka_exception, err_msg, 0 TSRMLS_CC);\n            }\n            return;\n        }\n    }\n    RETURN_ZVAL(object, 1, 0);\n}\n/* end proto Kafka::produceBatch */\n\n/* {{{ proto array Kafka::consume( string $topic, [ string $offset = 0 [, mixed $length = 1] ] );\n    Consumes 1 or more ($length) messages from the $offset (default 0)\n*/\nPHP_METHOD(Kafka, consume)\n{\n    zval *object = getThis();\n    GET_KAFKA_CONNECTION(connection, object);\n    char *topic;\n    int topic_len;\n    char *offset;\n    int offset_len, status = 0;\n    long count = 0;\n    zval *item_count = NULL;\n\n    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"ss|z\",\n            &topic, &topic_len,\n            &offset, &offset_len,\n            &item_count) == FAILURE) {\n        return;\n    }\n    if (item_count == NULL || Z_TYPE_P(item_count) == IS_NULL)\n    {//default\n        count = 1;\n    }\n    else\n    {\n        if (Z_TYPE_P(item_count) == IS_STRING && strcmp(Z_STRVAL_P(item_count), PHP_KAFKA_OFFSET_END) == 0) {\n            count = -1;\n        } else if (Z_TYPE_P(item_count) == IS_LONG) {\n            count = Z_LVAL_P(item_count);\n        } else {\n            zend_throw_exception(\n                kafka_exception,\n                \"Invalid messageCount value passed to Kafka::consume, should be int or OFFSET constant\",\n                0 TSRMLS_CC\n            );\n            return;\n        }\n    }\n    if (count < -1 || count == 0)\n    {\n        zend_throw_exception(\n            kafka_exception,\n            \"Invalid messageCount value passed to Kafka::consume\",\n            0 TSRMLS_CC\n        );\n        return;\n    }\n    if (!connection->consumer)\n    {\n        
kafka_connection_params config;\n        config.type = RD_KAFKA_CONSUMER;\n        config.log_level = connection->log_level;\n        config.queue_buffer = connection->queue_buffer;\n        config.compression = NULL;\n        connection->consumer = kafka_get_connection(config, connection->brokers);\n        if (connection->consumer == NULL)\n        {\n            zend_throw_exception(kafka_exception, \"Failed to connect to kafka\", 0 TSRMLS_CC);\n            return;\n        }\n        connection->rk_type = RD_KAFKA_CONSUMER;\n    }\n    array_init(return_value);\n    if (connection->consumer_partition == PHP_KAFKA_PARTITION_RANDOM)\n    {\n        kafka_consume_all(\n            connection->consumer,\n            return_value,\n            topic,\n            offset,\n            count\n        );\n    }\n    else\n    {\n        status = kafka_consume(\n            connection->consumer,\n            return_value,\n            topic,\n            offset,\n            count,\n            connection->consumer_partition\n        );\n        if (status)\n        {\n            switch (status)\n            {\n                case -1:\n                    zend_throw_exception(\n                        kafka_exception,\n                        \"Invalid offset passed, use Kafka::OFFSET_* constants, or positive integer!\",\n                        0 TSRMLS_CC\n                    );\n                    return;\n                case -2:\n                    zend_throw_exception(\n                        kafka_exception,\n                        \"No kafka connection available\",\n                        0 TSRMLS_CC\n                    );\n                    return;\n                case -3:\n                    zend_throw_exception(\n                        kafka_exception,\n                        \"Unable to access topic\",\n                        0 TSRMLS_CC\n                    );\n                    return;\n                case -4:\n                default:\n  
                  zend_throw_exception(\n                        kafka_exception,\n                        \"Consuming from topic failed\",\n                        0 TSRMLS_CC\n                    );\n                    return;\n            }\n        }\n    }\n}\n/* }}} end Kafka::consume */\n"
  },
  {
    "path": "php_kafka.h",
    "content": "/**\n *  Copyright 2015 Elias Van Ootegem.\n *\n *  Licensed under the Apache License, Version 2.0 (the \"License\");\n *  you may not use this file except in compliance with the License.\n *  You may obtain a copy of the License at\n *\n *  http://www.apache.org/licenses/LICENSE-2.0\n *\n *  Unless required by applicable law or agreed to in writing, software\n *  distributed under the License is distributed on an \"AS IS\" BASIS,\n *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n *  See the License for the specific language governing permissions and\n *  limitations under the License.\n *\n * Special thanks to Patrick Reilly and Aleksandar Babic for their work\n * On which this extension was actually built.\n */\n#ifndef PHP_KAFKA_H\n#define\tPHP_KAFKA_H 1\n\n#define PHP_KAFKA_VERSION \"0.2.0-dev\"\n#define PHP_KAFKA_EXTNAME \"kafka\"\n#define PHP_KAFKA_OFFSET_BEGIN \"beginning\"\n#define PHP_KAFKA_OFFSET_END \"end\"\n#define PHP_KAFKA_LOG_ON 1\n#define PHP_KAFKA_LOG_OFF 0\n#define PHP_KAFKA_MODE_CONSUMER 0\n#define PHP_KAFKA_MODE_PRODUCER 1\n#define PHP_KAFKA_COMPRESSION_NONE \"none\"\n#define PHP_KAFKA_COMPRESSION_GZIP \"gzip\"\n#define PHP_KAFKA_COMPRESSION_SNAPPY \"snappy\"\n//option constants...\n#define PHP_KAFKA_RETRY_COUNT 1\n#define PHP_KAFKA_RETRY_INTERVAL 2\n#define PHP_KAFKA_CONFIRM_DELIVERY 4\n#define PHP_KAFKA_QUEUE_BUFFER_SIZE 8\n#define PHP_KAFKA_COMPRESSION_MODE 16\n#define PHP_KAFKA_LOGLEVEL 32\n#define PHP_KAFKA_CONFIRM_OFF 0\n#define PHP_KAFKA_CONFIRM_BASIC 1\n#define PHP_KAFKA_CONFIRM_EXTENDED 2\nextern zend_module_entry kafka_module_entry;\n\nPHP_MSHUTDOWN_FUNCTION(kafka);\nPHP_MINIT_FUNCTION(kafka);\nPHP_RINIT_FUNCTION(kafka);\nPHP_RSHUTDOWN_FUNCTION(kafka);\n\n#ifdef ZTS\n#include <TSRM.h>\n#endif\n#include \"librdkafka/rdkafka.h\"\n\n#define PHP_KAFKA_PARTITION_RANDOM RD_KAFKA_PARTITION_UA\n\ntypedef struct _kafka_r {\n    zend_object         std;\n    rd_kafka_t          *consumer;\n    
rd_kafka_t          *producer;\n    char                *brokers;\n    char                *compression;\n    char                *retry_count;\n    char                *retry_interval;\n    int                 delivery_confirm_mode;\n    char                *queue_buffer;\n    long                consumer_partition;\n    long                producer_partition;\n    int                 log_level;\n    rd_kafka_type_t     rk_type;\n} kafka_connection;\n\n//attach kafka connection to module\nzend_object_value create_kafka_connection(zend_class_entry *class_type TSRMLS_DC);\nvoid free_kafka_connection(void *object TSRMLS_DC);\n\n/* Kafka class */\nstatic PHP_METHOD(Kafka, __construct);\nstatic PHP_METHOD(Kafka, __destruct);\nstatic PHP_METHOD(Kafka, setCompression);\nstatic PHP_METHOD(Kafka, getCompression);\nstatic PHP_METHOD(Kafka, set_partition);\nstatic PHP_METHOD(Kafka, setPartition);\nstatic PHP_METHOD(Kafka, getPartition);\nstatic PHP_METHOD(Kafka, setLogLevel);\nstatic PHP_METHOD(Kafka, getPartitionsForTopic);\nstatic PHP_METHOD(Kafka, getPartitionOffsets);\nstatic PHP_METHOD(Kafka, isConnected);\nstatic PHP_METHOD(Kafka, setBrokers);\nstatic PHP_METHOD(Kafka, setOptions);\nstatic PHP_METHOD(Kafka, getTopics);\nstatic PHP_METHOD(Kafka, disconnect);\nstatic PHP_METHOD(Kafka, produceBatch);\nstatic PHP_METHOD(Kafka, produce);\nstatic PHP_METHOD(Kafka, consume);\nPHPAPI void kafka_connect(char *brokers);\n\n#endif\n"
  },
  {
    "path": "rebuild.sh",
    "content": "#!/bin/bash\n\nexport CFLAGS=\"-Wall -Wextra -Wdeclaration-after-statement -Wmissing-field-initializers -Wshadow -Wno-unused-parameter -ggdb3\"\nphpize && ./configure --enable-kafka && make clean && make\n\n\n# -j 5\n#all\n#&& make install"
  },
  {
    "path": "stub/Kafka.class.php",
    "content": "<?php\n\nfinal class Kafka\n{\n    const OFFSET_BEGIN = 'beginning';\n    const OFFSET_END = 'end';\n    const LOG_ON = 1;//default\n    const LOG_OFF = 0;\n    const MODE_CONSUMER = 0;\n    const MODE_PRODUCER = 1;\n    const PARTITION_RANDOM = -1;\n    const COMPRESSION_NONE = 'none';//default\n    const COMPRESSION_GZIP = 'gzip';\n    const COMPRESSION_SNAPPY = 'snappy';\n    const CONFIRM_OFF = 0;\n    const CONFIRM_BASIC = 1;//default\n    const CONFIRM_EXTENDED = 2;//under development\n    //use for $options array keys\n    const RETRY_COUNT = 1;\n    const RETRY_INTERVAL = 2;\n    const CONFIRM_DELIVERY = 4;\n    const QUEUE_BUFFER_SIZE = 8;\n    const COMPRESSION_MODE = 16;\n    const LOGLEVEL = 32;\n\n    /**\n     * This property does not exist, connection status\n     * depends on the mode, which is stored internally\n     * @var bool\n     */\n    private $connected = false;\n\n    /**\n     * This property was removed, instead a partition is stored\n     * internally for the producer and consumer connection\n     * @var int\n     */\n    private $partition = 0;\n\n    /**\n     * Not an actual property, but last active mode is tracked\n     * This value is unreliable at this point in time, so don't rely on it...\n     * @var int\n     */\n    private $lastMode = 0;\n\n    /**\n     * @var string\n     * Internal property to track use of compression when producing messages\n     */\n    private $compression = self::COMPRESSION_NONE;\n\n    /**\n     * @param string $brokers\n     * @param array $options\n     * @throws KafkaException\n     */\n    public function __construct($brokers, array $options = null)\n    {}\n\n    /**\n     * @param int $partition\n     * @param null|int $mode\n     * @deprecated use setPartition instead\n     * @return $this\n     * @throws \\KafkaException\n     */\n    public function set_partition($partition, $mode = null)\n    {\n        if (!is_int($partition) || ($mode !== null)) {\n            throw new 
\\KafkaException('Invalid arguments passed to Kafka::set_partition');\n        }\n        if ($mode && $mode != self::MODE_CONSUMER && $mode != self::MODE_PRODUCER) {\n            throw new \\KafkaException(\n                sprintf(\n                    'Invalid mode passed to %s, use Kafka::MODE_* constants',\n                    __METHOD__\n                )\n            );\n        }\n        if ($partition < self::PARTITION_RANDOM) {\n            throw new \\KafkaException('Invalid partition');\n        }\n        $this->partition = $partition;\n        return $this;\n    }\n\n    /**\n     * @param int $partition\n     * @param null|int $mode\n     * @return $this\n     * @throws \\KafkaException\n     */\n    public function setPartition($partition, $mode = null)\n    {\n        if (!is_int($partition) || ($mode !== null && !is_int($mode))) {\n            throw new \\KafkaException('Invalid arguments passed to Kafka::setPartition');\n        }\n        if ($mode && $mode != self::MODE_CONSUMER && $mode != self::MODE_PRODUCER) {\n            throw new \\KafkaException(\n                sprintf(\n                    'Invalid mode passed to %s, use Kafka::MODE_* constants',\n                    __METHOD__\n                )\n            );\n        }\n        if ($partition < self::PARTITION_RANDOM) {\n            throw new \\KafkaException('Invalid partition');\n        }\n        $this->partition = $partition;\n        return $this;\n    }\n\n    /**\n     * @param array $options\n     * @throws KafkaException on invalid config\n     */\n    public function setOptions(array $options)\n    {}\n\n    /**\n     * Note, this disconnects a previously opened producer connection!\n     * @param string $compression\n     * @return $this\n     * @throws KafkaException\n     */\n    public function setCompression($compression)\n    {\n        if ($compression !== self::COMPRESSION_NONE && $compression !== self::COMPRESSION_GZIP && $compression !== self::COMPRESSION_SNAPPY)\n            throw new 
KafkaException(\n                sprintf('Invalid argument, use %s::COMPRESSION_* constants', __CLASS__)\n            );\n        $this->compression = $compression;\n        return $this;\n    }\n\n    /**\n     * @return string\n     */\n    public function getCompression()\n    {\n        return $this->compression;\n    }\n\n    /**\n     * @param int $mode\n     * @return int\n     * @throws KafkaException\n     */\n    public function getPartition($mode)\n    {\n        if ($mode != self::MODE_CONSUMER && $mode != self::MODE_PRODUCER) {\n            throw new \\KafkaException(\n                sprintf(\n                    'Invalid argument passed to %s, use %s::MODE_* constants',\n                    __METHOD__,\n                    __CLASS__\n                )\n            );\n        }\n        return $this->partition;\n    }\n\n    /**\n     * @param int $level\n     * @return $this\n     * @throws \\KafkaException (invalid argument)\n     */\n    public function setLogLevel($level)\n    {\n        if (!is_int($level)) {\n            throw new KafkaException(\n                sprintf(\n                    '%s expects argument to be an int',\n                    __METHOD__\n                )\n            );\n        }\n        if ($level != self::LOG_ON && $level != self::LOG_OFF) {\n            throw new KafkaException(\n                sprintf(\n                    '%s argument invalid, use %s::LOG_* constants',\n                    __METHOD__,\n                    __CLASS__\n                )\n            );\n        }\n        //level is passed to kafka backend\n        return $this;\n    }\n\n    /**\n     * @param string $brokers\n     * @param array $options = null\n     * @return $this\n     * @throws \\KafkaException\n     */\n    public function setBrokers($brokers, array $options = null)\n    {\n        if (!is_string($brokers)) {\n            throw new \\KafkaException(\n                sprintf(\n                    '%s expects argument to be a 
string',\n                    __METHOD__\n                )\n            );\n        }\n        $this->brokers = $brokers;\n        return $this;\n    }\n\n    /**\n     * @param int|null $mode\n     * @return bool\n     * @throws \\KafkaException\n     */\n    public function isConnected($mode = null)\n    {\n        if ($mode === null) {\n            $mode = $this->lastMode;\n        }\n        if ($mode != self::MODE_CONSUMER && $mode != self::MODE_PRODUCER) {\n            throw new \\KafkaException(\n                sprintf(\n                    'invalid argument passed to %s, use Kafka::MODE_* constants',\n                    __METHOD__\n                )\n            );\n        }\n        //connection pointers determine connected status\n        return $this->connected;\n    }\n\n    /**\n     * Produce a message on the given topic\n     * @param string $topic\n     * @param string $message\n     * @param int $timeout\n     * @return $this\n     * @throws \\KafkaException\n     */\n    public function produce($topic, $message, $timeout = 5000)\n    {\n        $this->connected = true;\n        //internal call, produce message on topic\n        //or throw exception\n        return $this;\n    }\n\n    /**\n     * Produce a batch of messages without the overhead of repeated\n     * PHP method calls (internally, the array is iterated and each\n     * message is produced)\n     * @param string $topic\n     * @param array $messages\n     * @param int $timeout\n     * @return $this\n     * @throws \\KafkaException\n     */\n    public function produceBatch($topic, array $messages, $timeout = 5000)\n    {\n        foreach ($messages as $msg) {\n            //non-string messages are skipped silently ATM\n            if (is_string($msg)) {\n                //internally, the method call overhead is not there\n                $this->produce($topic, $msg);\n            }\n        }\n        return $this;\n    }\n\n    /**\n     * @param string $topic\n     * @param string|int $offset\n     * @param string|int $count\n     * @return 
array\n     */\n    public function consume($topic, $offset = self::OFFSET_BEGIN, $count = self::OFFSET_END)\n    {\n        $this->connected = true;\n        if (!is_numeric($offset)) {\n            //0 or last message (whatever its offset might be)\n            $start = $offset == self::OFFSET_BEGIN ? 0 : 100;\n        } else {\n            $start = $offset;\n        }\n        if (!is_numeric($count)) {\n            //depending on amount of messages in topic\n            $count = 100;\n        }\n        return array_fill_keys(\n            range($start, $start + $count - 1),\n            'the message at offset $key'\n        );\n    }\n\n    /**\n     * Returns an assoc array of topic names\n     * The value is the partition count\n     * @return array\n     */\n    public function getTopics()\n    {\n        return [\n            'topicName' => 1\n        ];\n    }\n\n    /**\n     * Disconnect a specific connection (producer/consumer) or both\n     * @param int|null $mode\n     * @return bool\n     * @throws \\KafkaException\n     */\n    public function disconnect($mode = null)\n    {\n        if ($mode !== null && $mode != self::MODE_PRODUCER && $mode != self::MODE_CONSUMER) {\n            throw new \\KafkaException(\n                sprintf(\n                    'invalid argument passed to %s, use Kafka::MODE_* constants',\n                    __METHOD__\n                )\n            );\n        }\n        $this->connected = false;\n        return true;\n    }\n\n    /**\n     * Returns an array of ints (available partitions for topic)\n     * @param string $topic\n     * @return array\n     */\n    public function getPartitionsForTopic($topic)\n    {\n        return [];\n    }\n\n    /**\n     * Returns an array where keys are partitions and\n     * values are their respective beginning offsets;\n     * if a partition has offset -1, the consume call failed\n     * @param string $topic\n     * @return array\n     * @throws \\KafkaException when meta call failed or no 
partitions available\n     */\n    public function getPartitionOffsets($topic)\n    {\n        return [];\n    }\n\n    public function __destruct()\n    {\n        $this->connected = false;\n    }\n}\n"
  },
  {
    "path": "stub/KafkaException.class.php",
    "content": "<?php\n\nclass KafkaException extends Exception\n{}\n"
  },
  {
    "path": "test.php",
    "content": "<?php\n\nfor ($i = 0; $i < 2; $i++) {\n    $kafka = new \\Kafka(\"localhost:9092\");\n    $kafka->produce(\"test123\", (string) $i);\n}\n"
  },
  {
    "path": "tests/constant_begin.phpt",
    "content": "--TEST--\nBasic test for Kafka::OFFSET_BEGIN constant\n--FILE--\n<?php\nvar_dump(Kafka::OFFSET_BEGIN);\n?>\n--EXPECT--\nstring(9) \"beginning\"\n"
  },
  {
    "path": "tests/constant_end.phpt",
    "content": "--TEST--\nBasic test for Kafka::OFFSET_END constant\n--FILE--\n<?php\nvar_dump(Kafka::OFFSET_END);\n?>\n--EXPECT--\nstring(3) \"end\"\n"
  },
  {
    "path": "tests/constant_mode.phpt",
    "content": "--TEST--\nBasic test for Kafka::MODE_* constants\n--FILE--\n<?php\nvar_dump(Kafka::MODE_PRODUCER);\nvar_dump(Kafka::MODE_CONSUMER);\n?>\n--EXPECT--\nint(1)\nint(0)\n"
  },
  {
    "path": "tests/exceptiontest1.phpt",
    "content": "--TEST--\nTest custom exception class\n--FILE--\n<?php\n$kafka = new Kafka('localhost:9092');\ntry {\n    $kafka->isConnected('InvalidParam');\n} catch (Exception $e) {\n    var_dump(get_class($e));\n}\n?>\n--EXPECT--\nstring(14) \"KafkaException\"\n"
  },
  {
    "path": "travis.sh",
    "content": "#!/usr/bin/env bash\necho \"....fetching librdkafka dependency....\"\nmkdir tmp_build\ncd tmp_build\n## clone fork, we know this version of librdkafka works\ngit clone https://github.com/EVODelavega/librdkafka.git\necho \".....done.....\"\ncd librdkafka\necho \"....compiling librdkafka....\"\n./configure && make\nsudo make install\necho \"....done....\"\ncd ../../\necho \".... ensure librdkafka is available.....\"\nsudo ldconfig\nrm -Rf tmp_build\necho \".... start building extension.....\"\nphpize\n./configure --enable-kafka\nmake\nNO_INTERACTION=1 make test\n"
  }
]