| Commit message | Author | Age | Files | Lines |
python3
Warn users about async producer
Refactor producer failover tests (5x speedup)
Skip async producer failover test for now, because it is broken
retry failed messages
fix consumer retry logic (fixes #135)
Conflicts:
kafka/consumer.py
Fixes a bug in the following scenario:
* Starting buffer size is 1024 and max buffer size is 2048, both set at the instance level
* Fetch from p0 and p1; response received
* p0 has more than 1024 bytes, so the consumer doubles the buffer size to 2048 and marks p0 for retry
* p1 has more than 1024 bytes; the consumer tries to double the buffer size, but sees that it is already at the max and raises ConsumerFetchSizeTooSmall
The fix changes the logic to the following:
* Starting buffer size is 1024, set per partition; max buffer size is 2048, set at the instance level
* Fetch from p0 and p1; response received
* p0 has more than 1024 bytes, so the consumer doubles p0's buffer size to 2048 and marks p0 for retry
* p1 has more than 1024 bytes, so the consumer doubles p1's buffer size to 2048 and marks p1 for retry
* The consumer sees that there are partitions to retry and repeats the parsing loop
* p0 sends all its bytes this time; the consumer yields those messages
* p1 sends all its bytes this time; the consumer yields those messages
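The retry loop described in that fix can be sketched as follows. This is a simplified model, not the actual kafka-python consumer code: `fetch` is a hypothetical callable standing in for the broker fetch request, and `FetchSizeTooSmall` stands in for ConsumerFetchSizeTooSmall.

```python
MAX_BUFFER = 2048  # instance-level ceiling on any partition's buffer


class FetchSizeTooSmall(Exception):
    """Raised when a partition's messages exceed MAX_BUFFER."""


def fetch_all(partitions, fetch, initial=1024, max_buffer=MAX_BUFFER):
    # Each partition tracks its own buffer size, so one oversized
    # partition no longer exhausts the shared limit for the others.
    sizes = {p: initial for p in partitions}
    retry = set(partitions)
    messages = []
    while retry:
        pending = set()
        for p in sorted(retry):
            msgs, truncated = fetch(p, sizes[p])
            if truncated:
                if sizes[p] >= max_buffer:
                    raise FetchSizeTooSmall(p)
                # Double this partition's buffer and retry only it.
                sizes[p] = min(sizes[p] * 2, max_buffer)
                pending.add(p)
            else:
                messages.extend(msgs)
        retry = pending
    return messages
```

With per-partition sizes, both p0 and p1 can independently grow to the max before either one triggers the too-small error.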
Use PyLint for static error checking
only; fixup errors; skip kafka/queue.py
subclass); document
The `Consumer` class fetches the last known offsets in `__init__` if
`auto_commit` is enabled, but it would be nice to expose this behavior
for consumers that aren't using auto_commit. This doesn't change
existing behavior; it just exposes the ability to easily fetch and set
the last known offsets. Once #162 or something similar lands, this may
no longer be necessary, but it looks like that might take a while to
make it through.
Improve KafkaConnection with more tests
try block in conn.recv
return val check in KafkaConnection.send()
close()
connection fails
Better type errors
It will still die, just as before, but it now includes a *helpful* error message
Add KafkaTimeoutError and fix client.ensure_topic_exists
Tags applied to master will now be automatically deployed on PyPI
Handle New Topic Creation
Adds ensure_topic_exists to KafkaClient and redirects the test case to
use it. Fixes #113 and fixes #150.
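The idea behind an ensure_topic_exists helper can be sketched like this. This is a hedged toy model, not the real KafkaClient method: `has_topic` and `load_metadata` are hypothetical stand-ins for the client's metadata calls, and the KafkaTimeoutError class here only mirrors the one the commit above introduces.

```python
import time


class KafkaTimeoutError(Exception):
    pass


def ensure_topic_exists(has_topic, load_metadata, topic, timeout=30.0, poll=0.5):
    # Poll cluster metadata until the topic appears; requesting metadata
    # for an unknown topic can trigger broker-side auto-creation.
    deadline = time.time() + timeout
    while not has_topic(topic):
        if time.time() > deadline:
            raise KafkaTimeoutError("timed out waiting for topic %r" % topic)
        load_metadata(topic)
        time.sleep(poll)
```

The timeout turns an otherwise infinite metadata-polling loop into a bounded wait that fails loudly.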
Add 'codec' parameter to Producer
Add function kafka.protocol.create_message_set() that takes a list of
payloads and a codec and returns a message set with the desired encoding.
Introduce kafka.common.UnsupportedCodecError, raised if an unknown codec
is specified.
Include a test for the new function.
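The codec dispatch described above can be sketched as follows. This is a simplified stand-in, not the real kafka.protocol.create_message_set, which builds wire-format message sets; this toy version only tags payloads with the chosen codec (and handles gzip via zlib, omitting snappy), but it shows the unknown-codec error path.

```python
import zlib

# Codec constants are illustrative, not the library's wire values.
CODEC_NONE, CODEC_GZIP = 0, 1


class UnsupportedCodecError(Exception):
    pass


def create_message_set(payloads, codec=CODEC_NONE):
    if codec == CODEC_NONE:
        # One uncompressed message per payload.
        return [("plain", p) for p in payloads]
    if codec == CODEC_GZIP:
        # Compress the concatenated payloads into a single wrapper message.
        return [("gzip", zlib.compress(b"".join(payloads)))]
    raise UnsupportedCodecError("codec %r not supported" % codec)
```

Raising on an unknown codec keeps a typo in the codec argument from silently producing uncompressed output.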
Conflicts:
servers/0.8.0/kafka-src
test/test_unit.py
Adds a codec parameter to Producer.__init__ that lets the user choose
a compression codec to use for all messages sent by it.
kafka/client.py contained duplicate copies of same refactor, merged.
Move test/test_integration.py changes into test/test_producer_integration.
Conflicts:
kafka/client.py
servers/0.8.0/kafka-src
test/test_integration.py
SimpleProducer randomization of initial round robin ordering
of the initial partition messages are published to
the sorted list of partitions rather than completely randomizing the initial ordering before round-robin cycling the partitions
of partitions to prevent the first message from always being published
to partition 0.
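The randomized starting point described above can be sketched like this. A hedged illustration, not SimpleProducer's actual code: it rotates the sorted partition list by a random offset before cycling, so round-robin order is preserved but the first message is not always sent to partition 0.

```python
import random
from itertools import cycle


def partition_cycle(partitions, rng=random):
    # Keep the deterministic sorted round-robin order...
    ordered = sorted(partitions)
    # ...but start the cycle at a random position in that order.
    start = rng.randrange(len(ordered))
    return cycle(ordered[start:] + ordered[:start])
```

Rotating the sorted list (rather than shuffling it) keeps the relative partition order stable across producers while spreading out which partition receives each producer's first message.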
in memory logging. Address code review concerns
Bump version number to 0.9.1
Update readme to show supported Kafka/Python versions
Validate arguments in consumer.py, add initial consumer unit test
Make service kill() child processes when startup fails
Add tests for util.py, fix Python 2.6 specific bug.
integration tests, make skipped integration also skip setupClass, implement rudimentary offset support in consumer.py