diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..5c9cbec --- /dev/null +++ b/.gitignore @@ -0,0 +1,44 @@ +*.py[co] +*.swp +*.swo +*.so +*.egg +*.egg-info +*.attr +.tox +.python-version +build +MANIFEST +dist +.coverage +nosetests.xml +cover/ +docs/_build/ +tests/integration/ccm +setuptools*.tar.gz +setuptools*.egg + +cassandra/*.c +!cassandra/cmurmur3.c +cassandra/*.html +tests/unit/cython/bytesio_testhelper.c + +# OSX +.DS_Store + +# IDE +.project +.pydevproject +.settings/ +.idea/ +*.iml + +.DS_Store + +# Unit test / coverage reports +.coverage +.tox + +#iPython +*.ipynb + diff --git a/.travis.yml b/.travis.yml new file mode 100644 index 0000000..f1fff4b --- /dev/null +++ b/.travis.yml @@ -0,0 +1,32 @@ +dist: xenial +sudo: false + +language: python +python: + - "2.7" + - "3.5" + - "3.6" + - "3.7" + - "pypy2.7-6.0" + - "pypy3.5" + +env: + - CASS_DRIVER_NO_CYTHON=1 + +addons: + apt: + packages: + - build-essential + - python-dev + - pypy-dev + - libc-ares-dev + - libev4 + - libev-dev + +install: + - pip install tox-travis lz4 + +script: + - tox + - tox -e gevent_loop + - tox -e eventlet_loop diff --git a/CHANGELOG.rst b/CHANGELOG.rst new file mode 100644 index 0000000..0ac2aeb --- /dev/null +++ b/CHANGELOG.rst @@ -0,0 +1,1494 @@ +3.20.2 +====== +November 19, 2019 + +Bug Fixes +--------- +* Fix import error for old python installation without SSLContext (PYTHON-1183) + +3.20.1 +====== +November 6, 2019 + +Bug Fixes +--------- +* ValueError: too many values to unpack (expected 2)" when there are two dashes in server version number (PYTHON-1172) + +3.20.0 +====== +October 28, 2019 + +Features +-------- +* DataStax Apollo Support (PYTHON-1074) +* Use 4.0 schema parser in 4 alpha and snapshot builds (PYTHON-1158) + +Bug Fixes +--------- +* Connection setup methods prevent using ExecutionProfile in cqlengine (PYTHON-1009) +* Driver deadlock if all connections dropped by heartbeat whilst request in flight and request times out (PYTHON-1044) +* 
Exception when using pk__token__gt filter in Python 3.7 (PYTHON-1121) +3.19.0 +====== +August 26, 2019 + +Features +-------- +* Add Python 3.7 support (PYTHON-1016) +* Future-proof Mapping imports (PYTHON-1023) +* Include param values in cqlengine logging (PYTHON-1105) +* NTS Token Replica Map Generation is slow (PYTHON-622) + +Bug Fixes +--------- +* as_cql_query UDF/UDA parameters incorrectly include "frozen" if arguments are collections (PYTHON-1031) +* cqlengine does not currently support combining TTL and TIMESTAMP on INSERT (PYTHON-1093) +* Fix incorrect metadata for compact counter tables (PYTHON-1100) +* Call ConnectionException with correct kwargs (PYTHON-1117) +* Can't connect to clusters built from source because version parsing doesn't handle 'x.y-SNAPSHOT' (PYTHON-1118) +* Discovered node doesn't honor the configured Cluster port on connection (PYTHON-1127) + +Other +----- +* Remove invalid warning in set_session when we initialize a default connection (PYTHON-1104) +* Set the proper default ExecutionProfile.row_factory value (PYTHON-1119) + +3.18.0 +====== +May 27, 2019 + +Features +-------- + +* Abstract Host Connection information (PYTHON-1079) +* Improve version parsing to support a non-integer 4th component (PYTHON-1091) +* Expose on_request_error method in the RetryPolicy (PYTHON-1064) +* Add jitter to ExponentialReconnectionPolicy (PYTHON-1065) + +Bug Fixes +--------- + +* Fix error when preparing queries with beta protocol v5 (PYTHON-1081) +* Accept legacy empty strings as column names (PYTHON-1082) +* Let util.SortedSet handle uncomparable elements (PYTHON-1087) + +3.17.1 +====== +May 2, 2019 + +Bug Fixes +--------- +* Socket errors EAGAIN/EWOULDBLOCK are not handled properly and cause timeouts (PYTHON-1089) + +3.17.0 +====== +February 19, 2019 + +Features +-------- +* Send driver name and version in startup message (PYTHON-1068) +* Add Cluster ssl_context option to enable SSL (PYTHON-995) +* Allow encrypted private keys for 2-way SSL cluster 
connections (PYTHON-995) +* Introduce new method ConsistencyLevel.is_serial (PYTHON-1067) +* Add Session.get_execution_profile (PYTHON-932) +* Add host kwarg to Session.execute/execute_async APIs to send a query to a specific node (PYTHON-993) + +Bug Fixes +--------- +* NoHostAvailable when all hosts are up and connectable (PYTHON-891) +* Serial consistency level is not used (PYTHON-1007) + +Other +----- +* Fail faster on incorrect lz4 import (PYTHON-1042) +* Bump Cython dependency version to 0.29 (PYTHON-1036) +* Expand Driver SSL Documentation (PYTHON-740) + +Deprecations +------------ + +* Using Cluster.ssl_options to enable SSL is deprecated and will be removed in + the next major release, use ssl_context. +* DowngradingConsistencyRetryPolicy is deprecated and will be + removed in the next major release. (PYTHON-937) + +3.16.0 +====== +November 12, 2018 + +Bug Fixes +--------- +* Improve and fix socket error-catching code in nonblocking-socket reactors (PYTHON-1024) +* Non-ASCII characters in schema break CQL string generation (PYTHON-1008) +* Fix OSS driver's virtual table support against DSE 6.0.X and future server releases (PYTHON-1020) +* ResultSet.one() fails if the row_factory is using a generator (PYTHON-1026) +* Log profile name on attempt to create existing profile (PYTHON-944) +* Cluster instantiation fails if any contact points' hostname resolution fails (PYTHON-895) + +Other +----- +* Fix tests when RF is not maintained if we decommission a node (PYTHON-1017) +* Fix wrong use of ResultSet indexing (PYTHON-1015) + +3.15.1 +====== +September 6, 2018 + +Bug Fixes +--------- +* C* 4.0 schema-parsing logic breaks running against DSE 6.0.X (PYTHON-1018) + +3.15.0 +====== +August 30, 2018 + +Features +-------- +* Parse Virtual Keyspace Metadata (PYTHON-992) + +Bug Fixes +--------- +* Tokenmap.get_replicas returns the wrong value if token coincides with the end of the range (PYTHON-978) +* Python Driver fails with "more than 255 arguments" python exception 
when > 255 columns specified in query response (PYTHON-893) +* Hang in integration.standard.test_cluster.ClusterTests.test_set_keyspace_twice (PYTHON-998) +* Asyncore reactors should use a global variable instead of a class variable for the event loop (PYTHON-697) + +Other +----- +* Use global variable for libev loops so it can be subclassed (PYTHON-973) +* Update SchemaParser for V4 (PYTHON-1006) +* Bump Cython dependency version to 0.28 (PYTHON-1012) + +3.14.0 +====== +April 17, 2018 + +Features +-------- +* Add one() function to the ResultSet API (PYTHON-947) +* Create a utility function to fetch many keys concurrently from the same replica (PYTHON-647) +* Allow filter queries with fields that have an index managed outside of cqlengine (PYTHON-966) +* Twisted SSL Support (PYTHON-343) +* Support IS NOT NULL operator in cqlengine (PYTHON-968) + +Other +----- +* Fix Broken Links in Docs (PYTHON-916) +* Reevaluate MONKEY_PATCH_LOOP in test codebase (PYTHON-903) +* Remove CASS_SERVER_VERSION and replace it with CASSANDRA_VERSION in tests (PYTHON-910) +* Refactor CASSANDRA_VERSION to some kind of version object (PYTHON-915) +* Log warning when driver configures an authenticator, but server does not request authentication (PYTHON-940) +* Warn users when using the deprecated Session.default_consistency_level (PYTHON-953) +* Add DSE smoke test to OSS driver tests (PYTHON-894) +* Document long compilation times and workarounds (PYTHON-868) +* Improve error for batch WriteTimeouts (PYTHON-941) +* Deprecate ResultSet indexing (PYTHON-945) + +3.13.0 +====== +January 30, 2018 + +Features +-------- +* cqlengine: LIKE filter operator (PYTHON-512) +* Support cassandra.query.BatchType with cqlengine BatchQuery (PYTHON-888) + +Bug Fixes +--------- +* AttributeError: 'NoneType' object has no attribute 'add_timer' (PYTHON-862) +* Support retry_policy in PreparedStatement (PYTHON-861) +* __del__ method in Session is throwing an exception (PYTHON-813) +* LZ4 import issue with recent 
versions (PYTHON-897) +* ResponseFuture._connection can be None when returning request_id (PYTHON-853) +* ResultSet.was_applied doesn't support batch with LWT statements (PYTHON-848) + +Other +----- +* cqlengine: avoid warning when unregistering connection on shutdown (PYTHON-865) +* Fix DeprecationWarning of log.warn (PYTHON-846) +* Fix example_mapper.py for python3 (PYTHON-860) +* Possible deadlock on cassandra.concurrent.execute_concurrent (PYTHON-768) +* Add some known deprecated warnings for 4.x (PYTHON-877) +* Remove copyright dates from copyright notices (PYTHON-863) +* Remove "Experimental" tag from execution profiles documentation (PYTHON-840) +* request_timer metrics descriptions are slightly incorrect (PYTHON-885) +* Remove "Experimental" tag from cqlengine connections documentation (PYTHON-892) +* Document that default consistency for operations is LOCAL_ONE (PYTHON-901) + +3.12.0 +====== +November 6, 2017 + +Features +-------- +* Send keyspace in QUERY, PREPARE, and BATCH messages (PYTHON-678) +* Add IPv4Address/IPv6Address support for inet types (PYTHON-751) +* WriteType.CDC and VIEW missing (PYTHON-794) +* Warn on Cluster init if contact points are specified but LBP isn't (legacy mode) (PYTHON-812) +* Warn on Cluster init if contact points are specified but LBP isn't (execution profile mode) (PYTHON-838) +* Include hash of result set metadata in prepared stmt id (PYTHON-808) +* Add NO_COMPACT startup option (PYTHON-839) +* Add new exception type for CDC (PYTHON-837) +* Allow 0ms in ConstantSpeculativeExecutionPolicy (PYTHON-836) +* Add asyncio reactor (PYTHON-507) + +Bug Fixes +--------- +* Both _set_final_exception/result called for the same ResponseFuture (PYTHON-630) +* Use of DCAwareRoundRobinPolicy raises NoHostAvailable exception (PYTHON-781) +* Don't create two sessions by default in CQLEngine (PYTHON-814) +* Bug when subclassing AsyncoreConnection (PYTHON-827) +* Error at cleanup when closing the asyncore connections (PYTHON-829) +* Fix 
sites where `sessions` can change during iteration (PYTHON-793) +* cqlengine: allow min_length=0 for Ascii and Text column types (PYTHON-735) +* Rare exception when "sys.exit(0)" after query timeouts (PYTHON-752) +* Don't set the session keyspace when preparing statements (PYTHON-843) + +Other +----- +* Remove DeprecationWarning when using WhiteListRoundRobinPolicy (PYTHON-810) +* Bump Cython dependency version to 0.27 (PYTHON-833) + +3.11.0 +====== +July 24, 2017 + + +Features +-------- +* Add idle_heartbeat_timeout cluster option to tune how long to wait for heartbeat responses. (PYTHON-762) +* Add HostFilterPolicy (PYTHON-761) + +Bug Fixes +--------- +* is_idempotent flag is not propagated from PreparedStatement to BoundStatement (PYTHON-736) +* Fix asyncore hang on exit (PYTHON-767) +* Driver takes several minutes to remove a bad host from session (PYTHON-762) +* Installation doesn't always fall back to no cython on Windows (PYTHON-763) +* Avoid replacing a connection that is supposed to shut down (PYTHON-772) +* request_ids may not be returned to the pool (PYTHON-739) +* Fix murmur3 on big-endian systems (PYTHON-653) +* Ensure unused connections are closed if a Session is deleted by the GC (PYTHON-774) +* Fix .values_list by using db names internally (cqlengine) (PYTHON-785) + + +Other +----- +* Bump Cython dependency version to 0.25.2 (PYTHON-754) +* Fix DeprecationWarning when using lz4 (PYTHON-769) +* Deprecate WhiteListRoundRobinPolicy (PYTHON-759) +* Improve upgrade guide for materializing pages (PYTHON-464) +* Documentation for time/date specifies timestamp input as microseconds (PYTHON-717) +* Point to DSA Slack, not IRC, in docs index + +3.10.0 +====== +May 24, 2017 + +Features +-------- +* Add Duration type to cqlengine (PYTHON-750) +* Community PR review: Raise error on primary key update only if its value changed (PYTHON-705) +* get_query_trace() contract is ambiguous 
(PYTHON-196) + +Bug Fixes +--------- +* Queries using speculative execution policy timeout prematurely (PYTHON-755) +* Fix `map` where results are not consumed (PYTHON-749) +* Driver fails to encode Durations with large values (PYTHON-747) +* UDT values are not updated correctly in CQLEngine (PYTHON-743) +* UDT types are not validated in CQLEngine (PYTHON-742) +* to_python is not implemented for types columns.Type and columns.Date in CQLEngine (PYTHON-741) +* Clients spin infinitely trying to connect to a host that is drained (PYTHON-734) +* ResultSet.get_query_trace returns empty trace sometimes (PYTHON-730) +* Memory grows and doesn't get reclaimed (PYTHON-720) +* Fix RuntimeError caused by changing dict size during iteration (PYTHON-708) +* Fix ExponentialReconnectionPolicy may throw OverflowError problem (PYTHON-707) +* Avoid using nonexistent prepared statement in ResponseFuture (PYTHON-706) + +Other +----- +* Update README (PYTHON-746) +* Test python versions 3.5 and 3.6 (PYTHON-737) +* Docs Warning About Prepare "select *" (PYTHON-626) +* Increase Coverage in CqlEngine Test Suite (PYTHON-505) +* Example SSL connection code does not verify server certificates (PYTHON-469) + +3.9.0 +===== + +Features +-------- +* cqlengine: remove elements by key from a map (PYTHON-688) + +Bug Fixes +--------- +* Improve error handling when connecting to non-existent keyspace (PYTHON-665) +* Sockets associated with sessions not getting cleaned up on session.shutdown() (PYTHON-673) +* Rare flake on integration.standard.test_cluster.ClusterTests.test_clone_shared_lbp (PYTHON-727) +* MonotonicTimestampGenerator.__init__ ignores class defaults (PYTHON-728) +* Race where callback or errback for request may not be called (PYTHON-733) +* cqlengine: model.update() should not update columns with a default value that hasn't changed (PYTHON-657) +* cqlengine: field value manager's explicit flag is True when queried back from cassandra (PYTHON-719) + +Other +----- +* Connection not closed in 
example_mapper (PYTHON-723) +* Remove mention of pre-2.0 C* versions from OSS 3.0+ docs (PYTHON-710) + +3.8.1 +===== +March 16, 2017 + +Bug Fixes +--------- + +* Implement __le__/__ge__/__ne__ on some custom types (PYTHON-714) +* Fix bug in eventlet and gevent reactors that could cause hangs (PYTHON-721) +* Fix DecimalType regression (PYTHON-724) + +3.8.0 +===== + +Features +-------- + +* Quote index names in metadata CQL generation (PYTHON-616) +* On column deserialization failure, keep error message consistent between python and cython (PYTHON-631) +* TokenAwarePolicy always sends requests to the same replica for a given key (PYTHON-643) +* Added cql types to result set (PYTHON-648) +* Add __len__ to BatchStatement (PYTHON-650) +* Duration Type for Cassandra (PYTHON-655) +* Send flags with PREPARE message in v5 (PYTHON-684) + +Bug Fixes +--------- + +* Potential Timing issue if application exits prior to session pool initialization (PYTHON-636) +* "Host X.X.X.X has been marked down" without any exceptions (PYTHON-640) +* NoHostAvailable or OperationTimedOut when using execute_concurrent with a generator that inserts into more than one table (PYTHON-642) +* ResponseFuture creates Timers and doesn't cancel them even when the result is received, which leads to memory leaks (PYTHON-644) +* Driver cannot connect to Cassandra version > 3 (PYTHON-646) +* Unable to import model using UserType without setting up a connection since 3.7 (PYTHON-649) +* Don't prepare queries on ignored hosts on_up (PYTHON-669) +* Sockets associated with sessions not getting cleaned up on session.shutdown() (PYTHON-673) +* Make client timestamps strictly monotonic (PYTHON-676) +* cassandra.cqlengine.connection.register_connection broken when hosts=None (PYTHON-692) + +Other +----- + +* Create a cqlengine doc section explaining None semantics (PYTHON-623) +* Resolve warnings in documentation generation (PYTHON-645) +* Cython dependency (PYTHON-686) +* Drop Support for Python 2.6 (PYTHON-690) + +3.7.1 
+===== +October 26, 2016 + +Bug Fixes +--------- +* Cython upgrade has broken stable version of cassandra-driver (PYTHON-656) + +3.7.0 +===== +September 13, 2016 + +Features +-------- +* Add v5 protocol failure map (PYTHON-619) +* Don't return from initial connect on first error (PYTHON-617) +* Indicate failed column when deserialization fails (PYTHON-361) +* Let Cluster.refresh_nodes force a token map rebuild (PYTHON-349) +* Refresh UDTs after "keyspace updated" event with v1/v2 protocol (PYTHON-106) +* EC2 Address Resolver (PYTHON-198) +* Speculative query retries (PYTHON-218) +* Expose paging state in API (PYTHON-200) +* Don't mark host down while one connection is active (PYTHON-498) +* Query request size information (PYTHON-284) +* Avoid quadratic ring processing with invalid replication factors (PYTHON-379) +* Improve Connection/Pool creation concurrency on startup (PYTHON-82) +* Add beta version native protocol flag (PYTHON-614) +* cqlengine: Connections: support of multiple keyspaces and sessions (PYTHON-613) + +Bug Fixes +--------- +* Race when adding a pool while setting keyspace (PYTHON-628) +* Update results_metadata when prepared statement is reprepared (PYTHON-621) +* CQL Export for Thrift Tables (PYTHON-213) +* cqlengine: default value not applied to UserDefinedType (PYTHON-606) +* cqlengine: columns are no longer hashable (PYTHON-618) +* cqlengine: remove clustering keys from where clause when deleting only static columns (PYTHON-608) + +3.6.0 +===== +August 1, 2016 + +Features +-------- +* Handle null values in NumpyProtocolHandler (PYTHON-553) +* Collect greplin scales stats per cluster (PYTHON-561) +* Update mock unit test dependency requirement (PYTHON-591) +* Handle Missing CompositeType metadata following C* upgrade (PYTHON-562) +* Improve Host.is_up state for HostDistance.IGNORED hosts (PYTHON-551) +* Utilize v2 protocol's ability to skip result set metadata for prepared statement execution (PYTHON-71) +* Return from Cluster.connect() when 
first contact point connection(pool) is opened (PYTHON-105) +* cqlengine: Add ContextQuery to allow cqlengine models to switch the keyspace context easily (PYTHON-598) +* Standardize Validation between Ascii and Text types in Cqlengine (PYTHON-609) + +Bug Fixes +--------- +* Fix geventreactor with SSL support (PYTHON-600) +* Don't downgrade protocol version if explicitly set (PYTHON-537) +* Nonexistent contact point tries to connect indefinitely (PYTHON-549) +* Execute_concurrent can exceed max recursion depth in failure mode (PYTHON-585) +* Libev loop shutdown race (PYTHON-578) +* Include aliases in DCT type string (PYTHON-579) +* cqlengine: Comparison operators for Columns (PYTHON-595) +* cqlengine: disentangle default_time_to_live table option from model query default TTL (PYTHON-538) +* cqlengine: pk__token column name issue with the equality operator (PYTHON-584) +* cqlengine: Fix "__in" filtering operator converting True to string "True" automatically (PYTHON-596) +* cqlengine: Avoid LWTExceptions when updating columns that are part of the condition (PYTHON-580) +* cqlengine: Cannot execute a query when the filter contains all columns (PYTHON-599) +* cqlengine: routing key computation issue when a primary key column is overridden by model inheritance (PYTHON-576) + +3.5.0 +===== +June 27, 2016 + +Features +-------- +* Optional Execution Profiles for the core driver (PYTHON-569) +* API to get the host metadata associated with the control connection node (PYTHON-583) +* Expose CDC option in table metadata CQL (PYTHON-593) + +Bug Fixes +--------- +* Clean up Asyncore socket map when fork is detected (PYTHON-577) +* cqlengine: QuerySet only() is not respected when there are deferred fields (PYTHON-560) + +3.4.1 +===== +May 26, 2016 + +Bug Fixes +--------- +* Gevent connection closes on IO timeout (PYTHON-573) +* "dictionary changed size during iteration" with Python 3 (PYTHON-572) + +3.4.0 +===== +May 24, 2016 + +Features +-------- +* Include DSE version and 
workload in Host data (PYTHON-555) +* Add a context manager to Cluster and Session (PYTHON-521) +* Better Error Message for Unsupported Protocol Version (PYTHON-157) +* Make the error message explicitly state when an error comes from the server (PYTHON-412) +* Short Circuit meta refresh on topo change if NEW_NODE already exists (PYTHON-557) +* Show warning when the wrong config is passed to SimpleStatement (PYTHON-219) +* Return namedtuple result pairs from execute_concurrent (PYTHON-362) +* BatchStatement should enforce batch size limit in a better way (PYTHON-151) +* Validate min/max request thresholds for connection pool scaling (PYTHON-220) +* Handle or warn about multiple hosts with the same rpc_address (PYTHON-365) +* Write docs around working with datetime and timezones (PYTHON-394) + +Bug Fixes +--------- +* High CPU utilization when using asyncore event loop (PYTHON-239) +* Fix CQL Export for non-ASCII Identifiers (PYTHON-447) +* Make stress scripts Python 2.6 compatible (PYTHON-434) +* UnicodeDecodeError when unicode characters in key in BOP (PYTHON-559) +* WhiteListRoundRobinPolicy should resolve hosts (PYTHON-565) +* Cluster and Session do not GC after leaving scope (PYTHON-135) +* Don't wait for schema agreement on ignored nodes (PYTHON-531) +* Reprepare on_up with many clients causes node overload (PYTHON-556) +* None inserted into host map when control connection node is decommissioned (PYTHON-548) +* weakref.ref does not accept keyword arguments (github #585) + +3.3.0 +===== +May 2, 2016 + +Features +-------- +* Add an AddressTranslator interface (PYTHON-69) +* New Retry Policy Decision - try next host (PYTHON-285) +* Don't mark host down on timeout (PYTHON-286) +* SSL hostname verification (PYTHON-296) +* Add C* version to metadata or cluster objects (PYTHON-301) +* Options to Disable Schema, Token Metadata Processing (PYTHON-327) +* Expose listen_address of node we get ring information from (PYTHON-332) +* Use A-record with multiple IPs for 
contact points (PYTHON-415) +* Custom consistency level for populating query traces (PYTHON-435) +* Normalize Server Exception Types (PYTHON-443) +* Propagate exception message when DDL schema agreement fails (PYTHON-444) +* Specialized exceptions for metadata refresh methods failure (PYTHON-527) + +Bug Fixes +--------- +* Resolve contact point hostnames to avoid duplicate hosts (PYTHON-103) +* GeventConnection stalls requests when read is a multiple of the input buffer size (PYTHON-429) +* named_tuple_factory breaks with duplicate "cleaned" col names (PYTHON-467) +* Connection leak if Cluster.shutdown() happens during reconnection (PYTHON-482) +* HostConnection.borrow_connection does not block when all request ids are used (PYTHON-514) +* Empty field not being handled by the NumpyProtocolHandler (PYTHON-550) + +3.2.2 +===== +April 19, 2016 + +* Fix counter save-after-no-update (PYTHON-547) + +3.2.1 +===== +April 13, 2016 + +* Introduced an update to allow deserializer compilation with recently released Cython 0.24 (PYTHON-542) + +3.2.0 +===== +April 12, 2016 + +Features +-------- +* cqlengine: Warn on sync_schema type mismatch (PYTHON-260) +* cqlengine: Automatically defer fields with the '=' operator (and immutable values) in select queries (PYTHON-520) +* cqlengine: support non-equal conditions for LWT (PYTHON-528) +* cqlengine: sync_table should validate the primary key composition (PYTHON-532) +* cqlengine: token-aware routing for mapper statements (PYTHON-535) + +Bug Fixes +--------- +* Deleting a column in a lightweight transaction raises a SyntaxException #325 (PYTHON-249) +* cqlengine: make Token function works with named tables/columns #86 (PYTHON-272) +* comparing models with datetime fields fail #79 (PYTHON-273) +* cython date deserializer integer math should be aligned with CPython (PYTHON-480) +* db_field is not always respected with UpdateStatement (PYTHON-530) +* Sync_table fails on column.Set with secondary index (PYTHON-533) + +3.1.1 +===== +March 
14, 2016 + +Bug Fixes +--------- +* cqlengine: Fix performance issue related to additional "COUNT" queries (PYTHON-522) + +3.1.0 +===== +March 10, 2016 + +Features +-------- +* Pass name of server auth class to AuthProvider (PYTHON-454) +* Surface schema agreed flag for DDL statements (PYTHON-458) +* Automatically convert float and int to Decimal on serialization (PYTHON-468) +* Eventlet Reactor IO improvement (PYTHON-495) +* Make pure Python ProtocolHandler available even when Cython is present (PYTHON-501) +* Optional Cython deserializer for bytes as bytearray (PYTHON-503) +* Add Session.default_serial_consistency_level (github #510) +* cqlengine: Expose prior state information via cqlengine LWTException (github #343, PYTHON-336) +* cqlengine: Collection datatype "contains" operators support (Cassandra 2.1) #278 (PYTHON-258) +* cqlengine: Add DISTINCT query operator (PYTHON-266) +* cqlengine: Tuple cqlengine api (PYTHON-306) +* cqlengine: Add support for UPDATE/DELETE ... IF EXISTS statements (PYTHON-432) +* cqlengine: Allow nested container types (PYTHON-478) +* cqlengine: Add ability to set query's fetch_size and limit (PYTHON-323) +* cqlengine: Internalize default keyspace from successive set_session (PYTHON-486) +* cqlengine: Warn when Model.create() on Counters (to be deprecated) (PYTHON-333) + +Bug Fixes +--------- +* Bus error (alignment issues) when running cython on some ARM platforms (PYTHON-450) +* Overflow when decoding large collections (cython) (PYTHON-459) +* Timer heap comparison issue with Python 3 (github #466) +* Cython deserializer date overflow at 2^31 - 1 (PYTHON-452) +* Decode error encountered when cython deserializing large map results (PYTHON-459) +* Don't require Cython for build if compiler or Python header not present (PYTHON-471) +* Unorderable types in task scheduling with Python 3 (PYTHON-473) +* cqlengine: Fix crash when updating a UDT column with a None value (github #467) +* cqlengine: Race condition in ..connection.execute 
with lazy_connect (PYTHON-310) +* cqlengine: doesn't support case sensitive column family names (PYTHON-337) +* cqlengine: UserDefinedType mandatory in create or update (PYTHON-344) +* cqlengine: db_field breaks UserType (PYTHON-346) +* cqlengine: UDT badly quoted (PYTHON-347) +* cqlengine: Use of db_field on primary key prevents querying except while tracing. (PYTHON-351) +* cqlengine: DateType.deserialize being called with one argument vs two (PYTHON-354) +* cqlengine: Querying without setting up connection now throws AttributeError and not CQLEngineException (PYTHON-395) +* cqlengine: BatchQuery executes statements multiple times. (PYTHON-445) +* cqlengine: Better error for management functions when no connection set (PYTHON-451) +* cqlengine: Handle None values for UDT attributes in cqlengine (PYTHON-470) +* cqlengine: Fix inserting None for model save (PYTHON-475) +* cqlengine: EQ doesn't map to a QueryOperator (setup race condition) (PYTHON-476) +* cqlengine: class.MultipleObjectsReturned has DoesNotExist as base class (PYTHON-489) +* cqlengine: Typo in cqlengine UserType __len__ breaks attribute assignment (PYTHON-502) + + +Other +----- + +* cqlengine: a major queryset improvement has been introduced. It + is now a lot more efficient to iterate large datasets: the rows are + fetched on demand using the driver pagination. + +* cqlengine: the queryset len() and count() behaviors have changed. They + now execute a "SELECT COUNT(*)" of the query rather than returning + the size of the internal result_cache (loaded rows). On large + querysets, you might want to avoid using them due to the performance + cost. Note that trying to access objects using list index/slicing + with negative indices also requires a count to be + executed. 
+ + + +3.0.0 +===== +November 24, 2015 + +Features +-------- +* Support datetime.date objects as a DateType (PYTHON-212) +* Add Cluster.update_view_metadata (PYTHON-407) +* QueryTrace option to populate partial trace sessions (PYTHON-438) +* Attach column names to ResultSet (PYTHON-439) +* Change default consistency level to LOCAL_ONE + +Bug Fixes +--------- +* Properly SerDes nested collections when protocol_version < 3 (PYTHON-215) +* Evict UDTs from UserType cache on change (PYTHON-226) +* Make sure query strings are always encoded UTF-8 (PYTHON-334) +* Track previous value of columns at instantiation in CQLengine (PYTHON-348) +* UDT CQL encoding does not work for unicode values (PYTHON-353) +* NetworkTopologyStrategy#make_token_replica_map does not account for multiple racks in a DC (PYTHON-378) +* Cython integer overflow on decimal type deserialization (PYTHON-433) +* Query trace: if session hasn't been logged, query trace can throw exception (PYTHON-442) + +3.0.0rc1 +======== +November 9, 2015 + +Features +-------- +* Process Modernized Schema Tables for Cassandra 3.0 (PYTHON-276, PYTHON-408, PYTHON-400, PYTHON-422) +* Remove deprecated features (PYTHON-292) +* Don't assign trace data to Statements (PYTHON-318) +* Normalize results return (PYTHON-368) +* Process Materialized View Metadata/Events (PYTHON-371) +* Remove blist as soft dependency (PYTHON-385) +* Change default consistency level to LOCAL_QUORUM (PYTHON-416) +* Normalize CQL query/export in metadata model (PYTHON-405) + +Bug Fixes +--------- +* Implementation of named arguments bind is non-pythonic (PYTHON-178) +* CQL encoding is incorrect for NaN and Infinity floats (PYTHON-282) +* Protocol downgrade issue with C* 2.0.x, 2.1.x, and python3, with non-default logging (PYTHON-409) +* ValueError when accessing usertype with non-alphanumeric field names (PYTHON-413) +* NumpyProtocolHandler does not play well with PagedResult (PYTHON-430) + +2.7.2 +===== +September 14, 2015 + +Bug Fixes +--------- +* 
Resolve CQL export error for UDF with zero parameters (PYTHON-392) +* Remove futures dep. for Python 3 (PYTHON-393) +* Avoid Python closure in cdef (supports earlier Cython compiler) (PYTHON-396) +* Unit test runtime issues (PYTHON-397,398) + +2.7.1 +===== +August 25, 2015 + +Bug Fixes +--------- +* Explicitly include extension source files in Manifest + +2.7.0 +===== +August 25, 2015 + +Cython is introduced, providing compiled extensions for core modules, and +extensions for optimized results deserialization. + +Features +-------- +* General Performance Improvements for Throughput (PYTHON-283) +* Improve synchronous request performance with Timers (PYTHON-108) +* Enable C Extensions for PyPy Runtime (PYTHON-357) +* Refactor SerDes functionality for pluggable interface (PYTHON-313) +* Cython SerDes Extension (PYTHON-377) +* Accept iterators/generators for execute_concurrent() (PYTHON-123) +* cythonize existing modules (PYTHON-342) +* Pure Python murmur3 implementation (PYTHON-363) +* Make driver tolerant of inconsistent metadata (PYTHON-370) + +Bug Fixes +--------- +* Drop Events out-of-order Cause KeyError on Processing (PYTHON-358) +* DowngradingConsistencyRetryPolicy doesn't check response count on write timeouts (PYTHON-338) +* Blocking connect does not use connect_timeout (PYTHON-381) +* Properly protect partition key in CQL export (PYTHON-375) +* Trigger error callbacks on timeout (PYTHON-294) + +2.6.0 +===== +July 20, 2015 + +Bug Fixes +--------- +* Output proper CQL for compact tables with no clustering columns (PYTHON-360) + +2.6.0c2 +======= +June 24, 2015 + +Features +-------- +* Automatic Protocol Version Downgrade (PYTHON-240) +* cqlengine Python 2.6 compatibility (PYTHON-288) +* Double-dollar string quote UDF body (PYTHON-345) +* Set models.DEFAULT_KEYSPACE when calling set_session (github #352) + +Bug Fixes +--------- +* Avoid stall while connecting to mixed version cluster (PYTHON-303) +* Make SSL work with AsyncoreConnection in python 2.6.9 
(PYTHON-322) +* Fix Murmur3Token.from_key() on Windows (PYTHON-331) +* Fix cqlengine TimeUUID rounding error for Windows (PYTHON-341) +* Avoid invalid compaction options in CQL export for non-SizeTiered (PYTHON-352) + +2.6.0c1 +======= +June 4, 2015 + +This release adds support for Cassandra 2.2 features, including version +4 of the native protocol. + +Features +-------- +* Default load balancing policy to TokenAware(DCAware) (PYTHON-160) +* Configuration option for connection timeout (PYTHON-206) +* Support User Defined Function and Aggregate metadata in C* 2.2 (PYTHON-211) +* Surface request client in QueryTrace for C* 2.2+ (PYTHON-235) +* Implement new request failure messages in protocol v4+ (PYTHON-238) +* Metadata model now maps index meta by index name (PYTHON-241) +* Support new types in C* 2.2: date, time, smallint, tinyint (PYTHON-245, 295) +* cqle: add Double column type and remove Float overload (PYTHON-246) +* Use partition key column information in prepared response for protocol v4+ (PYTHON-277) +* Support message custom payloads in protocol v4+ (PYTHON-280, PYTHON-329) +* Deprecate refresh_schema and replace with functions for specific entities (PYTHON-291) +* Save trace id even when trace complete times out (PYTHON-302) +* Warn when registering client UDT class for protocol < v3 (PYTHON-305) +* Support client warnings returned with messages in protocol v4+ (PYTHON-315) +* Ability to distinguish between NULL and UNSET values in protocol v4+ (PYTHON-317) +* Expose CQL keywords in API (PYTHON-324) + +Bug Fixes +--------- +* IPv6 address support on Windows (PYTHON-20) +* Convert exceptions during automatic re-preparation to nice exceptions (PYTHON-207) +* cqle: Quote keywords properly in table management functions (PYTHON-244) +* Don't default to GeventConnection when gevent is loaded, but not monkey-patched (PYTHON-289) +* Pass dynamic host from SaslAuthProvider to SaslAuthenticator (PYTHON-300) +* Make protocol read_inet work for Windows (PYTHON-309) 
+* cqle: Correct encoding for nested types (PYTHON-311) +* Update list of CQL keywords used when quoting identifiers (PYTHON-319) +* Make ConstantReconnectionPolicy work with infinite retries (github #327, PYTHON-325) +* Accept UUIDs with uppercase hex as valid in cqlengine (github #335) + +2.5.1 +===== +April 23, 2015 + +Bug Fixes +--------- +* Fix thread safety in DC-aware load balancing policy (PYTHON-297) +* Fix race condition in node/token rebuild (PYTHON-298) +* Set and send serial consistency parameter (PYTHON-299) + +2.5.0 +===== +March 30, 2015 + +Features +-------- +* Integrated cqlengine object mapping package +* Utility functions for converting timeuuids and datetime (PYTHON-99) +* Schema metadata fetch window randomized, config options added (PYTHON-202) +* Support for new Date and Time Cassandra types (PYTHON-190) + +Bug Fixes +--------- +* Fix index target for collection indexes (full(), keys()) (PYTHON-222) +* Thread exception during GIL cleanup (PYTHON-229) +* Workaround for rounding anomaly in datetime.utcfromtimestamp (Python 3.4) (PYTHON-230) +* Normalize text serialization for lookup in OrderedMap (PYTHON-231) +* Support reading CompositeType data (PYTHON-234) +* Preserve float precision in CQL encoding (PYTHON-243) + +2.1.4 +===== +January 26, 2015 + +Features +-------- +* SaslAuthenticator for Kerberos support (PYTHON-109) +* Heartbeat for network device keepalive and detecting failures on idle connections (PYTHON-197) +* Support nested, frozen collections for Cassandra 2.1.3+ (PYTHON-186) +* Schema agreement wait bypass config, new call for synchronous schema refresh (PYTHON-205) +* Add eventlet connection support (PYTHON-194) + +Bug Fixes +--------- +* Schema meta fix for complex thrift tables (PYTHON-191) +* Support for 'unknown' replica placement strategies in schema meta (PYTHON-192) +* Resolve stream ID leak on set_keyspace (PYTHON-195) +* Remove implicit timestamp scaling on serialization of numeric timestamps (PYTHON-204) +* Resolve stream id
collision when using SASL auth (PYTHON-210) +* Correct unhexlify usage for user defined type meta in Python3 (PYTHON-208) + +2.1.3 +===== +December 16, 2014 + +Features +-------- +* INFO-level log confirmation that a connection was opened to a node that was marked up (PYTHON-116) +* Avoid connecting to peer with incomplete metadata (PYTHON-163) +* Add SSL support to gevent reactor (PYTHON-174) +* Use control connection timeout in wait for schema agreement (PYTHON-175) +* Better consistency level representation in unavailable+timeout exceptions (PYTHON-180) +* Update schema metadata processing to accommodate coming schema modernization (PYTHON-185) + +Bug Fixes +--------- +* Support large negative timestamps on Windows (PYTHON-119) +* Fix schema agreement for clusters with peer rpc_address 0.0.0.0 (PYTHON-166) +* Retain table metadata following keyspace meta refresh (PYTHON-173) +* Use a timeout when preparing a statement for all nodes (PYTHON-179) +* Make TokenAware routing tolerant of statements with no keyspace (PYTHON-181) +* Update add_callback to store/invoke multiple callbacks (PYTHON-182) +* Correct routing key encoding for composite keys (PYTHON-184) +* Include compression option in schema export string when disabled (PYTHON-187) + +2.1.2 +===== +October 16, 2014 + +Features +-------- +* Allow DCAwareRoundRobinPolicy to be constructed without a local_dc, defaulting + instead to the DC of a contact_point (PYTHON-126) +* Set routing key in BatchStatement.add() if none specified in batch (PYTHON-148) +* Improved feedback on ValueError using named_tuple_factory with invalid column names (PYTHON-122) + +Bug Fixes +--------- +* Make execute_concurrent compatible with Python 2.6 (PYTHON-159) +* Handle Unauthorized message on schema_triggers query (PYTHON-155) +* Pure Python sorted set in support of UDTs nested in collections (PYTHON-167) +* Support CUSTOM index metadata and string export (PYTHON-165) + +2.1.1 +===== +September 11, 2014 + +Features +-------- +*
Detect triggers and include them in CQL queries generated to recreate + the schema (github-189) +* Support IPv6 addresses (PYTHON-144) (note: basic functionality added; Windows + platform not addressed (PYTHON-20)) + +Bug Fixes +--------- +* Fix NetworkTopologyStrategy.export_for_schema (PYTHON-120) +* Keep timeout for paged results (PYTHON-150) + +Other +----- +* Add frozen<> type modifier to UDTs and tuples to handle CASSANDRA-7857 + +2.1.0 +===== +August 7, 2014 + +Bug Fixes +--------- +* Correctly serialize and deserialize null values in tuples and + user-defined types (PYTHON-110) +* Include additional header and lib dirs, allowing libevwrapper to build + against Homebrew and Mac Ports installs of libev (PYTHON-112 and 804dea3) + +2.1.0c1 +======= +July 25, 2014 + +Bug Fixes +--------- +* Properly specify UDTs for columns in CREATE TABLE statements +* Avoid moving retries to a new host when using request ID zero (PYTHON-88) +* Don't ignore fetch_size arguments to Statement constructors (github-151) +* Allow disabling automatic paging on a per-statement basis when it's + enabled by default for the session (PYTHON-93) +* Raise ValueError when tuple query parameters for prepared statements + have extra items (PYTHON-98) +* Correctly encode nested tuples and UDTs for non-prepared statements (PYTHON-100) +* Raise TypeError when a string is used for contact_points (github #164) +* Include User Defined Types in KeyspaceMetadata.export_as_string() (PYTHON-96) + +Other +----- +* Return list collection columns as python lists instead of tuples + now that tuples are a specific Cassandra type + +2.1.0b1 +======= +July 11, 2014 + +This release adds support for Cassandra 2.1 features, including version +3 of the native protocol. + +Features +-------- +* When using the v3 protocol, only one connection is opened per-host, and + throughput is improved due to reduced pooling overhead and lock contention. 
+* Support for user-defined types (Cassandra 2.1+) +* Support for tuple type (limited usage in Cassandra 2.0.9, full usage + in Cassandra 2.1) +* Protocol-level client-side timestamps (see Session.use_client_timestamp) +* Overridable type encoding for non-prepared statements (see Session.encoders) +* Configurable serial consistency levels for batch statements +* Use io.BytesIO for reduced CPU consumption (github #143) +* Support Twisted as a reactor. Note that a Twisted-compatible + API is not exposed (so no Deferreds); this is just a reactor + implementation. (github #135, PYTHON-8) + +Bug Fixes +--------- +* Fix references to xrange that do not go through "six" in libevreactor and + geventreactor (github #138) +* Make BoundStatements inherit fetch_size from their parent + PreparedStatement (PYTHON-80) +* Clear reactor state in child process after forking to prevent errors with + multiprocessing when the parent process has connected a Cluster before + forking (github #141) +* Don't share prepared statement lock across Cluster instances +* Format CompositeType and DynamicCompositeType columns correctly in + CREATE TABLE statements.
+* Fix cassandra.concurrent behavior when dealing with automatic paging + (PYTHON-81) +* Properly defunct connections after protocol errors +* Avoid UnicodeDecodeError when query string is unicode (PYTHON-76) +* Correctly capture dclocal_read_repair_chance for tables and + use it when generating CREATE TABLE statements (PYTHON-84) +* Avoid race condition with AsyncoreConnection that may cause messages + to fail to be written until a new message is pushed +* Make sure cluster.metadata.partitioner and cluster.metadata.token_map + are populated when all nodes in the cluster are included in the + contact points (PYTHON-90) +* Make Murmur3 hash match Cassandra's hash for all values (PYTHON-89, + github #147) +* Don't attempt to reconnect to hosts that should be ignored (according + to the load balancing policy) when a notification is received that the + host is down. +* Add CAS WriteType, avoiding KeyError on CAS write timeout (PYTHON-91) + +2.0.2 +===== +June 10, 2014 + +Bug Fixes +--------- +* Add six to requirements.txt +* Avoid KeyError during schema refresh when a keyspace is dropped + and TokenAwarePolicy is not in use +* Avoid registering multiple atexit cleanup functions when the + asyncore event loop is restarted multiple times +* Delay initialization of reactors in order to avoid problems + with shared state when using multiprocessing (PYTHON-60) +* Add python-six to debian dependencies, move python-blist to recommends +* Fix memory leak when libev connections are created and + destroyed (github #93) +* Ensure token map is rebuilt when hosts are removed from the cluster + +2.0.1 +===== +May 28, 2014 + +Bug Fixes +--------- +* Fix check for Cluster.is_shutdown in @run_in_executor + decorator + +2.0.0 +===== +May 28, 2014 + +Features +-------- +* Make libev C extension Python3-compatible (PYTHON-70) +* Support v2 protocol authentication (PYTHON-73, github #125) + +Bug Fixes +--------- +* Fix murmur3 C extension compilation under Python3.4 (github #124) +
+Merged From 1.x +--------------- + +Features +^^^^^^^^ +* Add Session.default_consistency_level (PYTHON-14) + +Bug Fixes +^^^^^^^^^ +* Don't strip trailing underscores from column names when using the + named_tuple_factory (PYTHON-56) +* Ensure replication factors are ints for NetworkTopologyStrategy + to avoid TypeErrors (github #120) +* Pass WriteType instance to RetryPolicy.on_write_timeout() instead + of the string name of the write type. This caused write timeout + errors to always be rethrown instead of retrying. (github #123) +* Avoid submitting tasks to the ThreadPoolExecutor after shutdown. With + retries enabled, this could cause Cluster.shutdown() to hang under + some circumstances. +* Fix unintended rebuild of token replica map when keyspaces are + discovered (on startup), added, or updated and TokenAwarePolicy is not + in use. +* Avoid rebuilding token metadata when cluster topology has not + actually changed +* Avoid preparing queries for hosts that should be ignored (such as + remote hosts when using the DCAwareRoundRobinPolicy) (PYTHON-75) + +Other +^^^^^ +* Add 1 second timeout to join() call on event loop thread during + interpreter shutdown. This can help to prevent the process from + hanging during shutdown. + +2.0.0b1 +======= +May 6, 2014 + +Upgrading from 1.x +------------------ +Cluster.shutdown() should always be called when you are done with a +Cluster instance. If it is not called, there are no guarantees that the +driver will not hang. However, if you *do* have a reproducible case +where Cluster.shutdown() is not called and the driver hangs, please +report it so that we can attempt to fix it. + +If you're using the 2.0 driver against Cassandra 1.2, you will need +to set your protocol version to 1.
For example: + + cluster = Cluster(..., protocol_version=1) + +Features +-------- +* Support v2 of Cassandra's native protocol, which includes the following + new features: automatic query paging support, protocol-level batch statements, + and lightweight transactions +* Support for Python 3.3 and 3.4 +* Allow a default query timeout to be set per-Session + +Bug Fixes +--------- +* Avoid errors during interpreter shutdown (the driver attempts to cleanup + daemonized worker threads before interpreter shutdown) + +Deprecations +------------ +The following functions have moved from cassandra.decoder to cassandra.query. +The original functions have been left in place with a DeprecationWarning for +now: + +* cassandra.decoder.tuple_factory has moved to cassandra.query.tuple_factory +* cassandra.decoder.named_tuple_factory has moved to cassandra.query.named_tuple_factory +* cassandra.decoder.dict_factory has moved to cassandra.query.dict_factory +* cassandra.decoder.ordered_dict_factory has moved to cassandra.query.ordered_dict_factory + +Exceptions that were in cassandra.decoder have been moved to cassandra.protocol. If +you handle any of these exceptions, you must adjust the code accordingly. + +1.1.2 +===== +May 8, 2014 + +Features +-------- +* Allow a specific compression type to be requested for communications with + Cassandra and prefer lz4 if available + +Bug Fixes +--------- +* Update token metadata (for TokenAware calculations) when a node is removed + from the ring +* Fix file handle leak with gevent reactor due to blocking Greenlet kills when + closing excess connections +* Avoid handling a node coming up multiple times due to a reconnection attempt + succeeding close to the same time that an UP notification is pushed +* Fix duplicate node-up handling, which could result in multiple reconnectors + being started as well as the executor threads becoming deadlocked, preventing + future node up or node down handling from being executed. 
+* Handle exhausted ReconnectionPolicy schedule correctly + +Other +----- +* Don't log at ERROR when a connection is closed during the startup + communications +* Make scales, blist optional dependencies + +1.1.1 +===== +April 16, 2014 + +Bug Fixes +--------- +* Fix unconditional import of nose in setup.py (github #111) + +1.1.0 +===== +April 16, 2014 + +Features +-------- +* Gevent is now supported through monkey-patching the stdlib (PYTHON-7, + github issue #46) +* Support static columns in schemas, which are available starting in + Cassandra 2.1. (github issue #91) +* Add debian packaging (github issue #101) +* Add utility methods for easy concurrent execution of statements. See + the new cassandra.concurrent module. (github issue #7) + +Bug Fixes +--------- +* Correctly supply compaction and compression parameters in CREATE statements + for tables when working with Cassandra 2.0+ +* Lowercase boolean literals when generating schemas +* Ignore SSL_ERROR_WANT_READ and SSL_ERROR_WANT_WRITE socket errors. Previously, + these resulted in the connection being defuncted, but they can safely be + ignored by the driver. +* Don't reconnect the control connection every time Cluster.connect() is + called +* Avoid race condition that could leave ResponseFuture callbacks uncalled + if the callback was added outside of the event loop thread (github issue #95) +* Properly escape keyspace name in Session.set_keyspace(). Previously, the + keyspace name was quoted, but any quotes in the string were not escaped. +* Avoid adding hosts to the load balancing policy before their datacenter + and rack information has been set, if possible. +* Avoid KeyError when updating metadata after dropping a table (github issues + #97, #98) +* Use tuples instead of sets for DCAwareLoadBalancingPolicy to ensure equal + distribution of requests + +Other +----- +* Don't ignore column names when parsing typestrings. This is needed for + user-defined type support.
(github issue #90) +* Better error message when libevwrapper is not found +* Only try to import scales when metrics are enabled (github issue #92) +* Cut down on the number of queries executing when a new Cluster + connects and when the control connection has to reconnect (github issue #104, + PYTHON-59) +* Issue warning log when schema versions do not match + +1.0.2 +===== +March 4, 2014 + +Bug Fixes +--------- +* With asyncorereactor, correctly handle EAGAIN/EWOULDBLOCK when the message from + Cassandra is a multiple of the read buffer size. Previously, if no more data + became available to read on the socket, the message would never be processed, + resulting in an OperationTimedOut error. +* Double quote keyspace, table and column names that require them (those using + uppercase characters or keywords) when generating CREATE statements through + KeyspaceMetadata and TableMetadata. +* Decode TimestampType as DateType. (Cassandra replaced DateType with + TimestampType to fix sorting of pre-unix epoch dates in CASSANDRA-5723.) +* Handle latest table options when parsing the schema and generating + CREATE statements. +* Avoid 'Set changed size during iteration' during query plan generation + when hosts go up or down + +Other +----- +* Remove ignored ``tracing_enabled`` parameter for ``SimpleStatement``. The + correct way to trace a query is by setting the ``trace`` argument to ``True`` + in ``Session.execute()`` and ``Session.execute_async()``. +* Raise TypeError instead of cassandra.query.InvalidParameterTypeError when + a parameter for a prepared statement has the wrong type; remove + cassandra.query.InvalidParameterTypeError. 
+* More consistent type checking for query parameters +* Add option to return a special object for empty string values for non-string + columns + +1.0.1 +===== +Feb 19, 2014 + +Bug Fixes +--------- +* Include table indexes in ``KeyspaceMetadata.export_as_string()`` +* Fix broken token awareness on ByteOrderedPartitioner +* Always close socket when defuncting error'ed connections to avoid a potential + file descriptor leak +* Handle "custom" types (such as the replaced DateType) correctly +* With libevreactor, correctly handle EAGAIN/EWOULDBLOCK when the message from + Cassandra is a multiple of the read buffer size. Previously, if no more data + became available to read on the socket, the message would never be processed, + resulting in an OperationTimedOut error. +* Don't break tracing when a Session's row_factory is not the default + namedtuple_factory. +* Handle data that is already utf8-encoded for UTF8Type values +* Fix token-aware routing for tokens that fall before the first node token in + the ring and tokens that exactly match a node's token +* Tolerate null source_elapsed values for Trace events. These may not be + set when events complete after the main operation has already completed.
+ +Other +----- +* Skip sending OPTIONS message on connection creation if compression is + disabled or not available and a CQL version has not been explicitly + set +* Add details about errors and the last queried host to ``OperationTimedOut`` + +1.0.0 Final +=========== +Jan 29, 2014 + +Bug Fixes +--------- +* Prevent leak of Scheduler thread (even with proper shutdown) +* Correctly handle ignored hosts, which are common with the + DCAwareRoundRobinPolicy +* Hold strong reference to prepared statement while executing it to avoid + garbage collection +* Add NullHandler logging handler to the cassandra package to avoid + warnings about there being no configured logger +* Fix bad handling of nodes that have been removed from the cluster +* Properly escape string types within cql collections +* Handle setting the same keyspace twice in a row +* Avoid race condition during schema agreement checks that could result + in schema update queries returning before all nodes had seen the change +* Preserve millisecond-level precision in datetimes when performing inserts + with simple (non-prepared) statements +* Properly defunct connections when libev reports an error by setting + errno instead of simply logging the error +* Fix endless hanging of some requests when using the libev reactor +* Always start a reconnection process when we fail to connect to + a newly bootstrapped node +* Generators map to CQL lists, not key sequences +* Always defunct connections when an internal operation fails +* Correctly break from handle_write() if nothing was sent (asyncore + reactor only) +* Avoid potential double-erroring of callbacks when a connection + becomes defunct + +Features +-------- +* Add default query timeout to ``Session`` +* Add timeout parameter to ``Session.execute()`` +* Add ``WhiteListRoundRobinPolicy`` as a load balancing policy option +* Support for consistency level ``LOCAL_ONE`` +* Make the backoff for fetching traces exponentially increasing and + configurable + 
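The exponential, configurable trace-fetch backoff mentioned above can be sketched in a few lines of pure Python. The base delay, multiplier, and cap here are illustrative assumptions, not the driver's actual defaults:

```python
def trace_backoff(attempt, base=0.1, multiplier=2.0, max_wait=2.0):
    """Wait (in seconds) before trace-fetch attempt number ``attempt``.

    Grows exponentially from ``base`` and is capped at ``max_wait``.
    All parameter defaults are assumptions for illustration only.
    """
    return min(base * (multiplier ** attempt), max_wait)

# Waits grow 0.1, 0.2, 0.4, 0.8, ... until hitting the cap.
waits = [trace_backoff(n) for n in range(6)]
```

Making both the base and the cap parameters is what "configurable" amounts to; callers tune them to how long they are willing to poll for trace completion.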
+Other +----- +* Raise Exception if ``TokenAwarePolicy`` is used against a cluster using the + ``Murmur3Partitioner`` if the murmur3 C extension has not been compiled +* Add encoder mapping for ``OrderedDict`` +* Use timeouts on all control connection queries +* Benchmark improvements, including command line options and easy + multithreading support +* Reduced lock contention when using the asyncore reactor +* Warn when non-datetimes are used for 'timestamp' column values in + prepared statements +* Add requirements.txt and test-requirements.txt +* TravisCI integration for running unit tests against Python 2.6, + Python 2.7, and PyPy + +1.0.0b7 +======= +Nov 12, 2013 + +This release makes many stability improvements, especially around +prepared statements and node failure handling. In particular, +several cases where a request would never be completed (and as a +result, leave the application hanging) have been resolved. + +Features +-------- +* Add `timeout` kwarg to ``ResponseFuture.result()`` +* Create connection pools to all hosts in parallel when initializing + new Sessions. + +Bug Fixes +--------- +* Properly set exception on ResponseFuture when a query fails + against all hosts +* Improved cleanup and reconnection efforts when reconnection fails + on a node that has recently come up +* Use correct consistency level when retrying failed operations + against a different host. (An invalid consistency level was being + used, causing the retry to fail.) +* Better error messages for failed ``Session.prepare()`` operations +* Prepare new statements against all hosts in parallel (formerly + sequential) +* Fix failure to save the new current keyspace on connections. (This + could cause problems for prepared statements and lead to extra + operations to continuously re-set the keyspace.) +* Avoid sharing ``LoadBalancingPolicies`` across ``Cluster`` instances. (When + a second ``Cluster`` was connected, it effectively marked nodes down for the + first ``Cluster``.)
+* Better handling of failures during the re-preparation sequence for + unrecognized prepared statements +* Throttle trashing of underutilized connections to avoid trashing newly + created connections +* Fix race condition which could result in trashed connections being closed + before the last operations had completed +* Avoid preparing statements on the event loop thread (which could lead to + deadlock) +* Correctly mark up non-contact point nodes discovered by the control + connection. (This led to prepared statements not being prepared + against those hosts, generating extra traffic later when the + statements were executed and unrecognized.) +* Correctly handle large messages through libev +* Add timeout to schema agreement check queries +* More complete (and less contended) locking around manipulation of the + pending message deque for libev connections + +Other +----- +* Prepare statements in batches of 10. (When many prepared statements + are in use, this allows the driver to start utilizing nodes that + were restarted more quickly.) +* Better debug logging around connection management +* Don't retain unreferenced prepared statements in the local cache. + (If many different prepared statements were created, this would + increase memory usage and greatly increase the amount of time + required to begin utilizing a node that was added or marked + up.)
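Not retaining unreferenced prepared statements amounts to a cache that holds its values only weakly, so an entry disappears once user code drops its last strong reference. A minimal sketch of the idea using the standard library, with a stand-in ``PreparedStatement`` class (this is an illustration of the technique, not the driver's actual cache implementation):

```python
import gc
import weakref


class PreparedStatement(object):
    """Stand-in for the driver's prepared statement (illustrative only)."""
    def __init__(self, query_string):
        self.query_string = query_string


# Values are held by weak reference: once the caller drops its last
# strong reference, the entry vanishes instead of pinning memory.
cache = weakref.WeakValueDictionary()

stmt = PreparedStatement("SELECT * FROM users WHERE id = ?")
cache["stmt_id"] = stmt
assert "stmt_id" in cache

del stmt        # drop the only strong reference
gc.collect()    # immediate on CPython; collect() covers other runtimes
assert "stmt_id" not in cache
```

The same pattern keeps the statement alive for as long as any ``BoundStatement`` or user variable still references it, which is exactly the retention behavior the entry above describes.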
+ +1.0.0b6 +======= +Oct 22, 2013 + +Bug Fixes +--------- +* Use lazy string formatting when logging +* Avoid several deadlock scenarios, especially when nodes go down +* Avoid trashing newly created connections due to insufficient traffic +* Gracefully handle un-handled Exceptions when erroring callbacks + +Other +----- +* Node state listeners (which are called when a node is added, removed, + goes down, or comes up) should now be registered through + Cluster.register_listener() instead of through a host's HealthMonitor + (which has been removed) + + +1.0.0b5 +======== +Oct 10, 2013 + +Features +-------- +* SSL support + +Bug Fixes +--------- +* Avoid KeyError when building replica map for NetworkTopologyStrategy +* Work around python bug which causes deadlock when a thread imports + the utf8 module +* Handle no blist library, which is not compatible with pypy +* Avoid deadlock triggered by a keyspace being set on a connection (which + may happen automatically for new connections) + +Other +----- +* Switch packaging from Distribute to setuptools, improved C extension + support +* Use PEP 386 compliant beta and post-release versions + +1.0.0-beta4 +=========== +Sep 24, 2013 + +Features +-------- +* Handle new blob syntax in Cassandra 2.0 by accepting bytearray + objects for blob values +* Add cql_version kwarg to Cluster.__init__ + +Bug Fixes +--------- +* Fix KeyError when building token map with NetworkTopologyStrategy + keyspaces (this prevented a Cluster from successfully connecting + at all). 
+* Don't lose default consistency level from parent PreparedStatement + when creating BoundStatements + +1.0.0-beta3 +=========== +Sep 20, 2013 + +Features +-------- +* Support for LZ4 compression (Cassandra 2.0+) +* Token-aware routing will now utilize all replicas for a query instead + of just the first replica + +Bug Fixes +--------- +* Fix libev include path for CentOS +* Fix varint packing of the value 0 +* Correctly pack unicode values +* Don't attempt to return failed connections to the pool when a final result + is set +* Fix bad iteration of connection credentials +* Use blist's orderedset for set collections and OrderedDict for map + collections so that Cassandra's ordering is preserved +* Fix connection failure on Windows due to unavailability of inet_pton + and inet_ntop. (Note that IPv6 inet_address values are still not + supported on Windows.) +* Boolean constants shouldn't be surrounded by single quotes +* Avoid a potential loss of precision on float constants due to string + formatting +* Actually utilize non-standard ports set on Cluster objects +* Fix export of schema as a set of CQL queries + +Other +----- +* Use cStringIO for connection buffer for better performance +* Add __repr__ method for Statement classes +* Raise InvalidTypeParameterError when parameters of the wrong + type are used with statements +* Make all tests compatible with Python 2.6 +* Add 1s timeout for opening new connections + +1.0.0-beta2 +=========== +Aug 19, 2013 + +Bug Fixes +--------- +* Fix pip packaging + +1.0.0-beta +========== +Aug 16, 2013 + +Initial release diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst new file mode 100644 index 0000000..cdd742c --- /dev/null +++ b/CONTRIBUTING.rst @@ -0,0 +1,33 @@ +Contributing +============ + +Contributions are welcome in the form of bug reports or pull requests. + +Bug Reports +----------- +Quality bug reports are welcome at the `DataStax Python Driver JIRA `_.
+ +There are plenty of `good resources `_ describing how to create +good bug reports. They will not be repeated in detail here, but in general, the bug report should include, where appropriate: + +* relevant software versions (Python runtime, driver version, cython version, server version) +* details for how to reproduce (e.g. a test script or written procedure) + * any effort to isolate the issue in reproduction is much-appreciated +* stack trace from a crashed runtime + +Pull Requests +------------- +If you're able to fix a bug yourself, you can `fork the repository `_ and submit a `Pull Request `_ with the fix. +Please include tests demonstrating the issue and fix. For examples of how to run the tests, consult the `dev README `_. + +Contribution License Agreement +------------------------------ +To protect the community, all contributors are required to `sign the DataStax Contribution License Agreement `_. The process is completely electronic and should only take a few minutes. + +Design and Implementation Guidelines +------------------------------------ +- We support Python 2.7+, so any changes must work in any of these runtimes (we use ``six``, ``futures``, and some internal backports for compatibility) +- We have integrations (notably Cassandra cqlsh) that require pure Python and minimal external dependencies. We try to avoid new external dependencies. Where compiled extensions are concerned, there should always be a pure Python fallback implementation. +- This project follows `semantic versioning `_, so breaking API changes will only be introduced in major versions. +- Legacy ``cqlengine`` has varying degrees of overreaching client-side validation. Going forward, we will avoid client validation where server feedback is adequate and not overly expensive. +- When writing tests, try to achieve maximal coverage in unit tests (where it is faster to run across many runtimes).
Integration tests are good for things where we need to test server interaction, or where it is important to test across different server versions (emulating in unit tests would not be effective). diff --git a/PKG-INFO b/PKG-INFO deleted file mode 100644 index 79d9098..0000000 --- a/PKG-INFO +++ /dev/null @@ -1,114 +0,0 @@ -Metadata-Version: 1.1 -Name: cassandra-driver -Version: 3.20.2 -Summary: Python driver for Cassandra -Home-page: http://github.com/datastax/python-driver -Author: Tyler Hobbs -Author-email: tyler@datastax.com -License: UNKNOWN -Description: DataStax Python Driver for Apache Cassandra - =========================================== - - .. image:: https://travis-ci.org/datastax/python-driver.png?branch=master - :target: https://travis-ci.org/datastax/python-driver - - A modern, `feature-rich `_ and highly-tunable Python client library for Apache Cassandra (2.1+) using exclusively Cassandra's binary protocol and Cassandra Query Language v3. - - The driver supports Python 2.7, 3.4, 3.5, 3.6 and 3.7. - - If you require compatibility with DataStax Enterprise, use the `DataStax Enterprise Python Driver `_. - - **Note:** DataStax products do not support big-endian systems. - - Feedback Requested - ------------------ - **Help us focus our efforts!** Provide your input on the `Platform and Runtime Survey `_ (we kept it short). - - Features - -------- - * `Synchronous `_ and `Asynchronous `_ APIs - * `Simple, Prepared, and Batch statements `_ - * Asynchronous IO, parallel execution, request pipelining - * `Connection pooling `_ - * Automatic node discovery - * `Automatic reconnection `_ - * Configurable `load balancing `_ and `retry policies `_ - * `Concurrent execution utilities `_ - * `Object mapper `_ - * `Connecting to DataStax Apollo database (cloud) `_ - - Installation - ------------ - Installation through pip is recommended:: - - $ pip install cassandra-driver - - For more complete installation instructions, see the - `installation guide `_. 
- - Documentation - ------------- - The documentation can be found online `here `_. - - A couple of links for getting up to speed: - - * `Installation `_ - * `Getting started guide `_ - * `API docs `_ - * `Performance tips `_ - - Object Mapper - ------------- - cqlengine (originally developed by Blake Eggleston and Jon Haddad, with contributions from the - community) is now maintained as an integral part of this package. Refer to - `documentation here `_. - - Contributing - ------------ - See `CONTRIBUTING.md `_. - - Reporting Problems - ------------------ - Please report any bugs and make any feature requests on the - `JIRA `_ issue tracker. - - If you would like to contribute, please feel free to open a pull request. - - Getting Help - ------------ - Your best options for getting help with the driver are the - `mailing list `_ - and the ``#datastax-drivers`` channel in the `DataStax Academy Slack `_. - - License - ------- - Copyright DataStax, Inc. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
- -Keywords: cassandra,cql,orm -Platform: UNKNOWN -Classifier: Development Status :: 5 - Production/Stable -Classifier: Intended Audience :: Developers -Classifier: License :: OSI Approved :: Apache Software License -Classifier: Natural Language :: English -Classifier: Operating System :: OS Independent -Classifier: Programming Language :: Python -Classifier: Programming Language :: Python :: 2.7 -Classifier: Programming Language :: Python :: 3.4 -Classifier: Programming Language :: Python :: 3.5 -Classifier: Programming Language :: Python :: 3.6 -Classifier: Programming Language :: Python :: 3.7 -Classifier: Programming Language :: Python :: Implementation :: CPython -Classifier: Programming Language :: Python :: Implementation :: PyPy -Classifier: Topic :: Software Development :: Libraries :: Python Modules diff --git a/README-dev.rst b/README-dev.rst new file mode 100644 index 0000000..c10aed2 --- /dev/null +++ b/README-dev.rst @@ -0,0 +1,171 @@ +Releasing +========= +* Run the tests and ensure they all pass +* Update CHANGELOG.rst + + * Check for any missing entries + * Add today's date to the release section +* Update the version in ``cassandra/__init__.py`` + + * For beta releases, use a version like ``(2, 1, '0b1')`` + * For release candidates, use a version like ``(2, 1, '0rc1')`` + * When in doubt, follow PEP 440 versioning +* Add the new version in ``docs.yaml`` + +* Commit the changelog and version changes, e.g. ``git commit -m'version 1.0.0'`` +* Tag the release. 
For example: ``git tag -a 1.0.0 -m 'version 1.0.0'`` +* Push the tag and new ``master``: ``git push origin 1.0.0 ; git push origin master`` +* Upload the package to pypi:: + + python setup.py register + python setup.py sdist upload + +* On pypi, make the latest GA the only visible version +* Update the docs (see below) +* Append a 'postN' string to the version tuple in ``cassandra/__init__.py`` + so that it looks like ``(x, y, z, 'postN')`` + + * After a beta or rc release, this should look like ``(2, 1, '0b1', 'post0')`` + +* Commit and push +* Update 'cassandra-test' branch to reflect new release + + * this is typically a matter of merging or rebasing onto master + * test and push updated branch to origin + +* Update the JIRA versions: https://datastax-oss.atlassian.net/plugins/servlet/project-config/PYTHON/versions + + * add release dates and set version as "released" + +* Make an announcement on the mailing list + +Building the Docs +================= +Sphinx is required to build the docs. You probably want to install through apt, +if possible:: + + sudo apt-get install python-sphinx + +pip may also work:: + + sudo pip install -U Sphinx + +To build the docs, run:: + + python setup.py doc + +Upload the Docs +================= + +This is deprecated. The docs are now published only on https://docs.datastax.com. + +To upload the docs, check out the ``gh-pages`` branch and copy the entire +contents of ``docs/_build/X.Y.Z/*`` into the root of the ``gh-pages`` branch +and then push that branch to GitHub. + +For example:: + + git checkout 1.0.0 + python setup.py doc + git checkout gh-pages + cp -R docs/_build/1.0.0/* . + git add --update # add modified files + # Also make sure to add any new documentation files! + git commit -m 'Update docs (version 1.0.0)' + git push origin gh-pages + +If the docs build includes errors, those errors may not show up in the next build unless +you have changed the files with errors. 
It's good to occasionally clear the build +directory and build from scratch:: + + rm -rf docs/_build/* + +Running the Tests +================= +In order for the extensions to be built and used in the tests, run:: + + nosetests + +You can run a specific test module or package like so:: + + nosetests -w tests/unit/ + +You can run a specific test method like so:: + + nosetests -w tests/unit/test_connection.py:ConnectionTest.test_bad_protocol_version + +Seeing Test Logs in Real Time +----------------------------- +Sometimes it's useful to output logs for the tests as they run:: + + nosetests -w tests/unit/ --nocapture --nologcapture + +Use tee to capture logs and see them on your terminal:: + + nosetests -w tests/unit/ --nocapture --nologcapture 2>&1 | tee test.log + +Specifying a Cassandra Version for Integration Tests +---------------------------------------------------- +You can specify a Cassandra version with the ``CASSANDRA_VERSION`` environment variable:: + + CASSANDRA_VERSION=2.0.9 nosetests -w tests/integration/standard + +You can also specify a Cassandra directory (to test unreleased versions):: + + CASSANDRA_DIR=/home/thobbs/cassandra nosetests -w tests/integration/standard + +Specifying the usage of an already running Cassandra cluster +------------------------------------------------------------ +The tests will start the appropriate Cassandra clusters when necessary. If a Cassandra cluster is already running and you don't want the tests to start their own, use the ``USE_CASS_EXTERNAL`` flag, for example:: + + USE_CASS_EXTERNAL=1 python setup.py nosetests -w tests/integration/standard + +Specify a Protocol Version for Tests +------------------------------------ +The protocol version defaults to 1 for Cassandra 1.2, and 2 otherwise. 
You can explicitly set +it with the ``PROTOCOL_VERSION`` environment variable:: + + PROTOCOL_VERSION=3 nosetests -w tests/integration/standard + +Testing Multiple Python Versions +-------------------------------- +If you want to test all of Python 2.7, 3.4, 3.5, 3.6 and PyPy, use tox (this is what +TravisCI runs):: + + tox + +By default, tox only runs the unit tests because I haven't put in the effort +to get the integration tests to run on TravisCI. However, the integration +tests should work locally. To run them, edit the following line in tox.ini:: + + commands = {envpython} setup.py build_ext --inplace nosetests --verbosity=2 tests/unit/ + +and change ``tests/unit/`` to ``tests/``. + +Running the Benchmarks +====================== +The benchmarks need a version of Cassandra running locally. If ccm is installed, you can start a single-node cluster with:: + + ccm create benchmark_cluster -v 3.0.1 -n 1 -s + +To run the benchmarks, pick one of the files under the ``benchmarks/`` dir and run it:: + + python benchmarks/future_batches.py + +There are a few options. Use ``--help`` to see them all:: + + python benchmarks/future_batches.py --help + +Packaging for Cassandra +======================= +A source distribution is included in Cassandra, which uses the driver internally for ``cqlsh``. +To package a released version, check out the tag and build a source zip archive:: + + python setup.py sdist --formats=zip + +If packaging a pre-release (untagged) version, it is useful to include a commit hash in the archive +name to specify the built version:: + + python setup.py egg_info -b-`git rev-parse --short HEAD` sdist --formats=zip + +The file (``dist/cassandra-driver-.zip``) is packaged with Cassandra in ``cassandra/lib/cassandra-driver-internal-only*zip``. 
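The Releasing section above stores the version as a tuple such as ``(2, 1, '0b1')`` or ``(2, 1, '0b1', 'post0')`` in ``cassandra/__init__.py``. As a quick sanity check when preparing a release, a sketch of how such a tuple typically maps to its PEP 440 version string, assuming the common convention of joining the components with dots (the ``version_string`` helper here is illustrative, not part of the driver):

```python
# Sketch only: assumes the released version string is derived by joining
# the components of the version tuple with dots, a common convention for
# packages that keep a __version_info__ tuple.
def version_string(version_info):
    """Render a PEP 440 version tuple to its version string."""
    return '.'.join(map(str, version_info))

print(version_string((2, 1, '0b1')))           # beta:  2.1.0b1
print(version_string((2, 1, '0rc1')))          # rc:    2.1.0rc1
print(version_string((2, 1, '0b1', 'post0')))  # post:  2.1.0b1.post0
```

Verifying the rendered string against PEP 440 before tagging avoids uploading a release that pip will refuse to parse.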
diff --git a/appveyor.yml b/appveyor.yml new file mode 100644 index 0000000..d1daaa6 --- /dev/null +++ b/appveyor.yml @@ -0,0 +1,26 @@ +environment: + matrix: + - PYTHON: "C:\\Python27-x64" + cassandra_version: 3.11.2 + ci_type: standard + - PYTHON: "C:\\Python35-x64" + cassandra_version: 3.11.2 + ci_type: standard +os: Visual Studio 2015 +platform: + - x64 +install: + - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%" + - ps: .\appveyor\appveyor.ps1 +build_script: + - cmd: | + "%VS140COMNTOOLS%\..\..\VC\vcvarsall.bat" x86_amd64 + python setup.py install --no-cython +test_script: + - ps: .\appveyor\run_test.ps1 +cache: + - C:\Users\appveyor\.m2 + - C:\ProgramData\chocolatey\bin + - C:\ProgramData\chocolatey\lib + - C:\Users\appveyor\jce_policy-1.7.0.zip + - C:\Users\appveyor\jce_policy-1.8.0.zip \ No newline at end of file diff --git a/appveyor/appveyor.ps1 b/appveyor/appveyor.ps1 new file mode 100644 index 0000000..cc1e6aa --- /dev/null +++ b/appveyor/appveyor.ps1 @@ -0,0 +1,80 @@ +$env:JAVA_HOME="C:\Program Files\Java\jdk1.8.0" +$env:PATH="$($env:JAVA_HOME)\bin;$($env:PATH)" +$env:CCM_PATH="C:\Users\appveyor\ccm" +$env:CASSANDRA_VERSION=$env:cassandra_version +$env:EVENT_LOOP_MANAGER="asyncore" +$env:SIMULACRON_JAR="C:\Users\appveyor\simulacron-standalone-0.7.0.jar" + +python --version +python -c "import platform; print(platform.architecture())" +# Install Ant +Start-Process cinst -ArgumentList @("-y","ant") -Wait -NoNewWindow +# Workaround for ccm: link ant.bat -> ant.exe +If (!(Test-Path C:\ProgramData\chocolatey\bin\ant.bat)) { + cmd /c mklink C:\ProgramData\chocolatey\bin\ant.bat C:\ProgramData\chocolatey\bin\ant.exe +} + + +# Install Java Cryptographic Extensions, needed for SSL. +$target = "$($env:JAVA_HOME)\jre\lib\security" +# If this file doesn't exist we know JCE hasn't been installed. +$jce_indicator = "$target\README.txt" +If (!(Test-Path $jce_indicator)) { + $zip = "C:\Users\appveyor\jce_policy-$($env:java_version).zip" 
+ $url = "https://www.dropbox.com/s/po4308hlwulpvep/UnlimitedJCEPolicyJDK7.zip?dl=1" + $extract_folder = "UnlimitedJCEPolicy" + If ($env:java_version -eq "1.8.0") { + $url = "https://www.dropbox.com/s/al1e6e92cjdv7m7/jce_policy-8.zip?dl=1" + $extract_folder = "UnlimitedJCEPolicyJDK8" + } + # Download zip to staging area if it doesn't exist, we do this because + # we extract it to the directory based on the platform and we want to cache + # this file so it can apply to all platforms. + if(!(Test-Path $zip)) { + (new-object System.Net.WebClient).DownloadFile($url, $zip) + } + + Add-Type -AssemblyName System.IO.Compression.FileSystem + [System.IO.Compression.ZipFile]::ExtractToDirectory($zip, $target) + + $jcePolicyDir = "$target\$extract_folder" + Move-Item $jcePolicyDir\* $target\ -force + Remove-Item $jcePolicyDir +} + +# Download simulacron +$simulacron_url = "https://github.com/datastax/simulacron/releases/download/0.7.0/simulacron-standalone-0.7.0.jar" +$simulacron_jar = $env:SIMULACRON_JAR +if(!(Test-Path $simulacron_jar)) { + (new-object System.Net.WebClient).DownloadFile($simulacron_url, $simulacron_jar) +} + +# Install Python Dependencies for CCM. +Start-Process python -ArgumentList "-m pip install psutil pyYaml six numpy" -Wait -NoNewWindow + +# Clone ccm from git and use master. +If (!(Test-Path $env:CCM_PATH)) { + Start-Process git -ArgumentList "clone https://github.com/pcmanus/ccm.git $($env:CCM_PATH)" -Wait -NoNewWindow +} + + +# Copy ccm -> ccm.py so windows knows to run it. +If (!(Test-Path $env:CCM_PATH\ccm.py)) { + Copy-Item "$env:CCM_PATH\ccm" "$env:CCM_PATH\ccm.py" +} + +$env:PYTHONPATH="$($env:CCM_PATH);$($env:PYTHONPATH)" +$env:PATH="$($env:CCM_PATH);$($env:PATH)" + +# Predownload cassandra version for CCM if it isn't already downloaded. 
+# This is necessary because otherwise ccm fails +If (!(Test-Path C:\Users\appveyor\.ccm\repository\$env:cassandra_version)) { + Start-Process python -ArgumentList "$($env:CCM_PATH)\ccm.py create -v $($env:cassandra_version) -n 1 predownload" -Wait -NoNewWindow + echo "Checking status of download" + python $env:CCM_PATH\ccm.py status + Start-Process python -ArgumentList "$($env:CCM_PATH)\ccm.py remove predownload" -Wait -NoNewWindow + echo "Downloaded version $env:cassandra_version" +} + +Start-Process python -ArgumentList "-m pip install -r test-requirements.txt" -Wait -NoNewWindow +Start-Process python -ArgumentList "-m pip install nose-ignore-docstring" -Wait -NoNewWindow diff --git a/appveyor/run_test.ps1 b/appveyor/run_test.ps1 new file mode 100644 index 0000000..fc95ec7 --- /dev/null +++ b/appveyor/run_test.ps1 @@ -0,0 +1,49 @@ +Set-ExecutionPolicy Unrestricted +Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope Process -force +Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser -force +Get-ExecutionPolicy -List +echo $env:Path +echo "JAVA_HOME: $env:JAVA_HOME" +echo "PYTHONPATH: $env:PYTHONPATH" +echo "Cassandra version: $env:CASSANDRA_VERSION" +echo "Simulacron jar: $env:SIMULACRON_JAR" +echo $env:ci_type +python --version +python -c "import platform; print(platform.architecture())" + +$wc = New-Object 'System.Net.WebClient' + +if($env:ci_type -eq 'unit'){ + echo "Running Unit tests" + nosetests -s -v --with-ignore-docstrings --with-xunit --xunit-file=unit_results.xml .\tests\unit + $unit_tests_result = $lastexitcode + + $env:EVENT_LOOP_MANAGER="gevent" + nosetests -s -v --with-ignore-docstrings --with-xunit --xunit-file=unit_results.xml .\tests\unit\io\test_geventreactor.py + $unit_tests_result += $lastexitcode + $env:EVENT_LOOP_MANAGER="eventlet" + nosetests -s -v --with-ignore-docstrings --with-xunit --xunit-file=unit_results.xml .\tests\unit\io\test_eventletreactor.py + $unit_tests_result += $lastexitcode + $env:EVENT_LOOP_MANAGER="asyncore" + + echo "uploading unit results" + 
$wc.UploadFile("https://ci.appveyor.com/api/testresults/junit/$($env:APPVEYOR_JOB_ID)", (Resolve-Path .\unit_results.xml)) + +} + +if($env:ci_type -eq 'standard'){ + + echo "Running CQLEngine integration tests" + nosetests -s -v --with-ignore-docstrings --with-xunit --xunit-file=cqlengine_results.xml .\tests\integration\cqlengine + $cqlengine_tests_result = $lastexitcode + $wc.UploadFile("https://ci.appveyor.com/api/testresults/junit/$($env:APPVEYOR_JOB_ID)", (Resolve-Path .\cqlengine_results.xml)) + echo "uploading CQLEngine test results" + + echo "Running standard integration tests" + nosetests -s -v --with-ignore-docstrings --with-xunit --xunit-file=standard_results.xml .\tests\integration\standard + $integration_tests_result = $lastexitcode + $wc.UploadFile("https://ci.appveyor.com/api/testresults/junit/$($env:APPVEYOR_JOB_ID)", (Resolve-Path .\standard_results.xml)) + echo "uploading standard integration test results" +} + + +$exit_result = $unit_tests_result + $cqlengine_tests_result + $integration_tests_result + $simulacron_tests_result +echo "Exit result: $exit_result" +exit $exit_result diff --git a/benchmarks/base.py b/benchmarks/base.py new file mode 100644 index 0000000..47a03bb --- /dev/null +++ b/benchmarks/base.py @@ -0,0 +1,307 @@ +# Copyright DataStax, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from cProfile import Profile +import logging +import os.path +import sys +from threading import Thread +import time +from optparse import OptionParser +import uuid + +from greplin import scales + +dirname = os.path.dirname(os.path.abspath(__file__)) +sys.path.append(dirname) +sys.path.append(os.path.join(dirname, '..')) + +import cassandra +from cassandra.cluster import Cluster +from cassandra.io.asyncorereactor import AsyncoreConnection + +log = logging.getLogger() +handler = logging.StreamHandler() +handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(name)s: %(message)s")) +log.addHandler(handler) + +logging.getLogger('cassandra').setLevel(logging.WARN) + +_log_levels = { + 'CRITICAL': logging.CRITICAL, + 'ERROR': logging.ERROR, + 'WARN': logging.WARNING, + 'WARNING': logging.WARNING, + 'INFO': logging.INFO, + 'DEBUG': logging.DEBUG, + 'NOTSET': logging.NOTSET, +} + +have_libev = False +supported_reactors = [AsyncoreConnection] +try: + from cassandra.io.libevreactor import LibevConnection + have_libev = True + supported_reactors.append(LibevConnection) +except ImportError as exc: + pass + +have_asyncio = False +try: + from cassandra.io.asyncioreactor import AsyncioConnection + have_asyncio = True + supported_reactors.append(AsyncioConnection) +except (ImportError, SyntaxError): + pass + +have_twisted = False +try: + from cassandra.io.twistedreactor import TwistedConnection + have_twisted = True + supported_reactors.append(TwistedConnection) +except ImportError as exc: + log.exception("Error importing twisted") + pass + +KEYSPACE = "testkeyspace" + str(int(time.time())) +TABLE = "testtable" + +COLUMN_VALUES = { + 'int': 42, + 'text': "'42'", + 'float': 42.0, + 'uuid': uuid.uuid4(), + 'timestamp': "'2016-02-03 04:05+0000'" +} + + +def setup(options): + log.info("Using 'cassandra' package from %s", cassandra.__path__) + + cluster = Cluster(options.hosts, schema_metadata_enabled=False, token_metadata_enabled=False) + try: + session = 
cluster.connect() + + log.debug("Creating keyspace...") + try: + session.execute(""" + CREATE KEYSPACE %s + WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '2' } + """ % options.keyspace) + + log.debug("Setting keyspace...") + except cassandra.AlreadyExists: + log.debug("Keyspace already exists") + + session.set_keyspace(options.keyspace) + + log.debug("Creating table...") + create_table_query = """ + CREATE TABLE {0} ( + thekey text, + """ + for i in range(options.num_columns): + create_table_query += "col{0} {1},\n".format(i, options.column_type) + create_table_query += "PRIMARY KEY (thekey))" + + try: + session.execute(create_table_query.format(TABLE)) + except cassandra.AlreadyExists: + log.debug("Table already exists.") + + finally: + cluster.shutdown() + + +def teardown(options): + cluster = Cluster(options.hosts, schema_metadata_enabled=False, token_metadata_enabled=False) + session = cluster.connect() + if not options.keep_data: + session.execute("DROP KEYSPACE " + options.keyspace) + cluster.shutdown() + + +def benchmark(thread_class): + options, args = parse_options() + for conn_class in options.supported_reactors: + setup(options) + log.info("==== %s ====" % (conn_class.__name__,)) + + kwargs = {'metrics_enabled': options.enable_metrics, + 'connection_class': conn_class} + if options.protocol_version: + kwargs['protocol_version'] = options.protocol_version + cluster = Cluster(options.hosts, **kwargs) + session = cluster.connect(options.keyspace) + + log.debug("Sleeping for two seconds...") + time.sleep(2.0) + + + # Generate the query + if options.read: + query = "SELECT * FROM {0} WHERE thekey = '{{key}}'".format(TABLE) + else: + query = "INSERT INTO {0} (thekey".format(TABLE) + for i in range(options.num_columns): + query += ", col{0}".format(i) + + query += ") VALUES ('{key}'" + for i in range(options.num_columns): + query += ", {0}".format(COLUMN_VALUES[options.column_type]) + query += ")" + + values = None # we don't use that 
anymore. Keeping it in case we go back to prepared statements. + per_thread = options.num_ops // options.threads + threads = [] + + log.debug("Beginning {0}...".format('reads' if options.read else 'inserts')) + start = time.time() + try: + for i in range(options.threads): + thread = thread_class( + i, session, query, values, per_thread, + cluster.protocol_version, options.profile) + thread.daemon = True + threads.append(thread) + + for thread in threads: + thread.start() + + for thread in threads: + while thread.is_alive(): + thread.join(timeout=0.5) + + end = time.time() + finally: + cluster.shutdown() + teardown(options) + + total = end - start + log.info("Total time: %0.2fs" % total) + log.info("Average throughput: %0.2f/sec" % (options.num_ops / total)) + if options.enable_metrics: + stats = scales.getStats()['cassandra'] + log.info("Connection errors: %d", stats['connection_errors']) + log.info("Write timeouts: %d", stats['write_timeouts']) + log.info("Read timeouts: %d", stats['read_timeouts']) + log.info("Unavailables: %d", stats['unavailables']) + log.info("Other errors: %d", stats['other_errors']) + log.info("Retries: %d", stats['retries']) + + request_timer = stats['request_timer'] + log.info("Request latencies:") + log.info(" min: %0.4fs", request_timer['min']) + log.info(" max: %0.4fs", request_timer['max']) + log.info(" mean: %0.4fs", request_timer['mean']) + log.info(" stddev: %0.4fs", request_timer['stddev']) + log.info(" median: %0.4fs", request_timer['median']) + log.info(" 75th: %0.4fs", request_timer['75percentile']) + log.info(" 95th: %0.4fs", request_timer['95percentile']) + log.info(" 98th: %0.4fs", request_timer['98percentile']) + log.info(" 99th: %0.4fs", request_timer['99percentile']) + log.info(" 99.9th: %0.4fs", request_timer['999percentile']) + + +def parse_options(): + parser = OptionParser() + parser.add_option('-H', '--hosts', default='127.0.0.1', + help='cassandra hosts to connect to (comma-separated list) [default: %default]') + 
parser.add_option('-t', '--threads', type='int', default=1, + help='number of threads [default: %default]') + parser.add_option('-n', '--num-ops', type='int', default=10000, + help='number of operations [default: %default]') + parser.add_option('--asyncore-only', action='store_true', dest='asyncore_only', + help='only benchmark with asyncore connections') + parser.add_option('--asyncio-only', action='store_true', dest='asyncio_only', + help='only benchmark with asyncio connections') + parser.add_option('--libev-only', action='store_true', dest='libev_only', + help='only benchmark with libev connections') + parser.add_option('--twisted-only', action='store_true', dest='twisted_only', + help='only benchmark with Twisted connections') + parser.add_option('-m', '--metrics', action='store_true', dest='enable_metrics', + help='enable and print metrics for operations') + parser.add_option('-l', '--log-level', default='info', + help='logging level: debug, info, warning, or error') + parser.add_option('-p', '--profile', action='store_true', dest='profile', + help='Profile the run') + parser.add_option('--protocol-version', type='int', dest='protocol_version', default=4, + help='Native protocol version to use') + parser.add_option('-c', '--num-columns', type='int', dest='num_columns', default=2, + help='Specify the number of columns for the schema') + parser.add_option('-k', '--keyspace', type='str', dest='keyspace', default=KEYSPACE, + help='Specify the keyspace name for the schema') + parser.add_option('--keep-data', action='store_true', dest='keep_data', default=False, + help='Keep the data after the benchmark') + parser.add_option('--column-type', type='str', dest='column_type', default='text', + help='Specify the column type for the schema (supported: int, text, float, uuid, timestamp)') + parser.add_option('--read', action='store_true', dest='read', default=False, + help='Read mode') + + + options, args = parser.parse_args() + + options.hosts = options.hosts.split(',') 
+ + level = options.log_level.upper() + try: + log.setLevel(_log_levels[level]) + except KeyError: + log.warning("Unknown log level specified: %s; specify one of %s", options.log_level, _log_levels.keys()) + + if options.asyncore_only: + options.supported_reactors = [AsyncoreConnection] + elif options.asyncio_only: + if not have_asyncio: + log.error("asyncio is not available") + sys.exit(1) + options.supported_reactors = [AsyncioConnection] + elif options.libev_only: + if not have_libev: + log.error("libev is not available") + sys.exit(1) + options.supported_reactors = [LibevConnection] + elif options.twisted_only: + if not have_twisted: + log.error("Twisted is not available") + sys.exit(1) + options.supported_reactors = [TwistedConnection] + else: + options.supported_reactors = supported_reactors + if not have_libev: + log.warning("Not benchmarking libev reactor because libev is not available") + + return options, args + + +class BenchmarkThread(Thread): + + def __init__(self, thread_num, session, query, values, num_queries, protocol_version, profile): + Thread.__init__(self) + self.thread_num = thread_num + self.session = session + self.query = query + self.values = values + self.num_queries = num_queries + self.protocol_version = protocol_version + self.profiler = Profile() if profile else None + + def start_profile(self): + if self.profiler: + self.profiler.enable() + + def run_query(self, key, **kwargs): + return self.session.execute_async(self.query.format(key=key), **kwargs) + + def finish_profile(self): + if self.profiler: + self.profiler.disable() + self.profiler.dump_stats('profile-%d' % self.thread_num) diff --git a/benchmarks/callback_full_pipeline.py b/benchmarks/callback_full_pipeline.py new file mode 100644 index 0000000..e3ecfe3 --- /dev/null +++ b/benchmarks/callback_full_pipeline.py @@ -0,0 +1,67 @@ +# Copyright DataStax, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging + +from itertools import count +from threading import Event + +from base import benchmark, BenchmarkThread +from six.moves import range + +log = logging.getLogger(__name__) + + +sentinel = object() + + +class Runner(BenchmarkThread): + + def __init__(self, *args, **kwargs): + BenchmarkThread.__init__(self, *args, **kwargs) + self.num_started = count() + self.num_finished = count() + self.event = Event() + + def insert_next(self, previous_result=sentinel): + if previous_result is not sentinel: + if isinstance(previous_result, BaseException): + log.error("Error on insert: %r", previous_result) + if next(self.num_finished) >= self.num_queries: + self.event.set() + + i = next(self.num_started) + if i <= self.num_queries: + key = "{0}-{1}".format(self.thread_num, i) + future = self.run_query(key, timeout=None) + future.add_callbacks(self.insert_next, self.insert_next) + + def run(self): + self.start_profile() + + if self.protocol_version >= 3: + concurrency = 1000 + else: + concurrency = 100 + + for _ in range(min(concurrency, self.num_queries)): + self.insert_next() + + self.event.wait() + + self.finish_profile() + + +if __name__ == "__main__": + benchmark(Runner) diff --git a/benchmarks/future_batches.py b/benchmarks/future_batches.py new file mode 100644 index 0000000..8cd915e --- /dev/null +++ b/benchmarks/future_batches.py @@ -0,0 +1,52 @@ +# Copyright DataStax, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging +from base import benchmark, BenchmarkThread +from six.moves import queue + +log = logging.getLogger(__name__) + + +class Runner(BenchmarkThread): + + def run(self): + futures = queue.Queue(maxsize=121) + + self.start_profile() + + for i in range(self.num_queries): + if i > 0 and i % 120 == 0: + # clear the existing queue + while True: + try: + futures.get_nowait().result() + except queue.Empty: + break + + key = "{0}-{1}".format(self.thread_num, i) + future = self.run_query(key) + futures.put_nowait(future) + + while True: + try: + futures.get_nowait().result() + except queue.Empty: + break + + self.finish_profile() + + +if __name__ == "__main__": + benchmark(Runner) diff --git a/benchmarks/future_full_pipeline.py b/benchmarks/future_full_pipeline.py new file mode 100644 index 0000000..9a9fcfc --- /dev/null +++ b/benchmarks/future_full_pipeline.py @@ -0,0 +1,48 @@ +# Copyright DataStax, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import logging +from base import benchmark, BenchmarkThread +from six.moves import queue + +log = logging.getLogger(__name__) + + +class Runner(BenchmarkThread): + + def run(self): + futures = queue.Queue(maxsize=121) + + self.start_profile() + + for i in range(self.num_queries): + if i >= 120: + old_future = futures.get_nowait() + old_future.result() + + key = "{}-{}".format(self.thread_num, i) + future = self.run_query(key) + futures.put_nowait(future) + + while True: + try: + futures.get_nowait().result() + except queue.Empty: + break + + self.finish_profile() + + +if __name__ == "__main__": + benchmark(Runner) diff --git a/benchmarks/future_full_throttle.py b/benchmarks/future_full_throttle.py new file mode 100644 index 0000000..b4ba951 --- /dev/null +++ b/benchmarks/future_full_throttle.py @@ -0,0 +1,40 @@ +# Copyright DataStax, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import logging + +from base import benchmark, BenchmarkThread + +log = logging.getLogger(__name__) + +class Runner(BenchmarkThread): + + def run(self): + futures = [] + + self.start_profile() + + for i in range(self.num_queries): + key = "{0}-{1}".format(self.thread_num, i) + future = self.run_query(key) + futures.append(future) + + for future in futures: + future.result() + + self.finish_profile() + + +if __name__ == "__main__": + benchmark(Runner) diff --git a/benchmarks/sync.py b/benchmarks/sync.py new file mode 100644 index 0000000..f2a45fc --- /dev/null +++ b/benchmarks/sync.py @@ -0,0 +1,31 @@ +# Copyright DataStax, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from base import benchmark, BenchmarkThread +from six.moves import range + + +class Runner(BenchmarkThread): + + def run(self): + self.start_profile() + + for _ in range(self.num_queries): + self.session.execute(self.query, self.values) + + self.finish_profile() + + +if __name__ == "__main__": + benchmark(Runner) diff --git a/build.yaml b/build.yaml new file mode 100644 index 0000000..335de1e --- /dev/null +++ b/build.yaml @@ -0,0 +1,255 @@ +schedules: + nightly_master: + schedule: nightly + branches: + include: [master] + env_vars: | + EVENT_LOOP_MANAGER='libev' + matrix: + exclude: + - python: [3.4, 3.6, 3.7] + - cassandra: ['2.1', '3.0', 'test-dse'] + + commit_long_test: + schedule: per_commit + disable_pull_requests: true + branches: + include: [/long-python.*/] + env_vars: | + EVENT_LOOP_MANAGER='libev' + matrix: + exclude: + - python: [3.4, 3.6, 3.7] + - cassandra: ['2.1', '3.0', 'test-dse'] + + commit_branches: + schedule: per_commit + disable_pull_requests: true + branches: + include: [/python.*/] + env_vars: | + EVENT_LOOP_MANAGER='libev' + EXCLUDE_LONG=1 + matrix: + exclude: + - python: [3.4, 3.6, 3.7] + - cassandra: ['2.1', '3.0', 'test-dse'] + + commit_branches_dev: + schedule: per_commit + disable_pull_requests: true + branches: + include: [/dev-python.*/] + env_vars: | + EVENT_LOOP_MANAGER='libev' + EXCLUDE_LONG=1 + matrix: + exclude: + - python: [2.7, 3.4, 3.6, 3.7] + - cassandra: ['2.0', '2.1', '2.2', '3.0', 'test-dse'] + + release_test: + schedule: per_commit + disable_pull_requests: true + branches: + include: [/release-.+/] + env_vars: | + EVENT_LOOP_MANAGER='libev' + + weekly_master: + schedule: 0 10 * * 6 + disable_pull_requests: true + branches: + include: [master] + env_vars: | + EVENT_LOOP_MANAGER='libev' + matrix: + exclude: + - python: [3.5] + - cassandra: ['2.2', '3.1'] + + weekly_gevent: + schedule: 0 14 * * 6 + disable_pull_requests: true + branches: + include: [master] + env_vars: | + EVENT_LOOP_MANAGER='gevent' + JUST_EVENT_LOOP=1 
+ matrix: + exclude: + - python: [3.4] + + weekly_eventlet: + schedule: 0 18 * * 6 + disable_pull_requests: true + branches: + include: [master] + env_vars: | + EVENT_LOOP_MANAGER='eventlet' + JUST_EVENT_LOOP=1 + matrix: + exclude: + - python: [3.4] + + weekly_asyncio: + schedule: 0 22 * * 6 + disable_pull_requests: true + branches: + include: [master] + env_vars: | + EVENT_LOOP_MANAGER='asyncio' + JUST_EVENT_LOOP=1 + matrix: + exclude: + - python: [2.7] + + weekly_async: + schedule: 0 10 * * 7 + disable_pull_requests: true + branches: + include: [master] + env_vars: | + EVENT_LOOP_MANAGER='asyncore' + JUST_EVENT_LOOP=1 + matrix: + exclude: + - python: [3.4] + + weekly_twister: + schedule: 0 14 * * 7 + disable_pull_requests: true + branches: + include: [master] + env_vars: | + EVENT_LOOP_MANAGER='twisted' + JUST_EVENT_LOOP=1 + matrix: + exclude: + - python: [3.4] + + upgrade_tests: + schedule: adhoc + branches: + include: [master, python-546] + env_vars: | + EVENT_LOOP_MANAGER='libev' + JUST_UPGRADE=True + matrix: + exclude: + - python: [3.4, 3.6, 3.7] + - cassandra: ['2.0', '2.1', '2.2', '3.0', 'test-dse'] + +python: + - 2.7 + - 3.4 + - 3.5 + - 3.6 + - 3.7 + +os: + - ubuntu/bionic64/python-driver + +cassandra: + - '2.1' + - '2.2' + - '3.0' + - '3.11' + - 'test-dse' + +env: + CYTHON: + - CYTHON + - NO_CYTHON + +build: + - script: | + export JAVA_HOME=$CCM_JAVA_HOME + export PATH=$JAVA_HOME/bin:$PATH + export PYTHONPATH="" + + # Required for unix socket tests + sudo apt-get install socat + + # Install latest setuptools + pip install --upgrade pip + pip install -U setuptools + + pip install git+ssh://git@github.com/riptano/ccm-private.git + + pip install -r test-requirements.txt + pip install nose-ignore-docstring + pip install nose-exclude + pip install service_identity + + FORCE_CYTHON=False + if [[ $CYTHON == 'CYTHON' ]]; then + FORCE_CYTHON=True + pip install cython + pip install numpy + # Install the driver & compile C extensions + python setup.py build_ext 
--inplace + else + # Install the driver & compile C extensions with no cython + python setup.py build_ext --inplace --no-cython + fi + + echo "JUST_UPGRADE: $JUST_UPGRADE" + if [[ $JUST_UPGRADE == 'True' ]]; then + EVENT_LOOP_MANAGER=$EVENT_LOOP_MANAGER VERIFY_CYTHON=$FORCE_CYTHON nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=upgrade_results.xml tests/integration/upgrade || true + exit 0 + fi + + if [[ $CCM_IS_DSE == 'true' ]]; then + # We only use a DSE version for unreleased DSE versions, so we only need to run the smoke tests here + echo "CCM_IS_DSE: $CCM_IS_DSE" + echo "==========RUNNING SMOKE TESTS===========" + EVENT_LOOP_MANAGER=$EVENT_LOOP_MANAGER CCM_ARGS="$CCM_ARGS" CASSANDRA_VERSION=$CCM_CASSANDRA_VERSION DSE_VERSION='6.7.0' MAPPED_CASSANDRA_VERSION=$MAPPED_CASSANDRA_VERSION VERIFY_CYTHON=$FORCE_CYTHON nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=standard_results.xml tests/integration/standard/test_dse.py || true + exit 0 + fi + + # Run the unit tests, this is not done in travis because + # it takes too much time for the whole matrix to build with cython + if [[ $CYTHON == 'CYTHON' ]]; then + EVENT_LOOP_MANAGER=$EVENT_LOOP_MANAGER VERIFY_CYTHON=1 nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=unit_results.xml tests/unit/ || true + EVENT_LOOP_MANAGER=eventlet VERIFY_CYTHON=1 nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=unit_eventlet_results.xml tests/unit/io/test_eventletreactor.py || true + EVENT_LOOP_MANAGER=gevent VERIFY_CYTHON=1 nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=unit_gevent_results.xml 
tests/unit/io/test_geventreactor.py || true + fi + + if [ -n "$JUST_EVENT_LOOP" ]; then + echo "Running integration event loop subset with $EVENT_LOOP_MANAGER" + EVENT_LOOP_TESTS=( + "tests/integration/standard/test_cluster.py" + "tests/integration/standard/test_concurrent.py" + "tests/integration/standard/test_connection.py" + "tests/integration/standard/test_control_connection.py" + "tests/integration/standard/test_metrics.py" + "tests/integration/standard/test_query.py" + "tests/integration/simulacron/test_endpoint.py" + ) + EVENT_LOOP_MANAGER=$EVENT_LOOP_MANAGER CCM_ARGS="$CCM_ARGS" CASSANDRA_VERSION=$CCM_CASSANDRA_VERSION MAPPED_CASSANDRA_VERSION=$MAPPED_CASSANDRA_VERSION VERIFY_CYTHON=$FORCE_CYTHON nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=standard_results.xml ${EVENT_LOOP_TESTS[@]} || true + exit 0 + fi + + echo "Running with event loop manager: $EVENT_LOOP_MANAGER" + echo "==========RUNNING SIMULACRON TESTS==========" + SIMULACRON_JAR="$HOME/simulacron.jar" + SIMULACRON_JAR=$SIMULACRON_JAR EVENT_LOOP_MANAGER=$EVENT_LOOP_MANAGER CASSANDRA_DIR=$CCM_INSTALL_DIR CCM_ARGS="$CCM_ARGS" DSE_VERSION=$CCM_CASSANDRA_VERSION MAPPED_CASSANDRA_VERSION=$MAPPED_CASSANDRA_VERSION VERIFY_CYTHON=$FORCE_CYTHON nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=simulacron_results.xml tests/integration/simulacron/ || true + + echo "Running with event loop manager: $EVENT_LOOP_MANAGER" + echo "==========RUNNING CQLENGINE TESTS==========" + EVENT_LOOP_MANAGER=$EVENT_LOOP_MANAGER CCM_ARGS="$CCM_ARGS" CASSANDRA_VERSION=$CCM_CASSANDRA_VERSION MAPPED_CASSANDRA_VERSION=$MAPPED_CASSANDRA_VERSION VERIFY_CYTHON=$FORCE_CYTHON nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=cqle_results.xml tests/integration/cqlengine/ || 
true + + echo "==========RUNNING INTEGRATION TESTS==========" + EVENT_LOOP_MANAGER=$EVENT_LOOP_MANAGER CCM_ARGS="$CCM_ARGS" CASSANDRA_VERSION=$CCM_CASSANDRA_VERSION MAPPED_CASSANDRA_VERSION=$MAPPED_CASSANDRA_VERSION VERIFY_CYTHON=$FORCE_CYTHON nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=standard_results.xml tests/integration/standard/ || true + + echo "==========RUNNING ADVANCED AND CLOUD TESTS==========" + EVENT_LOOP_MANAGER=$EVENT_LOOP_MANAGER CLOUD_PROXY_PATH="$HOME/proxy/" CASSANDRA_VERSION=$CCM_CASSANDRA_VERSION MAPPED_CASSANDRA_VERSION=$MAPPED_CASSANDRA_VERSION VERIFY_CYTHON=$FORCE_CYTHON nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --with-ignore-docstrings --with-xunit --xunit-file=advanced_results.xml tests/integration/advanced/ || true + + if [ -z "$EXCLUDE_LONG" ]; then + echo "==========RUNNING LONG INTEGRATION TESTS==========" + EVENT_LOOP_MANAGER=$EVENT_LOOP_MANAGER CCM_ARGS="$CCM_ARGS" CASSANDRA_VERSION=$CCM_CASSANDRA_VERSION MAPPED_CASSANDRA_VERSION=$MAPPED_CASSANDRA_VERSION VERIFY_CYTHON=$FORCE_CYTHON nosetests -s -v --logging-format="[%(levelname)s] %(asctime)s %(thread)d: %(message)s" --exclude-dir=tests/integration/long/upgrade --with-ignore-docstrings --with-xunit --xunit-file=long_results.xml tests/integration/long/ || true + fi + + - xunit: + - "*_results.xml" diff --git a/cassandra_driver.egg-info/PKG-INFO b/cassandra_driver.egg-info/PKG-INFO deleted file mode 100644 index 79d9098..0000000 --- a/cassandra_driver.egg-info/PKG-INFO +++ /dev/null @@ -1,114 +0,0 @@ -Metadata-Version: 1.1 -Name: cassandra-driver -Version: 3.20.2 -Summary: Python driver for Cassandra -Home-page: http://github.com/datastax/python-driver -Author: Tyler Hobbs -Author-email: tyler@datastax.com -License: UNKNOWN -Description: DataStax Python Driver for Apache Cassandra - =========================================== - - .. 
image:: https://travis-ci.org/datastax/python-driver.png?branch=master - :target: https://travis-ci.org/datastax/python-driver - - A modern, `feature-rich `_ and highly-tunable Python client library for Apache Cassandra (2.1+) using exclusively Cassandra's binary protocol and Cassandra Query Language v3. - - The driver supports Python 2.7, 3.4, 3.5, 3.6 and 3.7. - - If you require compatibility with DataStax Enterprise, use the `DataStax Enterprise Python Driver `_. - - **Note:** DataStax products do not support big-endian systems. - - Feedback Requested - ------------------ - **Help us focus our efforts!** Provide your input on the `Platform and Runtime Survey `_ (we kept it short). - - Features - -------- - * `Synchronous `_ and `Asynchronous `_ APIs - * `Simple, Prepared, and Batch statements `_ - * Asynchronous IO, parallel execution, request pipelining - * `Connection pooling `_ - * Automatic node discovery - * `Automatic reconnection `_ - * Configurable `load balancing `_ and `retry policies `_ - * `Concurrent execution utilities `_ - * `Object mapper `_ - * `Connecting to DataStax Apollo database (cloud) `_ - - Installation - ------------ - Installation through pip is recommended:: - - $ pip install cassandra-driver - - For more complete installation instructions, see the - `installation guide `_. - - Documentation - ------------- - The documentation can be found online `here `_. - - A couple of links for getting up to speed: - - * `Installation `_ - * `Getting started guide `_ - * `API docs `_ - * `Performance tips `_ - - Object Mapper - ------------- - cqlengine (originally developed by Blake Eggleston and Jon Haddad, with contributions from the - community) is now maintained as an integral part of this package. Refer to - `documentation here `_. - - Contributing - ------------ - See `CONTRIBUTING.md `_. - - Reporting Problems - ------------------ - Please report any bugs and make any feature requests on the - `JIRA `_ issue tracker. 
- - If you would like to contribute, please feel free to open a pull request. - - Getting Help - ------------ - Your best options for getting help with the driver are the - `mailing list `_ - and the ``#datastax-drivers`` channel in the `DataStax Academy Slack `_. - - License - ------- - Copyright DataStax, Inc. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. - -Keywords: cassandra,cql,orm -Platform: UNKNOWN -Classifier: Development Status :: 5 - Production/Stable -Classifier: Intended Audience :: Developers -Classifier: License :: OSI Approved :: Apache Software License -Classifier: Natural Language :: English -Classifier: Operating System :: OS Independent -Classifier: Programming Language :: Python -Classifier: Programming Language :: Python :: 2.7 -Classifier: Programming Language :: Python :: 3.4 -Classifier: Programming Language :: Python :: 3.5 -Classifier: Programming Language :: Python :: 3.6 -Classifier: Programming Language :: Python :: 3.7 -Classifier: Programming Language :: Python :: Implementation :: CPython -Classifier: Programming Language :: Python :: Implementation :: PyPy -Classifier: Topic :: Software Development :: Libraries :: Python Modules diff --git a/cassandra_driver.egg-info/SOURCES.txt b/cassandra_driver.egg-info/SOURCES.txt deleted file mode 100644 index cb77c19..0000000 --- a/cassandra_driver.egg-info/SOURCES.txt +++ /dev/null @@ -1,200 +0,0 @@ -LICENSE -MANIFEST.in -README.rst -ez_setup.py -setup.py -cassandra/__init__.py -cassandra/auth.py 
-cassandra/buffer.pxd -cassandra/bytesio.pxd -cassandra/bytesio.pyx -cassandra/cluster.py -cassandra/cmurmur3.c -cassandra/compat.py -cassandra/concurrent.py -cassandra/connection.py -cassandra/cqltypes.py -cassandra/cython_deps.py -cassandra/cython_marshal.pyx -cassandra/cython_utils.pxd -cassandra/cython_utils.pyx -cassandra/deserializers.pxd -cassandra/deserializers.pyx -cassandra/encoder.py -cassandra/ioutils.pyx -cassandra/marshal.py -cassandra/metadata.py -cassandra/metrics.py -cassandra/murmur3.py -cassandra/numpyFlags.h -cassandra/numpy_parser.pyx -cassandra/obj_parser.pyx -cassandra/parsing.pxd -cassandra/parsing.pyx -cassandra/policies.py -cassandra/pool.py -cassandra/protocol.py -cassandra/query.py -cassandra/row_parser.pyx -cassandra/timestamps.py -cassandra/tuple.pxd -cassandra/type_codes.pxd -cassandra/type_codes.py -cassandra/util.py -cassandra/cqlengine/__init__.py -cassandra/cqlengine/columns.py -cassandra/cqlengine/connection.py -cassandra/cqlengine/functions.py -cassandra/cqlengine/management.py -cassandra/cqlengine/models.py -cassandra/cqlengine/named.py -cassandra/cqlengine/operators.py -cassandra/cqlengine/query.py -cassandra/cqlengine/statements.py -cassandra/cqlengine/usertype.py -cassandra/datastax/__init__.py -cassandra/datastax/cloud/__init__.py -cassandra/io/__init__.py -cassandra/io/asyncioreactor.py -cassandra/io/asyncorereactor.py -cassandra/io/eventletreactor.py -cassandra/io/geventreactor.py -cassandra/io/libevreactor.py -cassandra/io/libevwrapper.c -cassandra/io/twistedreactor.py -cassandra_driver.egg-info/PKG-INFO -cassandra_driver.egg-info/SOURCES.txt -cassandra_driver.egg-info/dependency_links.txt -cassandra_driver.egg-info/requires.txt -cassandra_driver.egg-info/top_level.txt -tests/__init__.py -tests/integration/__init__.py -tests/integration/datatype_utils.py -tests/integration/util.py -tests/integration/cqlengine/__init__.py -tests/integration/cqlengine/base.py -tests/integration/cqlengine/test_batch_query.py 
-tests/integration/cqlengine/test_connections.py -tests/integration/cqlengine/test_consistency.py -tests/integration/cqlengine/test_context_query.py -tests/integration/cqlengine/test_ifexists.py -tests/integration/cqlengine/test_ifnotexists.py -tests/integration/cqlengine/test_lwt_conditional.py -tests/integration/cqlengine/test_timestamp.py -tests/integration/cqlengine/test_ttl.py -tests/integration/cqlengine/columns/__init__.py -tests/integration/cqlengine/columns/test_container_columns.py -tests/integration/cqlengine/columns/test_counter_column.py -tests/integration/cqlengine/columns/test_static_column.py -tests/integration/cqlengine/columns/test_validation.py -tests/integration/cqlengine/columns/test_value_io.py -tests/integration/cqlengine/connections/__init__.py -tests/integration/cqlengine/connections/test_connection.py -tests/integration/cqlengine/management/__init__.py -tests/integration/cqlengine/management/test_compaction_settings.py -tests/integration/cqlengine/management/test_management.py -tests/integration/cqlengine/model/__init__.py -tests/integration/cqlengine/model/test_class_construction.py -tests/integration/cqlengine/model/test_equality_operations.py -tests/integration/cqlengine/model/test_model.py -tests/integration/cqlengine/model/test_model_io.py -tests/integration/cqlengine/model/test_polymorphism.py -tests/integration/cqlengine/model/test_udts.py -tests/integration/cqlengine/model/test_updates.py -tests/integration/cqlengine/model/test_value_lists.py -tests/integration/cqlengine/operators/__init__.py -tests/integration/cqlengine/operators/test_where_operators.py -tests/integration/cqlengine/query/__init__.py -tests/integration/cqlengine/query/test_batch_query.py -tests/integration/cqlengine/query/test_datetime_queries.py -tests/integration/cqlengine/query/test_named.py -tests/integration/cqlengine/query/test_queryoperators.py -tests/integration/cqlengine/query/test_queryset.py -tests/integration/cqlengine/query/test_updates.py 
-tests/integration/cqlengine/statements/__init__.py -tests/integration/cqlengine/statements/test_assignment_clauses.py -tests/integration/cqlengine/statements/test_base_clause.py -tests/integration/cqlengine/statements/test_base_statement.py -tests/integration/cqlengine/statements/test_delete_statement.py -tests/integration/cqlengine/statements/test_insert_statement.py -tests/integration/cqlengine/statements/test_select_statement.py -tests/integration/cqlengine/statements/test_update_statement.py -tests/integration/cqlengine/statements/test_where_clause.py -tests/integration/long/__init__.py -tests/integration/long/test_consistency.py -tests/integration/long/test_failure_types.py -tests/integration/long/test_ipv6.py -tests/integration/long/test_large_data.py -tests/integration/long/test_loadbalancingpolicies.py -tests/integration/long/test_schema.py -tests/integration/long/test_ssl.py -tests/integration/long/utils.py -tests/integration/simulacron/__init__.py -tests/integration/simulacron/test_cluster.py -tests/integration/simulacron/test_connection.py -tests/integration/simulacron/test_policies.py -tests/integration/simulacron/utils.py -tests/integration/standard/__init__.py -tests/integration/standard/test_authentication.py -tests/integration/standard/test_client_warnings.py -tests/integration/standard/test_cluster.py -tests/integration/standard/test_concurrent.py -tests/integration/standard/test_connection.py -tests/integration/standard/test_control_connection.py -tests/integration/standard/test_custom_payload.py -tests/integration/standard/test_custom_protocol_handler.py -tests/integration/standard/test_cython_protocol_handlers.py -tests/integration/standard/test_dse.py -tests/integration/standard/test_metadata.py -tests/integration/standard/test_metrics.py -tests/integration/standard/test_policies.py -tests/integration/standard/test_prepared_statements.py -tests/integration/standard/test_query.py -tests/integration/standard/test_query_paging.py 
-tests/integration/standard/test_routing.py -tests/integration/standard/test_row_factories.py -tests/integration/standard/test_types.py -tests/integration/standard/test_udts.py -tests/integration/standard/utils.py -tests/integration/upgrade/__init__.py -tests/integration/upgrade/test_upgrade.py -tests/unit/__init__.py -tests/unit/test_cluster.py -tests/unit/test_concurrent.py -tests/unit/test_connection.py -tests/unit/test_control_connection.py -tests/unit/test_exception.py -tests/unit/test_marshalling.py -tests/unit/test_metadata.py -tests/unit/test_orderedmap.py -tests/unit/test_parameter_binding.py -tests/unit/test_policies.py -tests/unit/test_protocol.py -tests/unit/test_query.py -tests/unit/test_response_future.py -tests/unit/test_resultset.py -tests/unit/test_sortedset.py -tests/unit/test_time_util.py -tests/unit/test_timestamps.py -tests/unit/test_types.py -tests/unit/test_util_types.py -tests/unit/utils.py -tests/unit/cqlengine/__init__.py -tests/unit/cqlengine/test_columns.py -tests/unit/cqlengine/test_connection.py -tests/unit/cqlengine/test_udt.py -tests/unit/cython/__init__.py -tests/unit/cython/test_bytesio.py -tests/unit/cython/test_types.py -tests/unit/cython/test_utils.py -tests/unit/cython/utils.py -tests/unit/io/__init__.py -tests/unit/io/eventlet_utils.py -tests/unit/io/gevent_utils.py -tests/unit/io/test_asyncioreactor.py -tests/unit/io/test_asyncorereactor.py -tests/unit/io/test_eventletreactor.py -tests/unit/io/test_geventreactor.py -tests/unit/io/test_libevreactor.py -tests/unit/io/test_twistedreactor.py -tests/unit/io/utils.py \ No newline at end of file diff --git a/cassandra_driver.egg-info/dependency_links.txt b/cassandra_driver.egg-info/dependency_links.txt deleted file mode 100644 index 8b13789..0000000 --- a/cassandra_driver.egg-info/dependency_links.txt +++ /dev/null @@ -1 +0,0 @@ - diff --git a/cassandra_driver.egg-info/requires.txt b/cassandra_driver.egg-info/requires.txt deleted file mode 100644 index e323a45..0000000 --- 
a/cassandra_driver.egg-info/requires.txt +++ /dev/null @@ -1 +0,0 @@ -six>=1.9 diff --git a/cassandra_driver.egg-info/top_level.txt b/cassandra_driver.egg-info/top_level.txt deleted file mode 100644 index e26dc81..0000000 --- a/cassandra_driver.egg-info/top_level.txt +++ /dev/null @@ -1,2 +0,0 @@ -DUMMY -cassandra diff --git a/docs.yaml b/docs.yaml new file mode 100644 index 0000000..6212699 --- /dev/null +++ b/docs.yaml @@ -0,0 +1,65 @@ +title: DataStax Python Driver for Apache Cassandra +summary: DataStax Python Driver for Apache Cassandra Documentation +output: docs/_build/ +swiftype_drivers: pythondrivers +checks: + external_links: + exclude: + - 'http://aka.ms/vcpython27' +sections: + - title: N/A + prefix: / + type: sphinx + directory: docs + virtualenv_init: | + set -x + CASS_DRIVER_NO_CYTHON=1 pip install -r test-requirements.txt + # for newer versions this is redundant, but in older versions we need to + # install, e.g., the cassandra driver, and those versions don't specify + # the cassandra driver version in requirements files + CASS_DRIVER_NO_CYTHON=1 python setup.py develop + pip install "jinja2==2.8.1;python_version<'3.6'" "sphinx>=1.3,<2" geomet + # build extensions like libev + CASS_DRIVER_NO_CYTHON=1 python setup.py build_ext --inplace --force +versions: + - name: '3.20' + ref: d30d166f + - name: '3.19' + ref: ac2471f9 + - name: '3.18' + ref: ec36b957 + - name: '3.17' + ref: 38e359e1 + - name: '3.16' + ref: '3.16.0' + - name: '3.15' + ref: '2ce0bd97' + - name: '3.14' + ref: '9af8bd19' + - name: '3.13' + ref: '3.13.0' + - name: '3.12' + ref: '43b9c995' + - name: '3.11' + ref: '3.11.0' + - name: '3.10' + ref: 64572368 + - name: 3.9 + ref: 3.9-doc + - name: 3.8 + ref: 3.8-doc + - name: 3.7 + ref: 3.7-doc + - name: 3.6 + ref: 3.6-doc + - name: 3.5 + ref: 3.5-doc +redirects: + - \A\/(.*)/\Z: /\1.html +rewrites: + - search: cassandra.apache.org/doc/cql3/CQL.html + replace: cassandra.apache.org/doc/cql3/CQL-3.0.html + - search: 
http://www.datastax.com/documentation/cql/3.1/ + replace: https://docs.datastax.com/en/archived/cql/3.1/ + - search: http://www.datastax.com/docs/1.2/cql_cli/cql/BATCH + replace: https://docs.datastax.com/en/dse/6.7/cql/cql/cql_reference/cql_commands/cqlBatch.html diff --git a/docs/.nav b/docs/.nav new file mode 100644 index 0000000..7b39d90 --- /dev/null +++ b/docs/.nav @@ -0,0 +1,14 @@ +installation +getting_started +execution_profiles +lwt +object_mapper +performance +query_paging +security +upgrading +user_defined_types +dates_and_times +cloud +faq +api diff --git a/docs/Makefile b/docs/Makefile new file mode 100644 index 0000000..bf300ec --- /dev/null +++ b/docs/Makefile @@ -0,0 +1,130 @@ +# Makefile for Sphinx documentation +# + +# You can set these variables from the command line. +SPHINXOPTS = +SPHINXBUILD = sphinx-build +PAPER = +BUILDDIR = _build + +# Internal variables. +PAPEROPT_a4 = -D latex_paper_size=a4 +PAPEROPT_letter = -D latex_paper_size=letter +ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
+ +.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest + +help: + @echo "Please use \`make <target>' where <target> is one of" + @echo " html to make standalone HTML files" + @echo " dirhtml to make HTML files named index.html in directories" + @echo " singlehtml to make a single large HTML file" + @echo " pickle to make pickle files" + @echo " json to make JSON files" + @echo " htmlhelp to make HTML files and a HTML help project" + @echo " qthelp to make HTML files and a qthelp project" + @echo " devhelp to make HTML files and a Devhelp project" + @echo " epub to make an epub" + @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " latexpdf to make LaTeX files and run them through pdflatex" + @echo " text to make text files" + @echo " man to make manual pages" + @echo " changes to make an overview of all changed/added/deprecated items" + @echo " linkcheck to check all external links for integrity" + @echo " doctest to run all doctests embedded in the documentation (if enabled)" + +clean: + -rm -rf $(BUILDDIR)/* + +html: + $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html + @echo + @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." + +dirhtml: + $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml + @echo + @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." + +singlehtml: + $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml + @echo + @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." + +pickle: + $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle + @echo + @echo "Build finished; now you can process the pickle files." + +json: + $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json + @echo + @echo "Build finished; now you can process the JSON files."
+ +htmlhelp: + $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp + @echo + @echo "Build finished; now you can run HTML Help Workshop with the" \ + ".hhp project file in $(BUILDDIR)/htmlhelp." + +qthelp: + $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp + @echo + @echo "Build finished; now you can run "qcollectiongenerator" with the" \ + ".qhcp project file in $(BUILDDIR)/qthelp, like this:" + @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/cassandra-driver.qhcp" + @echo "To view the help file:" + @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/cassandra-driver.qhc" + +devhelp: + $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp + @echo + @echo "Build finished." + @echo "To view the help file:" + @echo "# mkdir -p $$HOME/.local/share/devhelp/cassandra-driver" + @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/cassandra-driver" + @echo "# devhelp" + +epub: + $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub + @echo + @echo "Build finished. The epub file is in $(BUILDDIR)/epub." + +latex: + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex + @echo + @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." + @echo "Run \`make' in that directory to run these through (pdf)latex" \ + "(use \`make latexpdf' here to do that automatically)." + +latexpdf: + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex + @echo "Running LaTeX files through pdflatex..." + $(MAKE) -C $(BUILDDIR)/latex all-pdf + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." + +text: + $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text + @echo + @echo "Build finished. The text files are in $(BUILDDIR)/text." + +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man." + +changes: + $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes + @echo + @echo "The overview file is in $(BUILDDIR)/changes." 
+ +linkcheck: + $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck + @echo + @echo "Link check complete; look for any errors in the above output " \ + "or in $(BUILDDIR)/linkcheck/output.txt." + +doctest: + $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest + @echo "Testing of doctests in the sources finished, look at the " \ + "results in $(BUILDDIR)/doctest/output.txt." diff --git a/docs/api/cassandra.rst b/docs/api/cassandra.rst new file mode 100644 index 0000000..d46aae5 --- /dev/null +++ b/docs/api/cassandra.rst @@ -0,0 +1,77 @@ +:mod:`cassandra` - Exceptions and Enums +======================================= + +.. module:: cassandra + +.. data:: __version_info__ + + The version of the driver in a tuple format + +.. data:: __version__ + + The version of the driver in a string format + +.. autoclass:: ConsistencyLevel + :members: + +.. autoclass:: ProtocolVersion + :members: + +.. autoclass:: UserFunctionDescriptor + :members: + :inherited-members: + +.. autoclass:: UserAggregateDescriptor + :members: + :inherited-members: + +.. autoexception:: DriverException() + :members: + +.. autoexception:: RequestExecutionException() + :members: + +.. autoexception:: Unavailable() + :members: + +.. autoexception:: Timeout() + :members: + +.. autoexception:: ReadTimeout() + :members: + +.. autoexception:: WriteTimeout() + :members: + +.. autoexception:: CoordinationFailure() + :members: + +.. autoexception:: ReadFailure() + :members: + +.. autoexception:: WriteFailure() + :members: + +.. autoexception:: FunctionFailure() + :members: + +.. autoexception:: RequestValidationException() + :members: + +.. autoexception:: ConfigurationException() + :members: + +.. autoexception:: AlreadyExists() + :members: + +.. autoexception:: InvalidRequest() + :members: + +.. autoexception:: Unauthorized() + :members: + +.. autoexception:: AuthenticationFailed() + :members: + +.. 
autoexception:: OperationTimedOut() + :members: diff --git a/docs/api/cassandra/auth.rst b/docs/api/cassandra/auth.rst new file mode 100644 index 0000000..58c964c --- /dev/null +++ b/docs/api/cassandra/auth.rst @@ -0,0 +1,22 @@ +``cassandra.auth`` - Authentication +=================================== + +.. module:: cassandra.auth + +.. autoclass:: AuthProvider + :members: + +.. autoclass:: Authenticator + :members: + +.. autoclass:: PlainTextAuthProvider + :members: + +.. autoclass:: PlainTextAuthenticator + :members: + +.. autoclass:: SaslAuthProvider + :members: + +.. autoclass:: SaslAuthenticator + :members: diff --git a/docs/api/cassandra/cluster.rst b/docs/api/cassandra/cluster.rst new file mode 100644 index 0000000..81cf1f0 --- /dev/null +++ b/docs/api/cassandra/cluster.rst @@ -0,0 +1,209 @@ +``cassandra.cluster`` - Clusters and Sessions +============================================= + +.. module:: cassandra.cluster + +.. autoclass:: Cluster ([contact_points=('127.0.0.1',)][, port=9042][, executor_threads=2], **attr_kwargs) + + .. autoattribute:: contact_points + + .. autoattribute:: port + + .. autoattribute:: cql_version + + .. autoattribute:: protocol_version + + .. autoattribute:: compression + + .. autoattribute:: auth_provider + + .. autoattribute:: load_balancing_policy + + .. autoattribute:: reconnection_policy + + .. autoattribute:: default_retry_policy + :annotation: = + + .. autoattribute:: conviction_policy_factory + + .. autoattribute:: address_translator + + .. autoattribute:: metrics_enabled + + .. autoattribute:: metrics + + .. autoattribute:: ssl_context + + .. autoattribute:: ssl_options + + .. autoattribute:: sockopts + + .. autoattribute:: max_schema_agreement_wait + + .. autoattribute:: metadata + + .. autoattribute:: connection_class + + .. autoattribute:: control_connection_timeout + + .. autoattribute:: idle_heartbeat_interval + + .. autoattribute:: idle_heartbeat_timeout + + .. autoattribute:: schema_event_refresh_window + + .. 
autoattribute:: topology_event_refresh_window + + .. autoattribute:: status_event_refresh_window + + .. autoattribute:: prepare_on_all_hosts + + .. autoattribute:: reprepare_on_up + + .. autoattribute:: connect_timeout + + .. autoattribute:: schema_metadata_enabled + :annotation: = True + + .. autoattribute:: token_metadata_enabled + :annotation: = True + + .. autoattribute:: timestamp_generator + + .. autoattribute:: endpoint_factory + + .. autoattribute:: cloud + + .. automethod:: connect + + .. automethod:: shutdown + + .. automethod:: register_user_type + + .. automethod:: register_listener + + .. automethod:: unregister_listener + + .. automethod:: add_execution_profile + + .. automethod:: set_max_requests_per_connection + + .. automethod:: get_max_requests_per_connection + + .. automethod:: set_min_requests_per_connection + + .. automethod:: get_min_requests_per_connection + + .. automethod:: get_core_connections_per_host + + .. automethod:: set_core_connections_per_host + + .. automethod:: get_max_connections_per_host + + .. automethod:: set_max_connections_per_host + + .. automethod:: get_control_connection_host + + .. automethod:: refresh_schema_metadata + + .. automethod:: refresh_keyspace_metadata + + .. automethod:: refresh_table_metadata + + .. automethod:: refresh_user_type_metadata + + .. automethod:: refresh_user_function_metadata + + .. automethod:: refresh_user_aggregate_metadata + + .. automethod:: refresh_nodes + + .. automethod:: set_meta_refresh_enabled + +.. autoclass:: ExecutionProfile (load_balancing_policy=, retry_policy=None, consistency_level=LOCAL_ONE, serial_consistency_level=None, request_timeout=10.0, row_factory=, speculative_execution_policy=None) + :members: + :exclude-members: consistency_level + + .. autoattribute:: consistency_level + :annotation: = LOCAL_ONE + +.. autodata:: EXEC_PROFILE_DEFAULT + :annotation: + +.. autoclass:: Session () + + .. autoattribute:: default_timeout + :annotation: = 10.0 + + .. 
autoattribute:: default_consistency_level + :annotation: = LOCAL_ONE + + .. autoattribute:: default_serial_consistency_level + :annotation: = None + + .. autoattribute:: row_factory + :annotation: = + + .. autoattribute:: default_fetch_size + + .. autoattribute:: use_client_timestamp + + .. autoattribute:: timestamp_generator + + .. autoattribute:: encoder + + .. autoattribute:: client_protocol_handler + + .. automethod:: execute(statement[, parameters][, timeout][, trace][, custom_payload][, paging_state][, host]) + + .. automethod:: execute_async(statement[, parameters][, trace][, custom_payload][, paging_state][, host]) + + .. automethod:: prepare(statement) + + .. automethod:: shutdown() + + .. automethod:: set_keyspace(keyspace) + + .. automethod:: get_execution_profile + + .. automethod:: execution_profile_clone_update + + .. automethod:: add_request_init_listener + + .. automethod:: remove_request_init_listener + +.. autoclass:: ResponseFuture () + + .. autoattribute:: query + + .. automethod:: result() + + .. automethod:: get_query_trace() + + .. automethod:: get_all_query_traces() + + .. autoattribute:: custom_payload() + + .. autoattribute:: is_schema_agreed + + .. autoattribute:: has_more_pages + + .. autoattribute:: warnings + + .. automethod:: start_fetching_next_page() + + .. automethod:: add_callback(fn, *args, **kwargs) + + .. automethod:: add_errback(fn, *args, **kwargs) + + .. automethod:: add_callbacks(callback, errback, callback_args=(), callback_kwargs=None, errback_args=(), errback_kwargs=None) + +.. autoclass:: ResultSet () + :members: + +.. autoexception:: QueryExhausted () + +.. autoexception:: NoHostAvailable () + :members: + +.. 
autoexception:: UserTypeDoesNotExist () diff --git a/docs/api/cassandra/concurrent.rst b/docs/api/cassandra/concurrent.rst new file mode 100644 index 0000000..f4bab6f --- /dev/null +++ b/docs/api/cassandra/concurrent.rst @@ -0,0 +1,8 @@ +``cassandra.concurrent`` - Utilities for Concurrent Statement Execution +======================================================================= + +.. module:: cassandra.concurrent + +.. autofunction:: execute_concurrent + +.. autofunction:: execute_concurrent_with_args diff --git a/docs/api/cassandra/connection.rst b/docs/api/cassandra/connection.rst new file mode 100644 index 0000000..32cca59 --- /dev/null +++ b/docs/api/cassandra/connection.rst @@ -0,0 +1,21 @@ +``cassandra.connection`` - Low Level Connection Info +==================================================== + +.. module:: cassandra.connection + +.. autoexception:: ConnectionException () +.. autoexception:: ConnectionShutdown () +.. autoexception:: ConnectionBusy () +.. autoexception:: ProtocolError () + +.. autoclass:: EndPoint + :members: + +.. autoclass:: EndPointFactory + :members: + +.. autoclass:: SniEndPoint + +.. autoclass:: SniEndPointFactory + +.. autoclass:: UnixSocketEndPoint diff --git a/docs/api/cassandra/cqlengine/columns.rst b/docs/api/cassandra/cqlengine/columns.rst new file mode 100644 index 0000000..d44be8a --- /dev/null +++ b/docs/api/cassandra/cqlengine/columns.rst @@ -0,0 +1,89 @@ +``cassandra.cqlengine.columns`` - Column types for object mapping models +======================================================================== + +.. module:: cassandra.cqlengine.columns + +Columns +------- + +Columns in your models map to columns in your CQL table. You define CQL columns by defining column attributes on your model classes. +For a model to be valid it needs at least one primary key column and one non-primary key column. 
+ +Just as in CQL, the order in which you define your columns is important: it is the order in which they are defined in the model's corresponding table. + +Each column in your model definition needs to be an instance of a Column class. + +.. autoclass:: Column(**kwargs) + + .. autoattribute:: primary_key + + .. autoattribute:: partition_key + + .. autoattribute:: index + + .. autoattribute:: custom_index + + .. autoattribute:: db_field + + .. autoattribute:: default + + .. autoattribute:: required + + .. autoattribute:: clustering_order + + .. autoattribute:: discriminator_column + + .. autoattribute:: static + +Column Types +------------ + +Columns of all types are initialized by passing :class:`.Column` attributes to the constructor by keyword. + +.. autoclass:: Ascii(**kwargs) + +.. autoclass:: BigInt(**kwargs) + +.. autoclass:: Blob(**kwargs) + +.. autoclass:: Bytes(**kwargs) + +.. autoclass:: Boolean(**kwargs) + +.. autoclass:: Counter + +.. autoclass:: Date(**kwargs) + +.. autoclass:: DateTime(**kwargs) + + .. autoattribute:: truncate_microseconds + +.. autoclass:: Decimal(**kwargs) + +.. autoclass:: Double(**kwargs) + +.. autoclass:: Float + +.. autoclass:: Integer(**kwargs) + +.. autoclass:: List + +.. autoclass:: Map + +.. autoclass:: Set + +.. autoclass:: SmallInt(**kwargs) + +.. autoclass:: Text + +.. autoclass:: Time(**kwargs) + +.. autoclass:: TimeUUID(**kwargs) + +.. autoclass:: TinyInt(**kwargs) + +.. autoclass:: UserDefinedType + +.. autoclass:: UUID(**kwargs) + +.. autoclass:: VarInt(**kwargs) diff --git a/docs/api/cassandra/cqlengine/connection.rst b/docs/api/cassandra/cqlengine/connection.rst new file mode 100644 index 0000000..0f584fc --- /dev/null +++ b/docs/api/cassandra/cqlengine/connection.rst @@ -0,0 +1,16 @@ +``cassandra.cqlengine.connection`` - Connection management for cqlengine +======================================================================== + +.. module:: cassandra.cqlengine.connection + +.. autofunction:: default + +.. 
autofunction:: set_session + +.. autofunction:: setup + +.. autofunction:: register_connection + +.. autofunction:: unregister_connection + +.. autofunction:: set_default_connection diff --git a/docs/api/cassandra/cqlengine/management.rst b/docs/api/cassandra/cqlengine/management.rst new file mode 100644 index 0000000..fb483ab --- /dev/null +++ b/docs/api/cassandra/cqlengine/management.rst @@ -0,0 +1,19 @@ +``cassandra.cqlengine.management`` - Schema management for cqlengine +======================================================================== + +.. module:: cassandra.cqlengine.management + +A collection of functions for managing keyspace and table schema. + +.. autofunction:: create_keyspace_simple + +.. autofunction:: create_keyspace_network_topology + +.. autofunction:: drop_keyspace + +.. autofunction:: sync_table + +.. autofunction:: sync_type + +.. autofunction:: drop_table + diff --git a/docs/api/cassandra/cqlengine/models.rst b/docs/api/cassandra/cqlengine/models.rst new file mode 100644 index 0000000..fbcec06 --- /dev/null +++ b/docs/api/cassandra/cqlengine/models.rst @@ -0,0 +1,198 @@ +``cassandra.cqlengine.models`` - Table models for object mapping +================================================================ + +.. module:: cassandra.cqlengine.models + +Model +----- +.. autoclass:: Model(\*\*kwargs) + + The initializer creates an instance of the model. Pass in keyword arguments for columns you've defined on the model. + + .. code-block:: python + + class Person(Model): + id = columns.UUID(primary_key=True) + first_name = columns.Text() + last_name = columns.Text() + + person = Person(first_name='Blake', last_name='Eggleston') + person.first_name  # returns 'Blake' + person.last_name  # returns 'Eggleston' + + Model attributes define how the model maps to tables in the database. These are class variables that should be set + when defining Model derivatives. + + .. autoattribute:: __abstract__ + :annotation: = False + + .. 
autoattribute:: __table_name__ + + .. autoattribute:: __table_name_case_sensitive__ + + .. autoattribute:: __keyspace__ + + .. autoattribute:: __connection__ + + .. attribute:: __default_ttl__ + :annotation: = None + + Will be deprecated in release 4.0. You can set the default ttl by configuring the table ``__options__``. See :ref:`ttl-change` for more details. + + .. autoattribute:: __discriminator_value__ + + See :ref:`model_inheritance` for usage examples. + + Each table can have its own set of configuration options, including compaction. If unspecified, these default to sensible values in + the server. To override the defaults, set options using the model ``__options__`` attribute, which allows options to be specified as a dict. + + When a table is synced, it will be altered to match the options set on your table. + This means that if you are changing settings manually they will be changed back on resync. + + Do not use the options settings of cqlengine if you want to manage your compaction settings manually. + + See the `list of supported table properties for more information + `_. + + .. attribute:: __options__ + + For example: + + .. code-block:: python + + class User(Model): + __options__ = {'compaction': {'class': 'LeveledCompactionStrategy', + 'sstable_size_in_mb': '64', + 'tombstone_threshold': '.2'}, + 'read_repair_chance': '0.5', + 'comment': 'User data stored here'} + + user_id = columns.UUID(primary_key=True) + name = columns.Text() + + or: + + .. code-block:: python + + class TimeData(Model): + __options__ = {'compaction': {'class': 'SizeTieredCompactionStrategy', + 'bucket_low': '.3', + 'bucket_high': '2', + 'min_threshold': '2', + 'max_threshold': '64', + 'tombstone_compaction_interval': '86400'}, + 'gc_grace_seconds': '0'} + + .. autoattribute:: __compute_routing_key__ + + + The base methods allow creating, storing, and querying modeled objects. + + .. automethod:: create + + .. method:: if_not_exists() + + Check the existence of an object before insertion. 
The existence of an + object is determined by its primary key(s). Note that using this flag + incurs a performance cost. + + If the insertion isn't applied, a :class:`~cassandra.cqlengine.query.LWTException` is raised. + + .. code-block:: python + + try: + TestIfNotExistsModel.if_not_exists().create(id=id, count=9, text='111111111111') + except LWTException as e: + # handle failure case + print(e.existing)  # dict containing LWT result fields + + This method is supported on Cassandra 2.0 or later. + + .. method:: if_exists() + + Check the existence of an object before an update or delete. The existence of an + object is determined by its primary key(s). Note that using this flag + incurs a performance cost. + + If the update or delete isn't applied, a :class:`~cassandra.cqlengine.query.LWTException` is raised. + + .. code-block:: python + + try: + TestIfExistsModel.objects(id=id).if_exists().update(count=9, text='111111111111') + except LWTException as e: + # handle failure case + pass + + This method is supported on Cassandra 2.0 or later. + + .. automethod:: save + + .. automethod:: update + + .. method:: iff(**values) + + Checks to ensure that the values specified are correct on the Cassandra cluster. + Simply specify the column(s) and the expected value(s). As with if_not_exists, + this incurs a performance cost. + + If the update isn't applied, a :class:`~cassandra.cqlengine.query.LWTException` is raised. + + .. code-block:: python + + t = TestTransactionModel(text='some text', count=5) + try: + t.iff(count=5).update(text='other text') + except LWTException as e: + # handle failure case + print(e.existing)  # existing object + + .. automethod:: get + + .. automethod:: filter + + .. automethod:: all + + .. automethod:: delete + + .. method:: batch(batch_object) + + Sets the batch object to run instance update and insert queries with. + + See :doc:`/cqlengine/batches` for usage examples. + + .. automethod:: timeout + + .. 
method:: timestamp(timedelta_or_datetime) + + Sets the timestamp for the query. + + .. method:: ttl(ttl_in_sec) + + Sets the ttl value to run instance update and insert queries with. + + .. method:: using(connection=None) + + Change the context (keyspace, connection) of the model instance on the fly. + + .. automethod:: column_family_name + + Models also support dict-like access: + + .. method:: len(m) + + Returns the number of columns defined in the model. + + .. method:: m[col_name] + + Returns the value of column ``col_name``. + + .. method:: m[col_name] = value + + Sets ``m[col_name]`` to ``value``. + + .. automethod:: keys + + .. automethod:: values + + .. automethod:: items diff --git a/docs/api/cassandra/cqlengine/query.rst b/docs/api/cassandra/cqlengine/query.rst new file mode 100644 index 0000000..ce8f764 --- /dev/null +++ b/docs/api/cassandra/cqlengine/query.rst @@ -0,0 +1,71 @@ +``cassandra.cqlengine.query`` - Query and filter model objects +================================================================= + +.. module:: cassandra.cqlengine.query + +QuerySet +-------- +QuerySet objects are typically obtained by calling :meth:`~.cassandra.cqlengine.models.Model.objects` on a model class. +The methods here are used to filter, order, and constrain results. + +.. autoclass:: ModelQuerySet + + .. automethod:: all + + .. automethod:: batch + + .. automethod:: consistency + + .. automethod:: count + + .. method:: len(queryset) + + Returns the number of rows matched by this query. This function uses :meth:`~.cassandra.cqlengine.query.ModelQuerySet.count` internally. + + *Note: This function executes a SELECT COUNT() and has a performance cost on large datasets* + + .. automethod:: distinct + + .. automethod:: filter + + .. automethod:: get + + .. automethod:: limit + + .. automethod:: fetch_size + + .. automethod:: if_not_exists + + .. automethod:: if_exists + + .. automethod:: order_by + + .. automethod:: allow_filtering + + .. automethod:: only + + .. 
automethod:: defer + + .. automethod:: timestamp + + .. automethod:: ttl + + .. automethod:: using + + .. _blind_updates: + + .. automethod:: update + +.. autoclass:: BatchQuery + :members: + + .. automethod:: add_query + .. automethod:: execute + +.. autoclass:: ContextQuery + +.. autoclass:: DoesNotExist + +.. autoclass:: MultipleObjectsReturned + +.. autoclass:: LWTException diff --git a/docs/api/cassandra/cqlengine/usertype.rst b/docs/api/cassandra/cqlengine/usertype.rst new file mode 100644 index 0000000..ebed187 --- /dev/null +++ b/docs/api/cassandra/cqlengine/usertype.rst @@ -0,0 +1,10 @@ +``cassandra.cqlengine.usertype`` - Model classes for User Defined Types +======================================================================= + +.. module:: cassandra.cqlengine.usertype + +UserType +-------- +.. autoclass:: UserType + + .. autoattribute:: __type_name__ diff --git a/docs/api/cassandra/decoder.rst b/docs/api/cassandra/decoder.rst new file mode 100644 index 0000000..e213cc6 --- /dev/null +++ b/docs/api/cassandra/decoder.rst @@ -0,0 +1,20 @@ +``cassandra.decoder`` - Data Return Formats +=========================================== + +.. module:: cassandra.decoder + +.. function:: tuple_factory + + **Deprecated in 2.0.0.** Use :meth:`cassandra.query.tuple_factory` + +.. function:: named_tuple_factory + + **Deprecated in 2.0.0.** Use :meth:`cassandra.query.named_tuple_factory` + +.. function:: dict_factory + + **Deprecated in 2.0.0.** Use :meth:`cassandra.query.dict_factory` + +.. function:: ordered_dict_factory + + **Deprecated in 2.0.0.** Use :meth:`cassandra.query.ordered_dict_factory` diff --git a/docs/api/cassandra/encoder.rst b/docs/api/cassandra/encoder.rst new file mode 100644 index 0000000..de3b180 --- /dev/null +++ b/docs/api/cassandra/encoder.rst @@ -0,0 +1,36 @@ +``cassandra.encoder`` - Encoders for non-prepared Statements +============================================================ + +.. module:: cassandra.encoder + +.. 
autoclass:: Encoder () + + .. autoattribute:: cassandra.encoder.Encoder.mapping + + .. automethod:: cassandra.encoder.Encoder.cql_encode_none () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_object () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_all_types () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_sequence () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_str () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_unicode () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_bytes () + + Converts strings, buffers, and bytearrays into CQL blob literals. + + .. automethod:: cassandra.encoder.Encoder.cql_encode_datetime () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_date () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_map_collection () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_list_collection () + + .. automethod:: cassandra.encoder.Encoder.cql_encode_set_collection () + + .. automethod:: cql_encode_tuple () diff --git a/docs/api/cassandra/io/asyncioreactor.rst b/docs/api/cassandra/io/asyncioreactor.rst new file mode 100644 index 0000000..38ae63c --- /dev/null +++ b/docs/api/cassandra/io/asyncioreactor.rst @@ -0,0 +1,7 @@ +``cassandra.io.asyncioreactor`` - ``asyncio`` Event Loop +===================================================================== + +.. module:: cassandra.io.asyncioreactor + +.. autoclass:: AsyncioConnection + :members: diff --git a/docs/api/cassandra/io/asyncorereactor.rst b/docs/api/cassandra/io/asyncorereactor.rst new file mode 100644 index 0000000..ade7887 --- /dev/null +++ b/docs/api/cassandra/io/asyncorereactor.rst @@ -0,0 +1,7 @@ +``cassandra.io.asyncorereactor`` - ``asyncore`` Event Loop +========================================================== + +.. module:: cassandra.io.asyncorereactor + +.. 
autoclass:: AsyncoreConnection + :members: diff --git a/docs/api/cassandra/io/eventletreactor.rst b/docs/api/cassandra/io/eventletreactor.rst new file mode 100644 index 0000000..1ba742c --- /dev/null +++ b/docs/api/cassandra/io/eventletreactor.rst @@ -0,0 +1,7 @@ +``cassandra.io.eventletreactor`` - ``eventlet``-compatible Connection +===================================================================== + +.. module:: cassandra.io.eventletreactor + +.. autoclass:: EventletConnection + :members: diff --git a/docs/api/cassandra/io/geventreactor.rst b/docs/api/cassandra/io/geventreactor.rst new file mode 100644 index 0000000..603affe --- /dev/null +++ b/docs/api/cassandra/io/geventreactor.rst @@ -0,0 +1,7 @@ +``cassandra.io.geventreactor`` - ``gevent``-compatible Event Loop +================================================================= + +.. module:: cassandra.io.geventreactor + +.. autoclass:: GeventConnection + :members: diff --git a/docs/api/cassandra/io/libevreactor.rst b/docs/api/cassandra/io/libevreactor.rst new file mode 100644 index 0000000..5b7288e --- /dev/null +++ b/docs/api/cassandra/io/libevreactor.rst @@ -0,0 +1,6 @@ +``cassandra.io.libevreactor`` - ``libev`` Event Loop +==================================================== + +.. module:: cassandra.io.libevreactor + +.. autoclass:: LibevConnection diff --git a/docs/api/cassandra/io/twistedreactor.rst b/docs/api/cassandra/io/twistedreactor.rst new file mode 100644 index 0000000..24e93bd --- /dev/null +++ b/docs/api/cassandra/io/twistedreactor.rst @@ -0,0 +1,9 @@ +``cassandra.io.twistedreactor`` - Twisted Event Loop +==================================================== + +.. module:: cassandra.io.twistedreactor + +.. class:: TwistedConnection + + An implementation of :class:`~cassandra.io.connection.Connection` that uses + Twisted's reactor as its event loop. 
diff --git a/docs/api/cassandra/metadata.rst b/docs/api/cassandra/metadata.rst new file mode 100644 index 0000000..ed79d04 --- /dev/null +++ b/docs/api/cassandra/metadata.rst @@ -0,0 +1,76 @@ +``cassandra.metadata`` - Schema and Ring Topology +================================================= + +.. module:: cassandra.metadata + +.. autodata:: cql_keywords + :annotation: + +.. autodata:: cql_keywords_unreserved + :annotation: + +.. autodata:: cql_keywords_reserved + :annotation: + +.. autoclass:: Metadata () + :members: + :exclude-members: rebuild_schema, rebuild_token_map, add_host, remove_host + +Schemas +------- + +.. autoclass:: KeyspaceMetadata () + :members: + +.. autoclass:: UserType () + :members: + +.. autoclass:: Function () + :members: + +.. autoclass:: Aggregate () + :members: + +.. autoclass:: TableMetadata () + :members: + +.. autoclass:: ColumnMetadata () + :members: + +.. autoclass:: IndexMetadata () + :members: + +.. autoclass:: MaterializedViewMetadata () + :members: + +Tokens and Ring Topology +------------------------ + +.. autoclass:: TokenMap () + :members: + +.. autoclass:: Token () + :members: + +.. autoclass:: Murmur3Token + :members: + +.. autoclass:: MD5Token + :members: + +.. autoclass:: BytesToken + :members: + +.. autoclass:: ReplicationStrategy + :members: + +.. autoclass:: SimpleStrategy + :members: + +.. autoclass:: NetworkTopologyStrategy + :members: + +.. autoclass:: LocalStrategy + :members: + +.. autofunction:: group_keys_by_replica diff --git a/docs/api/cassandra/metrics.rst b/docs/api/cassandra/metrics.rst new file mode 100644 index 0000000..0df7f8b --- /dev/null +++ b/docs/api/cassandra/metrics.rst @@ -0,0 +1,7 @@ +``cassandra.metrics`` - Performance Metrics +=========================================== + +.. module:: cassandra.metrics + +.. 
autoclass:: cassandra.metrics.Metrics () + :members: diff --git a/docs/api/cassandra/policies.rst b/docs/api/cassandra/policies.rst new file mode 100644 index 0000000..b662755 --- /dev/null +++ b/docs/api/cassandra/policies.rst @@ -0,0 +1,90 @@ +``cassandra.policies`` - Load balancing and Failure Handling Policies +===================================================================== + +.. module:: cassandra.policies + +Load Balancing +-------------- + +.. autoclass:: HostDistance + :members: + +.. autoclass:: LoadBalancingPolicy + :members: + +.. autoclass:: RoundRobinPolicy + :members: + +.. autoclass:: DCAwareRoundRobinPolicy + :members: + +.. autoclass:: WhiteListRoundRobinPolicy + :members: + +.. autoclass:: TokenAwarePolicy + :members: + +.. autoclass:: HostFilterPolicy + + .. we document these methods manually so we can specify a param to predicate + + .. automethod:: predicate(host) + .. automethod:: distance + .. automethod:: make_query_plan + +Translating Server Node Addresses +--------------------------------- + +.. autoclass:: AddressTranslator + :members: + +.. autoclass:: IdentityTranslator + :members: + +.. autoclass:: EC2MultiRegionTranslator + :members: + +Marking Hosts Up or Down +------------------------ + +.. autoclass:: ConvictionPolicy + :members: + +.. autoclass:: SimpleConvictionPolicy + :members: + +Reconnecting to Dead Hosts +-------------------------- + +.. autoclass:: ReconnectionPolicy + :members: + +.. autoclass:: ConstantReconnectionPolicy + :members: + +.. autoclass:: ExponentialReconnectionPolicy + :members: + +Retrying Failed Operations +-------------------------- + +.. autoclass:: WriteType + :members: + +.. autoclass:: RetryPolicy + :members: + +.. autoclass:: FallthroughRetryPolicy + :members: + +.. autoclass:: DowngradingConsistencyRetryPolicy + :members: + +Retrying Idempotent Operations +------------------------------ + +.. autoclass:: SpeculativeExecutionPolicy + :members: + +.. 
autoclass:: ConstantSpeculativeExecutionPolicy + :members: diff --git a/docs/api/cassandra/pool.rst b/docs/api/cassandra/pool.rst new file mode 100644 index 0000000..b14d30e --- /dev/null +++ b/docs/api/cassandra/pool.rst @@ -0,0 +1,11 @@ +``cassandra.pool`` - Hosts and Connection Pools +=============================================== + +.. automodule:: cassandra.pool + +.. autoclass:: Host () + :members: + :exclude-members: set_location_info, get_and_set_reconnection_handler + +.. autoexception:: NoConnectionsAvailable + :members: diff --git a/docs/api/cassandra/protocol.rst b/docs/api/cassandra/protocol.rst new file mode 100644 index 0000000..f615ab1 --- /dev/null +++ b/docs/api/cassandra/protocol.rst @@ -0,0 +1,55 @@ +``cassandra.protocol`` - Protocol Features +===================================================================== + +.. module:: cassandra.protocol + +.. _custom_payload: + +Custom Payloads +--------------- +Native protocol version 4+ allows for a custom payload to be sent between clients +and custom query handlers. The payload is specified as a string:binary_type dict +holding custom key/value pairs. + +By default these are ignored by the server. They can be useful for servers implementing +a custom QueryHandler. + +See :meth:`.Session.execute`, :meth:`.Session.execute_async`, :attr:`.ResponseFuture.custom_payload`. + +.. autoclass:: _ProtocolHandler + + .. autoattribute:: message_types_by_opcode + :annotation: = {default mapping} + + .. automethod:: encode_message + + .. automethod:: decode_message + +.. _faster_deser: + +Faster Deserialization +---------------------- +When python-driver is compiled with Cython, it uses a Cython-based deserialization path +to deserialize messages. By default, the driver will use a Cython-based parser that returns +lists of rows similar to the pure-Python version. 
Two additional +ProtocolHandler classes can be used to deserialize response messages: ``LazyProtocolHandler`` +and ``NumpyProtocolHandler``. They can be used as follows: + +.. code:: python + + from cassandra.protocol import NumpyProtocolHandler, LazyProtocolHandler + from cassandra.query import tuple_factory + s.client_protocol_handler = LazyProtocolHandler  # for a result iterator + s.row_factory = tuple_factory  # required for NumPy results + s.client_protocol_handler = NumpyProtocolHandler  # for a dict of NumPy arrays as result + +These protocol handlers comprise different parsers, and return results as described below: + +- ProtocolHandler: this default implementation is a drop-in replacement for the pure-Python version. + The rows are all parsed upfront, before results are returned. + +- LazyProtocolHandler: near drop-in replacement for the above, except that it returns an iterator over rows, + lazily decoded into the default row format (this is more efficient since all decoded results are not materialized at once) + +- NumpyProtocolHandler: deserializes results directly into NumPy arrays. This facilitates efficient integration with + analysis toolkits such as Pandas. diff --git a/docs/api/cassandra/query.rst b/docs/api/cassandra/query.rst new file mode 100644 index 0000000..fcd7973 --- /dev/null +++ b/docs/api/cassandra/query.rst @@ -0,0 +1,59 @@ +``cassandra.query`` - Prepared Statements, Batch Statements, Tracing, and Row Factories +======================================================================================= + +.. module:: cassandra.query + +.. autofunction:: tuple_factory + +.. autofunction:: named_tuple_factory + +.. autofunction:: dict_factory + +.. autofunction:: ordered_dict_factory + +.. autoclass:: SimpleStatement + :members: + +.. autoclass:: PreparedStatement () + :members: + +.. autoclass:: BoundStatement + :members: + +.. autoclass:: Statement () + :members: + +.. autodata:: UNSET_VALUE + :annotation: + +.. 
autoclass:: BatchStatement (batch_type=BatchType.LOGGED, retry_policy=None, consistency_level=None) + :members: + +.. autoclass:: BatchType () + + .. autoattribute:: LOGGED + + .. autoattribute:: UNLOGGED + + .. autoattribute:: COUNTER + +.. autoclass:: cassandra.query.ValueSequence + + A wrapper class that is used to specify that a sequence of values should + be treated as a CQL list of values instead of a single column collection when used + as part of the `parameters` argument for :meth:`.Session.execute()`. + + This is typically needed when supplying a list of keys to select. + For example:: + + >>> my_user_ids = ('alice', 'bob', 'charles') + >>> query = "SELECT * FROM users WHERE user_id IN %s" + >>> session.execute(query, parameters=[ValueSequence(my_user_ids)]) + +.. autoclass:: QueryTrace () + :members: + +.. autoclass:: TraceEvent () + :members: + +.. autoexception:: TraceUnavailable diff --git a/docs/api/cassandra/timestamps.rst b/docs/api/cassandra/timestamps.rst new file mode 100644 index 0000000..7c7f534 --- /dev/null +++ b/docs/api/cassandra/timestamps.rst @@ -0,0 +1,14 @@ +``cassandra.timestamps`` - Timestamp Generation +=============================================== + +.. module:: cassandra.timestamps + +.. autoclass:: MonotonicTimestampGenerator (warn_on_drift=True, warning_threshold=0, warning_interval=0) + + .. autoattribute:: warn_on_drift + + .. autoattribute:: warning_threshold + + .. autoattribute:: warning_interval + + .. automethod:: _next_timestamp diff --git a/docs/api/cassandra/util.rst b/docs/api/cassandra/util.rst new file mode 100644 index 0000000..848d4d5 --- /dev/null +++ b/docs/api/cassandra/util.rst @@ -0,0 +1,5 @@ +``cassandra.util`` - Utilities +=================================== + +.. 
automodule:: cassandra.util + :members: diff --git a/docs/api/index.rst b/docs/api/index.rst new file mode 100644 index 0000000..cf79228 --- /dev/null +++ b/docs/api/index.rst @@ -0,0 +1,43 @@ +API Documentation +================= + +Core Driver +----------- +.. toctree:: + :maxdepth: 2 + + cassandra + cassandra/cluster + cassandra/policies + cassandra/auth + cassandra/metadata + cassandra/metrics + cassandra/query + cassandra/pool + cassandra/protocol + cassandra/encoder + cassandra/decoder + cassandra/concurrent + cassandra/connection + cassandra/util + cassandra/timestamps + cassandra/io/asyncioreactor + cassandra/io/asyncorereactor + cassandra/io/eventletreactor + cassandra/io/libevreactor + cassandra/io/geventreactor + cassandra/io/twistedreactor + +.. _om_api: + +Object Mapper +------------- +.. toctree:: + :maxdepth: 1 + + cassandra/cqlengine/models + cassandra/cqlengine/columns + cassandra/cqlengine/query + cassandra/cqlengine/connection + cassandra/cqlengine/management + cassandra/cqlengine/usertype diff --git a/docs/cloud.rst b/docs/cloud.rst new file mode 100644 index 0000000..a7e2fb9 --- /dev/null +++ b/docs/cloud.rst @@ -0,0 +1,38 @@ +Cloud +----- +Connecting +========== +To connect to a DataStax Apollo cluster: + +1. Download the secure connect bundle from your Apollo account. +2. Connect to your cluster with + +.. code-block:: python + + from cassandra.cluster import Cluster + from cassandra.auth import PlainTextAuthProvider + + cloud_config = { + 'secure_connect_bundle': '/path/to/secure-connect-dbname.zip' + } + auth_provider = PlainTextAuthProvider(username='user', password='pass') + cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider) + session = cluster.connect() + +Apollo Differences +================== +In most circumstances, the client code for interacting with an Apollo cluster will be the same as interacting with any other Cassandra cluster. 
The exceptions are: + +* A cloud configuration must be passed to a :class:`~.Cluster` instance via the `cloud` attribute (as demonstrated above). +* An SSL connection will be established automatically. Manual SSL configuration is not allowed, and using `ssl_context` or `ssl_options` will result in an exception. +* A :class:`~.Cluster`'s `contact_points` attribute should not be used. The cloud config contains all of the necessary contact information. +* If a consistency level is not specified for an execution profile or query, then :attr:`.ConsistencyLevel.LOCAL_QUORUM` will be used as the default. + + +Limitations +=========== + +Event loops +^^^^^^^^^^^ +Twisted and Eventlet aren't supported yet. These event loops still use the old way of configuring +SSL (ssl_options), which is not compatible with the secure connect bundle provided by Apollo. diff --git a/docs/conf.py b/docs/conf.py new file mode 100644 index 0000000..b2bfbe0 --- /dev/null +++ b/docs/conf.py @@ -0,0 +1,227 @@ +# -*- coding: utf-8 -*- +# +# Cassandra Driver documentation build configuration file, created by +# sphinx-quickstart on Mon Jul 1 11:40:09 2013. +# +# This file is execfile()d with the current directory set to its containing dir. +# +# Note that not all possible configuration values are present in this +# autogenerated file. +# +# All configuration values have a default; values that are commented out +# serve to show the default. + +import os +import sys + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +sys.path.insert(0, os.path.abspath('..')) +import cassandra + +# -- General configuration ----------------------------------------------------- + +# If your documentation needs a minimal Sphinx version, state it here. 
+#needs_sphinx = '1.0' + +# Add any Sphinx extension module names here, as strings. They can be extensions +# coming with Sphinx (named 'sphinx.ext.*') or your custom ones. +extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode'] + +# Add any paths that contain templates here, relative to this directory. +templates_path = ['_templates'] + +# The suffix of source filenames. +source_suffix = '.rst' + +# The encoding of source files. +#source_encoding = 'utf-8-sig' + +# The master toctree document. +master_doc = 'index' + +# General information about the project. +project = u'Cassandra Driver' +copyright = u'2013-2017 DataStax' + +# The version info for the project you're documenting, acts as replacement for +# |version| and |release|, also used in various other places throughout the +# built documents. +# +# The short X.Y version. +version = cassandra.__version__ +# The full version, including alpha/beta/rc tags. +release = cassandra.__version__ + +autodoc_member_order = 'bysource' +autoclass_content = 'both' + +# The language for content autogenerated by Sphinx. Refer to documentation +# for a list of supported languages. +#language = None + +# There are two options for replacing |today|: either, you set today to some +# non-false value, then it is used: +#today = '' +# Else, today_fmt is used as the format for a strftime call. +#today_fmt = '%B %d, %Y' + +# List of patterns, relative to source directory, that match files and +# directories to ignore when looking for source files. +exclude_patterns = ['_build'] + +# The reST default role (used for this markup: `text`) to use for all documents. +#default_role = None + +# If true, '()' will be appended to :func: etc. cross-reference text. +#add_function_parentheses = True + +# If true, the current module name will be prepended to all description +# unit titles (such as .. function::). +#add_module_names = True + +# If true, sectionauthor and moduleauthor directives will be shown in the +# output. 
They are ignored by default. +#show_authors = False + +# The name of the Pygments (syntax highlighting) style to use. +pygments_style = 'sphinx' + +# A list of ignored prefixes for module index sorting. +#modindex_common_prefix = [] + + +# -- Options for HTML output --------------------------------------------------- + +# The theme to use for HTML and HTML Help pages. See the documentation for +# a list of builtin themes. +html_theme = 'custom' + +# Theme options are theme-specific and customize the look and feel of a theme +# further. For a list of options available for each theme, see the +# documentation. +#html_theme_options = {} + +# Add any paths that contain custom themes here, relative to this directory. +html_theme_path = ['./themes'] + +# The name for this set of Sphinx documents. If None, it defaults to +# " v documentation". +#html_title = None + +# A shorter title for the navigation bar. Default is the same as html_title. +#html_short_title = None + +# The name of an image file (relative to this directory) to place at the top +# of the sidebar. +#html_logo = None + +# The name of an image file (within the static path) to use as favicon of the +# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 +# pixels large. +#html_favicon = None + +# Add any paths that contain custom static files (such as style sheets) here, +# relative to this directory. They are copied after the builtin static files, +# so a file named "default.css" will overwrite the builtin "default.css". +html_static_path = [] + +# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, +# using the given strftime format. +#html_last_updated_fmt = '%b %d, %Y' + +# If true, SmartyPants will be used to convert quotes and dashes to +# typographically correct entities. +#html_use_smartypants = True + +# Custom sidebar templates, maps document names to template names. 
+html_sidebars = { + '**': [ + 'about.html', + 'navigation.html', + 'relations.html', + 'searchbox.html' + ] +} + +# Additional templates that should be rendered to pages, maps page names to +# template names. +#html_additional_pages = {} + +# If false, no module index is generated. +#html_domain_indices = True + +# If false, no index is generated. +#html_use_index = True + +# If true, the index is split into individual pages for each letter. +#html_split_index = False + +# If true, links to the reST sources are added to the pages. +#html_show_sourcelink = True + +# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. +#html_show_sphinx = True + +# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. +#html_show_copyright = True + +# If true, an OpenSearch description file will be output, and all pages will +# contain a tag referring to it. The value of this option must be the +# base URL from which the finished HTML is served. +#html_use_opensearch = '' + +# This is the file name suffix for HTML files (e.g. ".xhtml"). +#html_file_suffix = None + +# Output file base name for HTML help builder. +htmlhelp_basename = 'CassandraDriverdoc' + + +# -- Options for LaTeX output -------------------------------------------------- + +# The paper size ('letter' or 'a4'). +#latex_paper_size = 'letter' + +# The font size ('10pt', '11pt' or '12pt'). +#latex_font_size = '10pt' + +# Grouping the document tree into LaTeX files. List of tuples +# (source start file, target name, title, author, documentclass [howto/manual]). +latex_documents = [ + ('index', 'cassandra-driver.tex', u'Cassandra Driver Documentation', u'DataStax', 'manual'), +] + +# The name of an image file (relative to this directory) to place at the top of +# the title page. +#latex_logo = None + +# For "manual" documents, if this is true, then toplevel headings are parts, +# not chapters. +#latex_use_parts = False + +# If true, show page references after internal links. 
+#latex_show_pagerefs = False + +# If true, show URL addresses after external links. +#latex_show_urls = False + +# Additional stuff for the LaTeX preamble. +#latex_preamble = '' + +# Documents to append as an appendix to all manuals. +#latex_appendices = [] + +# If false, no module index is generated. +#latex_domain_indices = True + + +# -- Options for manual page output -------------------------------------------- + +# One entry per manual page. List of tuples +# (source start file, name, description, authors, manual section). +man_pages = [ + ('index', 'cassandra-driver', u'Cassandra Driver Documentation', + [u'Tyler Hobbs'], 1) +] diff --git a/docs/cqlengine/batches.rst b/docs/cqlengine/batches.rst new file mode 100644 index 0000000..306e7d0 --- /dev/null +++ b/docs/cqlengine/batches.rst @@ -0,0 +1,108 @@ +============= +Batch Queries +============= + +cqlengine supports batch queries using the BatchQuery class. Batch queries can be started and stopped manually, or within a context manager. To add queries to the batch object, you just need to precede the create/save/delete call with a call to batch, and pass in the batch object. + + +Batch Query General Use Pattern +=============================== + +You can only create, update, and delete rows with a batch query; attempting to read rows out of the database with a batch query will fail. + +..
code-block:: python + + from cassandra.cqlengine.query import BatchQuery + + #using a context manager + with BatchQuery() as b: + now = datetime.now() + em1 = ExampleModel.batch(b).create(example_type=0, description="1", created_at=now) + em2 = ExampleModel.batch(b).create(example_type=0, description="2", created_at=now) + em3 = ExampleModel.batch(b).create(example_type=0, description="3", created_at=now) + + # -- or -- + + #manually + b = BatchQuery() + now = datetime.now() + em1 = ExampleModel.batch(b).create(example_type=0, description="1", created_at=now) + em2 = ExampleModel.batch(b).create(example_type=0, description="2", created_at=now) + em3 = ExampleModel.batch(b).create(example_type=0, description="3", created_at=now) + b.execute() + + # updating in a batch + + b = BatchQuery() + em1.description = "new description" + em1.batch(b).save() + em2.description = "another new description" + em2.batch(b).save() + b.execute() + + # deleting in a batch + b = BatchQuery() + ExampleModel.objects(id=some_id).batch(b).delete() + ExampleModel.objects(id=some_id2).batch(b).delete() + b.execute() + + +Typically you will not want the block to execute if an exception occurs inside the `with` block. However, in the case that this is desirable, it's achievable by using the following syntax: + +.. code-block:: python + + with BatchQuery(execute_on_exception=True) as b: + LogEntry.batch(b).create(k=1, v=1) + mystery_function() # exception thrown in here + LogEntry.batch(b).create(k=1, v=2) # this code is never reached due to the exception, but anything leading up to here will execute in the batch. + +If an exception is thrown somewhere in the block, any statements that have been added to the batch will still be executed. This is useful for some logging situations. 
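The context-manager semantics described above can be sketched in plain Python. This is a simplified, hypothetical stand-in (``SketchBatch`` is an illustrative name, not part of the driver): on a clean exit the batch executes; on an exception it executes only when ``execute_on_exception=True``, and the exception always propagates.

```python
# A simplified sketch of BatchQuery's context-manager behavior.
# Not the driver's actual implementation; names here are illustrative.
class SketchBatch(object):
    def __init__(self, execute_on_exception=False):
        self.execute_on_exception = execute_on_exception
        self.statements = []
        self.executed = False

    def add(self, statement):
        # stand-in for Model.batch(b).create(...) queuing a statement
        self.statements.append(statement)

    def execute(self):
        # stand-in for sending a single CQL BATCH to the cluster
        self.executed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # execute on clean exit, or when explicitly requested on error
        if exc_type is None or self.execute_on_exception:
            self.execute()
        return False  # never swallow the exception
```

With the default ``execute_on_exception=False``, an exception inside the ``with`` block leaves the queued statements unexecuted; with ``True``, they still run before the exception propagates.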
+ +Batch Query Execution Callbacks +=============================== + +In order to allow secondary tasks to be chained to the end of a batch, BatchQuery instances allow callbacks to be +registered with the batch, to be executed immediately after the batch executes. + +Multiple callbacks can be attached to the same BatchQuery instance; they are executed in the same order in which they +are added to the batch. + +The callbacks attached to a given batch instance are executed only if the batch executes. If the batch is used as a +context manager and an exception is raised, the queued up callbacks will not be run. + +.. code-block:: python + + def my_callback(*args, **kwargs): + pass + + batch = BatchQuery() + + batch.add_callback(my_callback) + batch.add_callback(my_callback, 'positional arg', named_arg='named arg value') + + # if you need a reference to the batch within the callback, + # just trap it in the arguments to be passed to the callback: + batch.add_callback(my_callback, cqlengine_batch=batch) + + # once the batch executes... + batch.execute() + + # the effect of the above scheduled callbacks will be similar to + my_callback() + my_callback('positional arg', named_arg='named arg value') + my_callback(cqlengine_batch=batch) + +Failure in any of the callbacks does not affect the batch's execution, as the callbacks are started after the execution +of the batch is complete. + +Logged vs Unlogged Batches +--------------------------- +By default, batches in cqlengine are LOGGED, which carries additional overhead compared to UNLOGGED batches. To explicitly state which batch type to use, simply: + + +..
code-block:: python + + from cassandra.cqlengine.query import BatchType + with BatchQuery(batch_type=BatchType.Unlogged) as b: + LogEntry.batch(b).create(k=1, v=1) + LogEntry.batch(b).create(k=1, v=2) diff --git a/docs/cqlengine/connections.rst b/docs/cqlengine/connections.rst new file mode 100644 index 0000000..03ade27 --- /dev/null +++ b/docs/cqlengine/connections.rst @@ -0,0 +1,137 @@ +=========== +Connections +=========== + +Connections aim to ease the use of multiple sessions with cqlengine. Connections can be set on a model class, per query, or using a context manager. + + +Register a new connection +========================= + +To use cqlengine, you need at least a default connection. If you initialize cqlengine's connections with :func:`connection.setup <.connection.setup>`, a connection will be created automatically. If you want to use another cluster/session, you need to register a new cqlengine connection. You register a connection with :func:`~.connection.register_connection`: + +.. code-block:: python + + from cassandra.cqlengine import connection + + connection.setup(['127.0.0.1']) + connection.register_connection('cluster2', ['127.0.0.2']) + +:func:`~.connection.register_connection` can take a list of hosts, as shown above, in which case it will create a connection with a new session. It can also take a `session` argument if you've already created a session: + +.. code-block:: python + + from cassandra.cqlengine import connection + from cassandra.cluster import Cluster + + session = Cluster(['127.0.0.1']).connect() + connection.register_connection('cluster3', session=session) + + +Change the default connection +============================= + +You can change the default cqlengine connection on registration: + +.. code-block:: python + + from cassandra.cqlengine import connection + + connection.register_connection('cluster2', ['127.0.0.2'], default=True) + +or on the fly using :func:`~.connection.set_default_connection`: + +..
code-block:: python + + connection.set_default_connection('cluster2') + +Unregister a connection +======================= + +You can unregister a connection using :func:`~.connection.unregister_connection`: + +.. code-block:: python + + connection.unregister_connection('cluster2') + +Management +========== + +When using multiple connections, you also need to sync your models on all connections (and keyspaces) that you need to operate on. Management commands have been improved to ease this part. Here is an example: + +.. code-block:: python + + from cassandra.cqlengine import management + + keyspaces = ['ks1', 'ks2'] + conns = ['cluster1', 'cluster2'] + + # register your connections + # ... + + # create all keyspaces on all connections + for ks in keyspaces: + management.create_simple_keyspace(ks, connections=conns) + + # define your Automobile model + # ... + + # sync your models + management.sync_table(Automobile, keyspaces=keyspaces, connections=conns) + + +Connection Selection +==================== + +cqlengine will select the default connection, unless you specify a connection using one of the following methods. + +Default Model Connection +------------------------ + +You can specify a default connection per model: + +.. code-block:: python + + class Automobile(Model): + __keyspace__ = 'test' + __connection__ = 'cluster2' + manufacturer = columns.Text(primary_key=True) + year = columns.Integer(primary_key=True) + model = columns.Text(primary_key=True) + + print(len(Automobile.objects.all())) # executed on the connection 'cluster2' + +QuerySet and model instance +--------------------------- + +You can use the :attr:`using() <.query.ModelQuerySet.using>` method to select a connection (or keyspace): + +..
code-block:: python + + Automobile.objects.using(connection='cluster1').create(manufacturer='honda', year=2010, model='civic') + q = Automobile.objects.filter(manufacturer='Tesla') + autos = q.using(keyspace='ks2', connection='cluster2').all() + + for auto in autos: + auto.using(connection='cluster1').save() + +Context Manager +--------------- + +You can also use a ContextQuery to select a connection: + +.. code-block:: python + + with ContextQuery(Automobile, connection='cluster1') as A: + A.objects.filter(manufacturer='honda').all() # executed on 'cluster1' + + +BatchQuery +---------- + +With a BatchQuery, you can select the connection with the context manager. Note that all operations in the batch need to use the same connection. + +.. code-block:: python + + with BatchQuery(connection='cluster1') as b: + Automobile.objects.batch(b).create(manufacturer='honda', year=2010, model='civic') diff --git a/docs/cqlengine/faq.rst b/docs/cqlengine/faq.rst new file mode 100644 index 0000000..6c056d0 --- /dev/null +++ b/docs/cqlengine/faq.rst @@ -0,0 +1,67 @@ +========================== +Frequently Asked Questions +========================== + +Why don't updates work correctly on models instantiated as Model(field=value, field2=value2)? +------------------------------------------------------------------------------------------------ + +The recommended way to create new rows is with the model's ``.create()`` method. The values passed into a model's init method are interpreted by the model as the values as read from a row. This allows the model to "know" which columns have changed since the row was read out of cassandra, and to create suitable update statements. + +How do I preserve ordering in a batch query? +------------------------------------------- + +Statement ordering is not supported by CQL3 batches. When Cassandra needs to resolve a conflict (for example, the same column updated more than once in one batch), the algorithm below is used.
+ +* If timestamps are different, pick the column with the largest timestamp (the value being a regular column or a tombstone) +* If timestamps are the same, and one of the columns is a tombstone ('null'), pick the tombstone +* If timestamps are the same, and none of the columns are tombstones, pick the column with the largest value + +Below is an example to show this scenario. + +.. code-block:: python + + class MyModel(Model): + id = columns.Integer(primary_key=True) + count = columns.Integer() + text = columns.Text() + + with BatchQuery() as b: + MyModel.batch(b).create(id=1, count=2, text='123') + MyModel.batch(b).create(id=1, count=3, text='111') + + assert MyModel.objects(id=1).first().count == 3 + assert MyModel.objects(id=1).first().text == '123' + +The largest value of count is 3, and the largest value of text would be '123'. + +The workaround is to apply a timestamp to each statement; Cassandra will then resolve the conflict using the statement with the latest timestamp. + +.. code-block:: python + + with BatchQuery() as b: + MyModel.timestamp(datetime.now()).batch(b).create(id=1, count=2, text='123') + MyModel.timestamp(datetime.now()).batch(b).create(id=1, count=3, text='111') + + assert MyModel.objects(id=1).first().count == 3 + assert MyModel.objects(id=1).first().text == '111' + +How can I delete individual values from a row? +------------------------------------------------- + +When inserting with CQLEngine, ``None`` is equivalent to CQL ``NULL`` or to +issuing a ``DELETE`` on that column. For example: + +..
code-block:: python + + class MyModel(Model): + id = columns.Integer(primary_key=True) + text = columns.Text() + + m = MyModel.create(id=1, text='We can delete this with None') + assert MyModel.objects(id=1).first().text is not None + + m.update(text=None) + assert MyModel.objects(id=1).first().text is None diff --git a/docs/cqlengine/models.rst b/docs/cqlengine/models.rst new file mode 100644 index 0000000..c0ba390 --- /dev/null +++ b/docs/cqlengine/models.rst @@ -0,0 +1,218 @@ +====== +Models +====== + +.. module:: cqlengine.models + +A model is a python class representing a CQL table. Models derive from :class:`Model`, and +define basic table properties and columns for a table. + +Columns in your models map to columns in your CQL table. You define CQL columns by defining column attributes on your model classes. +For a model to be valid it needs at least one primary key column and one non-primary key column. Just as in CQL, the order you define +your columns in is important, and is the same order they are defined in on a model's corresponding table. + +Some basic examples defining models are shown below. Consult the :doc:`Model API docs ` and :doc:`Column API docs ` for complete details. + +Example Definitions +=================== + +This example defines a ``Person`` table, with the columns ``first_name`` and ``last_name`` + +.. code-block:: python + + from cassandra.cqlengine import columns + from cassandra.cqlengine.models import Model + + class Person(Model): + id = columns.UUID(primary_key=True) + first_name = columns.Text() + last_name = columns.Text() + + +The Person model would create this CQL table: + +.. code-block:: sql + + CREATE TABLE cqlengine.person ( + id uuid, + first_name text, + last_name text, + PRIMARY KEY (id) + ); + +Here's an example of a comment table created with clustering keys, in descending order: + +.. 
code-block:: python + + from cassandra.cqlengine import columns + from cassandra.cqlengine.models import Model + + class Comment(Model): + photo_id = columns.UUID(primary_key=True) + comment_id = columns.TimeUUID(primary_key=True, clustering_order="DESC") + comment = columns.Text() + +The Comment model's ``create table`` would look like the following: + +.. code-block:: sql + + CREATE TABLE comment ( + photo_id uuid, + comment_id timeuuid, + comment text, + PRIMARY KEY (photo_id, comment_id) + ) WITH CLUSTERING ORDER BY (comment_id DESC); + +To sync the models to the database, you may do the following*: + +.. code-block:: python + + from cassandra.cqlengine.management import sync_table + sync_table(Person) + sync_table(Comment) + +\*Note: synchronizing models causes schema changes, and should be done with caution. +Please see the discussion in :doc:`/api/cassandra/cqlengine/management` for considerations. + +For examples on manipulating data and creating queries, see :doc:`queryset`. + +Manipulating model instances as dictionaries +============================================ + +Model instances can be accessed like dictionaries. + +.. code-block:: python + + class Person(Model): + first_name = columns.Text() + last_name = columns.Text() + + kevin = Person.create(first_name="Kevin", last_name="Deldycke") + dict(kevin) # returns {'first_name': 'Kevin', 'last_name': 'Deldycke'} + kevin['first_name'] # returns 'Kevin' + kevin.keys() # returns ['first_name', 'last_name'] + kevin.values() # returns ['Kevin', 'Deldycke'] + kevin.items() # returns [('first_name', 'Kevin'), ('last_name', 'Deldycke')] + + kevin['first_name'] = 'KEVIN5000' # changes the model's first name + +Extending Model Validation +========================== + +Each time you save a model instance in cqlengine, the data in the model is validated against the schema you've defined +for your model.
Most of the validation is fairly straightforward; it basically checks that you're not trying to do +something like save text into an integer column, and it enforces the ``required`` flag set on column definitions. +It also performs any transformations needed to save the data properly. + +However, there are often additional constraints or transformations you want to impose on your data, beyond simply +making sure that Cassandra won't complain when you try to insert it. To define additional validation on a model, +extend the model's validation method: + +.. code-block:: python + + class Member(Model): + person_id = UUID(primary_key=True) + name = Text(required=True) + + def validate(self): + super(Member, self).validate() + if self.name == 'jon': + raise ValidationError('no jon\'s allowed') + +*Note*: while not required, the convention is to raise a ``ValidationError`` (``from cassandra.cqlengine import ValidationError``) +if validation fails. + +.. _model_inheritance: + +Model Inheritance +================= +It is possible to save and load different model classes using a single CQL table. +This is useful in situations where you have different object types that you want to store in a single cassandra table. + +For instance, suppose you want a table that stores rows of pets owned by an owner: + +.. code-block:: python + + class Pet(Model): + __table_name__ = 'pet' + owner_id = UUID(primary_key=True) + pet_id = UUID(primary_key=True) + pet_type = Text(discriminator_column=True) + name = Text() + + def eat(self, food): + pass + + def sleep(self, time): + pass + + class Cat(Pet): + __discriminator_value__ = 'cat' + cuteness = Float() + + def tear_up_couch(self): + pass + + class Dog(Pet): + __discriminator_value__ = 'dog' + fierceness = Float() + + def bark_all_night(self): + pass + +After calling ``sync_table`` on each of these models, the columns defined in each model will be added to the +``pet`` table.
Additionally, saving ``Cat`` and ``Dog`` models will save the metadata needed to identify each row +as either a cat or dog. + +To set up a model structure with inheritance, follow these steps: + +1. Create a base model with a column set as the discriminator (``discriminator_column=True`` in the column definition) +2. Create subclass models, and define a unique ``__discriminator_value__`` value on each +3. Run ``sync_table`` on each of the subclass models + +**About the discriminator value** + +The discriminator value is what cqlengine uses under the covers to map logical cql rows to the appropriate model type. The +base model maintains a map of discriminator values to subclasses. When a specialized model is saved, its discriminator value is +automatically saved into the discriminator column. The discriminator column may be any column type except counter and container types. +Additionally, if you set ``index=True`` on your discriminator column, you can execute queries against specialized subclasses, and a +``WHERE`` clause will be automatically added to your query, returning only rows of that type. Note that you must +define a unique ``__discriminator_value__`` for each subclass, and that you can only assign a single discriminator column per model. + +.. _user_types: + +User Defined Types +================== +cqlengine models User Defined Types (UDTs) much like tables, with fields defined by column type attributes. However, UDT instances +are only created, persisted, and queried via table Models.
A short example to introduce the pattern:: + + from cassandra.cqlengine.columns import * + from cassandra.cqlengine.models import Model + from cassandra.cqlengine.usertype import UserType + + class address(UserType): + street = Text() + zipcode = Integer() + + class users(Model): + __keyspace__ = 'account' + name = Text(primary_key=True) + addr = UserDefinedType(address) + + users.create(name="Joe", addr=address(street="Easy St.", zipcode=99999)) + user = users.objects(name="Joe")[0] + print user.name, user.addr + # Joe address(street=u'Easy St.', zipcode=99999) + +UDTs are modeled by inheriting :class:`~.usertype.UserType`, and setting column type attributes. Types are then used in defining +models by declaring a column of type :class:`~.columns.UserDefinedType`, with the ``UserType`` class as a parameter. + +``sync_table`` will implicitly +synchronize any types contained in the table. Alternatively :func:`~.management.sync_type` can be used to create/alter types +explicitly. + +Upon declaration, types are automatically registered with the driver, so query results return instances of your ``UserType`` +class*. + +***Note**: UDTs were not added to the native protocol until v3. When setting up the cqlengine connection, be sure to specify +``protocol_version=3``. If using an earlier version, UDT queries will still work, but the returned type will be a namedtuple. diff --git a/docs/cqlengine/queryset.rst b/docs/cqlengine/queryset.rst new file mode 100644 index 0000000..fa99585 --- /dev/null +++ b/docs/cqlengine/queryset.rst @@ -0,0 +1,419 @@ +============== +Making Queries +============== + +.. module:: cqlengine.queryset + +Retrieving objects +================== +Once you've populated Cassandra with data, you'll probably want to retrieve some of it. This is accomplished with QuerySet objects. This section will describe how to use QuerySet objects to retrieve the data you're looking for. 
+ +Retrieving all objects +---------------------- +The simplest query you can make is to return all objects from a table. + +This is accomplished with the ``.all()`` method, which returns a QuerySet of all objects in a table + +Using the Person example model, we would get all Person objects like this: + +.. code-block:: python + + all_objects = Person.objects.all() + +.. _retrieving-objects-with-filters: + +Retrieving objects with filters +------------------------------- +Typically, you'll want to query only a subset of the records in your database. + +That can be accomplished with the QuerySet's ``.filter(\*\*)`` method. + +For example, given the model definition: + +.. code-block:: python + + class Automobile(Model): + manufacturer = columns.Text(primary_key=True) + year = columns.Integer(primary_key=True) + model = columns.Text() + price = columns.Decimal() + options = columns.Set(columns.Text) + +...and assuming the Automobile table contains a record of every car model manufactured in the last 20 years or so, we can retrieve only the cars made by a single manufacturer like this: + + +.. code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + +You can also use the more convenient syntax: + +.. code-block:: python + + q = Automobile.objects(Automobile.manufacturer == 'Tesla') + +We can then further filter our query with another call to **.filter** + +.. code-block:: python + + q = q.filter(year=2012) + +*Note: all queries involving any filtering MUST define either an '=' or an 'in' relation to either a primary key column, or an indexed column.* + +Accessing objects in a QuerySet +=============================== + +There are several methods for getting objects out of a queryset + +* iterating over the queryset + .. code-block:: python + + for car in Automobile.objects.all(): + #...do something to the car instance + pass + +* list index + .. 
code-block:: python + + q = Automobile.objects.all() + q[0] #returns the first result + q[1] #returns the second result + + .. note:: + + * CQL does not support specifying a start position in its queries. Therefore, accessing elements using array indexing will load every result up to the index value requested. + * Using negative indices requires a "SELECT COUNT()" to be executed. This has a performance cost on large datasets. + +* list slicing + .. code-block:: python + + q = Automobile.objects.all() + q[1:] #returns all results except the first + q[1:9] #returns a slice of the results + + .. note:: + + * CQL does not support specifying a start position in its queries. Therefore, accessing elements using array slicing will load every result up to the index value requested. + * Using negative indices requires a "SELECT COUNT()" to be executed. This has a performance cost on large datasets. + +* calling :attr:`get() ` on the queryset + .. code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + q = q.filter(year=2012) + car = q.get() + + this returns the object matching the queryset + +* calling :attr:`first() ` on the queryset + .. code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + q = q.filter(year=2012) + car = q.first() + + this returns the first value in the queryset + +.. _query-filtering-operators: + +Filtering Operators +=================== + +:attr:`Equal To ` + +The default filtering operator. + +.. code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + q = q.filter(year=2012) #year == 2012 + +In addition to simple equal to queries, cqlengine also supports querying with other operators by appending a ``__`` to the field name on the filtering call. + +:attr:`in (__in) ` + +.. code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + q = q.filter(year__in=[2011, 2012]) + + +:attr:`> (__gt) ` + +..
code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + q = q.filter(year__gt=2010) # year > 2010 + + # or the nicer syntax + + q.filter(Automobile.year > 2010) + +:attr:`>= (__gte) ` + +.. code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + q = q.filter(year__gte=2010) # year >= 2010 + + # or the nicer syntax + + q.filter(Automobile.year >= 2010) + +:attr:`< (__lt) ` + +.. code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + q = q.filter(year__lt=2012) # year < 2012 + + # or... + + q.filter(Automobile.year < 2012) + +:attr:`<= (__lte) ` + +.. code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + q = q.filter(year__lte=2012) # year <= 2012 + + q.filter(Automobile.year <= 2012) + +:attr:`CONTAINS (__contains) ` + +The CONTAINS operator is available for all collection types (List, Set, Map). + +.. code-block:: python + + q = Automobile.objects.filter(manufacturer='Tesla') + q.filter(options__contains='backup camera').allow_filtering() + +Note that we need to use allow_filtering() since the *options* column has no secondary index. + +:attr:`LIKE (__like) ` + +The LIKE operator is available for text columns that have a SASI secondary index. + +.. code-block:: python + + q = Automobile.objects.filter(model__like='%Civic%').allow_filtering() + +:attr:`IS NOT NULL (IsNotNull(column_name)) ` + +The IS NOT NULL operator is not yet supported for C*. + +.. code-block:: python + + q = Automobile.objects.filter(IsNotNull('model')) + +Limitations: + +- Currently, cqlengine does not support SASI index creation. To use this feature, you need to create the SASI index using the core driver. +- Queries using LIKE must use allow_filtering() since the *model* column has no standard secondary index. Note that the server will use the SASI index properly when executing the query. 
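Since cqlengine cannot create the SASI index itself, the index backing a ``__like`` query has to be created through the core driver. A sketch under assumed names (the ``ks.automobile`` table and the index name below are hypothetical; the ``CONTAINS`` mode is what permits ``%Civic%``-style patterns):

```python
# Hypothetical SASI index for the Automobile example; cqlengine's management
# functions cannot create SASI indexes, so the DDL is issued via a raw session.
ddl = (
    "CREATE CUSTOM INDEX IF NOT EXISTS automobile_model_sasi "
    "ON ks.automobile (model) "
    "USING 'org.apache.cassandra.index.sasi.SASIIndex' "
    "WITH OPTIONS = {'mode': 'CONTAINS'}"  # CONTAINS allows %text% LIKE patterns
)

# Against a live cluster, execute it with the session cqlengine already holds:
# from cassandra.cqlengine.connection import get_session
# get_session().execute(ddl)
```

Once the index exists on the server, the ``model__like`` filter shown above works as described.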
+ +TimeUUID Functions +================== + +In addition to querying using regular values, there are two functions you can pass in when querying TimeUUID columns to help make filtering by them easier. Note that these functions don't actually return a value, but instruct the cql interpreter to use the functions in its query. + +.. class:: MinTimeUUID(datetime) + + returns the minimum time uuid value possible for the given datetime + +.. class:: MaxTimeUUID(datetime) + + returns the maximum time uuid value possible for the given datetime + +*Example* + +.. code-block:: python + + class DataStream(Model): + id = columns.UUID(partition_key=True) + time = columns.TimeUUID(primary_key=True) + data = columns.Bytes() + + min_time = datetime(1982, 1, 1) + max_time = datetime(1982, 3, 9) + + DataStream.filter(time__gt=functions.MinTimeUUID(min_time), time__lt=functions.MaxTimeUUID(max_time)) + +Token Function +============== + +The Token function may be used only on the special, virtual column ``pk__token``, which represents the token of the partition key (it also works for composite partition keys). +Cassandra orders returned items by the value of the partition key token, so using cqlengine.Token we can easily paginate through all table rows. + +See http://cassandra.apache.org/doc/cql3/CQL-3.0.html#tokenFun + +*Example* + +.. code-block:: python + + class Items(Model): + id = columns.Text(primary_key=True) + data = columns.Bytes() + + query = Items.objects.all().limit(10) + + first_page = list(query) + last = first_page[-1] + next_page = list(query.filter(pk__token__gt=cqlengine.Token(last.pk))) + +QuerySets are immutable +======================= + +When calling any method that changes a queryset, the method does not actually change the queryset object it's called on, but returns a new queryset object with the attributes of the original queryset, plus the attributes added in the method call. + +*Example* + +..
code-block:: python + + #this produces 3 different querysets + #q does not change after its initial definition + q = Automobiles.objects.filter(year=2012) + tesla2012 = q.filter(manufacturer='Tesla') + honda2012 = q.filter(manufacturer='Honda') + +Ordering QuerySets +================== + +Since Cassandra is essentially a distributed hash table on steroids, the order you get records back in will not be particularly predictable. + +However, you can set a column to order on with the ``.order_by(column_name)`` method. + +*Example* + +.. code-block:: python + + #sort ascending + q = Automobiles.objects.all().order_by('year') + #sort descending + q = Automobiles.objects.all().order_by('-year') + +*Note: Cassandra only supports ordering on a clustering key. In other words, to support ordering results, your model must have more than one primary key, and you must order on a primary key, excluding the first one.* + +*For instance, given our Automobile model, year is the only column we can order on.* + +Values Lists +============ + +QuerySets have a special method, ``.values_list()``: when called, the QuerySet returns lists of values instead of model instances. This can significantly speed things up and lower the memory footprint for large responses. +Each tuple contains the value from the respective field passed into the ``values_list()`` call, so the first item is the first field, etc. For example: + +.. code-block:: python + + items = list(range(20)) + random.shuffle(items) + for i in items: + TestModel.create(id=1, clustering_key=i) + + values = list(TestModel.objects.values_list('clustering_key', flat=True)) + # [19L, 18L, 17L, 16L, 15L, 14L, 13L, 12L, 11L, 10L, 9L, 8L, 7L, 6L, 5L, 4L, 3L, 2L, 1L, 0L] + +Per Query Timeouts +=================== + +By default, all queries are executed with the timeout defined in `~cqlengine.connection.setup()`. +The examples below show how to specify a per-query timeout. +A timeout is specified in seconds and can be an int, float or None.
+None means no timeout. + + +.. code-block:: python + + class Row(Model): + id = columns.Integer(primary_key=True) + name = columns.Text() + + +Fetch all objects with a timeout of 5 seconds + +.. code-block:: python + + Row.objects().timeout(5).all() + +Create a single row with a 50ms timeout + +.. code-block:: python + + Row(id=1, name='Jon').timeout(0.05).create() + +Delete a single row with no timeout + +.. code-block:: python + + Row(id=1).timeout(None).delete() + +Update a single row with no timeout + +.. code-block:: python + + Row(id=1).timeout(None).update(name='Blake') + +Batch query timeouts + +.. code-block:: python + + with BatchQuery(timeout=10) as b: + Row(id=1, name='Jon').create() + + +NOTE: You cannot set both timeout and batch at the same time; the batch will use the timeout defined in its constructor. +Setting the timeout on the model is meaningless and will raise an AssertionError. + + +.. _ttl-change: + +Default TTL and Per Query TTL +============================= + +Model default TTL now relies on the *default_time_to_live* feature, introduced in Cassandra 2.0. It is no longer handled in the CQLEngine Model (cassandra-driver >=3.6). You can set the default TTL of a table like this: + +Example: + +.. code-block:: python + + class User(Model): + __options__ = {'default_time_to_live': 20} + + user_id = columns.UUID(primary_key=True) + ... + +You can set TTL per-query if needed. Here are some examples: + +Example: + +.. code-block:: python + + class User(Model): + __options__ = {'default_time_to_live': 20} + + user_id = columns.UUID(primary_key=True) + ... + + user = User.objects.create(user_id=1) # Default TTL 20 will be set automatically on the server + + user.ttl(30).update(age=21) # Update the TTL to 30 + User.objects.ttl(10).create(user_id=1) # TTL 10 + User(user_id=1, age=21).ttl(10).save() # TTL 10 + + +Named Tables +=================== + +Named tables are a way of querying a table without creating a class.
They're useful for querying system tables or exploring an unfamiliar database. + + +.. code-block:: python + + from cassandra.cqlengine.connection import setup + setup(["127.0.0.1"], "cqlengine_test") + + from cassandra.cqlengine.named import NamedTable + user = NamedTable("cqlengine_test", "user") + user.objects() + user.objects()[0] + + # {u'pk': 1, u't': datetime.datetime(2014, 6, 26, 17, 10, 31, 774000)} diff --git a/docs/cqlengine/third_party.rst b/docs/cqlengine/third_party.rst new file mode 100644 index 0000000..20c26df --- /dev/null +++ b/docs/cqlengine/third_party.rst @@ -0,0 +1,64 @@ +======================== +Third party integrations +======================== + + +Celery +------ + +Here's how, in substance, CQLengine can be plugged into `Celery +`_: + +.. code-block:: python + + from celery import Celery + from celery.signals import worker_process_init, beat_init + from cassandra.cqlengine import connection + from cassandra.cqlengine.connection import ( + cluster as cql_cluster, session as cql_session) + + def cassandra_init(**kwargs): + """ Initialize a clean Cassandra connection. """ + if cql_cluster is not None: + cql_cluster.shutdown() + if cql_session is not None: + cql_session.shutdown() + connection.setup() + + # Initialize worker context for both standard and periodic tasks. + worker_process_init.connect(cassandra_init) + beat_init.connect(cassandra_init) + + app = Celery() + + +uWSGI +----- + +This is the code required for proper connection handling of CQLengine for a +`uWSGI `_-run application: + +.. code-block:: python + + from cassandra.cqlengine import connection + from cassandra.cqlengine.connection import ( + cluster as cql_cluster, session as cql_session) + + try: + from uwsgidecorators import postfork + except ImportError: + # We're not in a uWSGI context, no need to hook Cassandra session + # initialization to the postfork event.
+ pass + else: + @postfork + def cassandra_init(**kwargs): + """ Initialize a new Cassandra session in the context. + + Ensures that a new session is returned for every new request. + """ + if cql_cluster is not None: + cql_cluster.shutdown() + if cql_session is not None: + cql_session.shutdown() + connection.setup() diff --git a/docs/cqlengine/upgrade_guide.rst b/docs/cqlengine/upgrade_guide.rst new file mode 100644 index 0000000..5b0ab39 --- /dev/null +++ b/docs/cqlengine/upgrade_guide.rst @@ -0,0 +1,155 @@ +======================== +Upgrade Guide +======================== + +This is an overview of things that changed as the cqlengine project was merged into +cassandra-driver. While efforts were taken to preserve the API and most functionality exactly, +conversion to this package will still require certain minimal updates (namely, imports). + +**THERE IS ONE FUNCTIONAL CHANGE**, described in the first section below. + +Functional Changes +================== +List Prepend Reversing +---------------------- +Legacy cqlengine included a workaround for a Cassandra bug in which prepended list segments were +reversed (`CASSANDRA-8733 `_). As of +this integration, this workaround is removed. The first released integrated version emits +a warning when prepend is used. Subsequent versions will have this warning removed. + +Date Column Type +---------------- +The Date column type in legacy cqlengine used a ``timestamp`` CQL type and truncated the time. +Going forward, the :class:`~.columns.Date` type represents a ``date`` for Cassandra 2.2+ +(`PYTHON-245 `_). +Users of the legacy functionality should convert models to use :class:`~.columns.DateTime` (which +uses ``timestamp`` internally), and use the built-in ``datetime.date`` for input values. + +Remove cqlengine +================ +To avoid confusion or mistakes using the legacy package in your application, it +is prudent to remove the cqlengine package when upgrading to the integrated version.
+ +The driver setup script will warn if the legacy package is detected during install, +but it will not prevent side-by-side installation. + +Organization +============ +Imports +------- +cqlengine is now integrated as a sub-package of the driver base package 'cassandra'. +Upgrading will require adjusting imports of cqlengine. For example:: + + from cqlengine import columns + +is now:: + + from cassandra.cqlengine import columns + +Package-Level Aliases +--------------------- +Legacy cqlengine defined a number of aliases at the package level, which became redundant +when the package was integrated into the driver. These have been removed in favor of absolute +imports, and referring to canonical definitions. For example, ``cqlengine.ONE`` was an alias +of ``cassandra.ConsistencyLevel.ONE``. In the integrated package, only the +:class:`cassandra.ConsistencyLevel` remains. + +Additionally, submodule aliases are removed from cqlengine in favor of absolute imports. + +These aliases are removed, and not deprecated because they should be straightforward to iron out +at module load time. + +Exceptions +---------- +The legacy cqlengine.exceptions module had a number of Exception classes that were variously +common to the package, or only used in specific modules. Common exceptions were relocated to +cqlengine, and specialized exceptions were placed in the module that raises them. Below is a +listing of updated locations: + +============================ ========== +Exception class New module +============================ ========== +CQLEngineException cassandra.cqlengine +ModelException cassandra.cqlengine.models +ValidationError cassandra.cqlengine +UndefinedKeyspaceException cassandra.cqlengine.connection +LWTException cassandra.cqlengine.query +IfNotExistsWithCounterColumn cassandra.cqlengine.query +============================ ========== + +UnicodeMixin Consolidation +-------------------------- +``class UnicodeMixin`` was defined in several cqlengine modules.
This has been consolidated +to a single definition in the cqlengine package init file. This is not technically part of +the API, but noted here for completeness. + +API Deprecations +================ +This upgrade served as a good juncture to deprecate certain API features and invite users to upgrade +to new ones. The first released version does not change functionality -- only introduces deprecation +warnings. Future releases will remove these features in favor of the alternatives. + +Float/Double Overload +--------------------- +Previously there was no ``Double`` column type. Doubles were modeled by specifying ``Float(double_precision=True)``. +This initializer parameter is now deprecated. Applications should use :class:`~.columns.Double` for CQL ``double``, and :class:`~.columns.Float` +for CQL ``float``. + +Schema Management +----------------- +``cassandra.cqlengine.management.create_keyspace`` is deprecated. Instead, use the new replication-strategy-specific +functions that accept explicit options for known strategies: + +- :func:`~.create_keyspace_simple` +- :func:`~.create_keyspace_network_topology` + +``cassandra.cqlengine.management.delete_keyspace`` is deprecated in favor of a new function, :func:`~.drop_keyspace`. The +intent is simply to make the function match the CQL verb it invokes. + +Model Inheritance +----------------- +The names for class attributes controlling model inheritance are changing. Changes are as follows: + +- Replace 'polymorphic_key' in the base class Column definition with :attr:`~.discriminator_column` +- Replace the '__polymorphic_key__' class attribute in the derived classes with :attr:`~.__discriminator_value__` + +The functionality is unchanged -- the intent here is to make the names and language around these attributes more precise. +For now, the old names are just deprecated, and the mapper will emit warnings if they are used. The old names +will be removed in a future version.
+ +The example below shows a simple translation: + +Before:: + + class Pet(Model): + __table_name__ = 'pet' + owner_id = UUID(primary_key=True) + pet_id = UUID(primary_key=True) + pet_type = Text(polymorphic_key=True) + name = Text() + + class Cat(Pet): + __polymorphic_key__ = 'cat' + + class Dog(Pet): + __polymorphic_key__ = 'dog' + +After:: + + class Pet(models.Model): + __table_name__ = 'pet' + owner_id = UUID(primary_key=True) + pet_id = UUID(primary_key=True) + pet_type = Text(discriminator_column=True) + name = Text() + + class Cat(Pet): + __discriminator_value__ = 'cat' + + class Dog(Pet): + __discriminator_value__ = 'dog' + + +TimeUUID.from_datetime +---------------------- +This function is deprecated in favor of the core utility function :func:`~.uuid_from_time`. diff --git a/docs/dates_and_times.rst b/docs/dates_and_times.rst new file mode 100644 index 0000000..7a89f77 --- /dev/null +++ b/docs/dates_and_times.rst @@ -0,0 +1,87 @@ +Working with Dates and Times +============================ + +This document is meant to provide an overview of the assumptions and limitations of the driver time handling, the +reasoning behind it, and describe approaches to working with these types. + +timestamps (Cassandra DateType) +------------------------------- + +Timestamps in Cassandra are timezone-naive timestamps encoded as milliseconds since UNIX epoch. Clients working with +timestamps in this database usually find it easiest to reason about them if they are always assumed to be UTC. To quote the +pytz documentation, "The preferred way of dealing with times is to always work in UTC, converting to localtime only when +generating output to be read by humans." The driver adheres to this tenet, and assumes timestamps in the database are always UTC. The +driver attempts to make this correct on the way in, and assumes no timezone on the way out.
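The UTC-everywhere convention can be made concrete with the standard library. This sketch (the ``to_epoch_millis`` helper is ours, for illustration, not a driver API) shows how a datetime normalizes to the epoch-millisecond encoding Cassandra uses:

```python
from calendar import timegm
from datetime import datetime, timedelta, timezone


def to_epoch_millis(dt):
    """Illustrative helper: normalize a datetime to UTC epoch milliseconds.

    Naive datetimes are assumed to already be UTC; aware datetimes are
    shifted to UTC first (utctimetuple() handles both cases).
    """
    return timegm(dt.utctimetuple()) * 1000 + dt.microsecond // 1000


# A naive datetime is taken at face value as UTC.
naive = datetime(1970, 1, 1, 2, 0, 0)
print(to_epoch_millis(naive))   # 7200000 (02:00 UTC)

# An aware datetime is shifted: 02:00 at UTC+2 is midnight UTC.
aware = datetime(1970, 1, 1, 2, 0, 0, tzinfo=timezone(timedelta(hours=2)))
print(to_epoch_millis(aware))   # 0
```

The sections below describe how the driver applies this normalization on writes and reads.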
+ +Write Path +~~~~~~~~~~ +When inserting timestamps, the driver handles serialization for the write path as follows: + +If the input is a ``datetime.datetime``, the serialization is normalized by starting with the ``utctimetuple()`` of the +value. + +- If the ``datetime`` object is timezone-aware, the timestamp is shifted, and represents the UTC timestamp equivalent. +- If the ``datetime`` object is timezone-naive, this results in no shift -- any ``datetime`` with no timezone information is assumed to be UTC. + +Note the second point above applies even to "local" times created using ``now()``:: + + >>> d = datetime.now() + + >>> print(d.tzinfo) + None + + +These do not contain timezone information intrinsically, so they will be assumed to be UTC and not shifted. When generating +timestamps in the application, it is clearer to use ``datetime.utcnow()`` to be explicit about it. + +If the input for a timestamp is numeric, it is assumed to be an epoch-relative millisecond timestamp, as specified in the +CQL spec -- no scaling or conversion is done. + +Read Path +~~~~~~~~~ +The driver always assumes persisted timestamps are UTC and makes no attempt to localize them. Returned values are +timezone-naive ``datetime.datetime``. We follow this approach because the datetime API has deficiencies around daylight +saving time, and the de facto package for handling this is a third-party package (we try to minimize external dependencies +and not make decisions for the integrator). + +The decision for how to handle timezones is left to the application. For the most part it is straightforward to apply +localization to the ``datetime``\s returned by queries. One prevalent method is to use pytz for localization:: + + import pytz + user_tz = pytz.timezone('US/Central') + timestamp_naive = row.ts + timestamp_utc = pytz.utc.localize(timestamp_naive) + timestamp_presented = timestamp_utc.astimezone(user_tz) + +This is the most robust approach (likely refactored into a function).
If it is deemed too cumbersome to apply for all call +sites in the application, it is possible to patch the driver with custom deserialization for this type. However, doing +this depends somewhat on internal APIs and what extensions are present, so we will only mention the possibility, and +not spell it out here. + +date, time (Cassandra DateType) +------------------------------- +Date and time in Cassandra are idealized markers, much like ``datetime.date`` and ``datetime.time`` in the Python standard +library. Unlike these Python implementations, the Cassandra encoding supports much wider ranges. To accommodate these +ranges without overflow, this driver returns these data in custom types: :class:`.util.Date` and :class:`.util.Time`. + +Write Path +~~~~~~~~~~ +For simple (not prepared) statements, the input values for each of these can be either a string literal or an encoded +integer. See `Working with dates `_ +or `Working with time `_ for details +on the encoding or string formats. + +For prepared statements, the driver accepts anything that can be used to construct the :class:`.util.Date` or +:class:`.util.Time` classes. See the linked API docs for details. + +Read Path +~~~~~~~~~ +The driver always returns custom types for ``date`` and ``time``. + +The driver returns :class:`.util.Date` for ``date`` in order to accommodate the wider range of values without overflow. +For applications working within the supported range of [``datetime.MINYEAR``, ``datetime.MAXYEAR``], these are easily +converted to standard ``datetime.date`` instances using :meth:`.Date.date`. + +The driver returns :class:`.util.Time` for ``time`` in order to retain nanosecond precision stored in the database. +For applications not concerned with this level of precision, these are easily converted to standard ``datetime.time`` +instances using :meth:`.Time.time`.
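If pytz is unavailable, the same localization of driver-returned naive UTC timestamps can be done with only the standard library. The fixed UTC-6 offset below stands in for a real user timezone and is purely illustrative; a sketch:

```python
from datetime import datetime, timedelta, timezone

# A timezone-naive value, as the driver returns for a timestamp column.
timestamp_naive = datetime(2019, 11, 19, 18, 30, 0)

# Mark it as UTC, then convert for display. The fixed offset here is
# illustrative; in practice you would look up the user's zone (e.g. via
# the zoneinfo module on Python 3.9+).
timestamp_utc = timestamp_naive.replace(tzinfo=timezone.utc)
user_tz = timezone(timedelta(hours=-6))  # stand-in for e.g. US Central standard time
timestamp_presented = timestamp_utc.astimezone(user_tz)
print(timestamp_presented.isoformat())  # 2019-11-19T12:30:00-06:00
```

Note ``replace(tzinfo=...)`` only attaches the zone without shifting the wall-clock value, which is exactly what is wanted for a value already known to be UTC.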
diff --git a/docs/execution_profiles.rst b/docs/execution_profiles.rst new file mode 100644 index 0000000..698f3db --- /dev/null +++ b/docs/execution_profiles.rst @@ -0,0 +1,156 @@ +Execution Profiles +================== + +Execution profiles aim at making it easier to execute requests in different ways within +a single connected ``Session``. Execution profiles are being introduced to deal with the exploding number of +configuration options, especially as the database platform evolves more complex workloads. + +The legacy configuration remains intact, but legacy and Execution Profile APIs +cannot be used simultaneously on the same client ``Cluster``. Legacy configuration +will be removed in the next major release (4.0). + +An execution profile and its parameters should be unique across ``Cluster`` instances. +For example, an execution profile and its ``LoadBalancingPolicy`` should +not be applied to more than one ``Cluster`` instance. + +This document explains how Execution Profiles relate to existing settings, and shows how to use the new profiles for +request execution. 
+ +Mapping Legacy Parameters to Profiles +------------------------------------- + +Execution profiles can inherit from :class:`.cluster.ExecutionProfile`, and currently provide the following options, +previously input from the noted attributes: + +- load_balancing_policy - :attr:`.Cluster.load_balancing_policy` +- request_timeout - :attr:`.Session.default_timeout`, optional :meth:`.Session.execute` parameter +- retry_policy - :attr:`.Cluster.default_retry_policy`, optional :attr:`.Statement.retry_policy` attribute +- consistency_level - :attr:`.Session.default_consistency_level`, optional :attr:`.Statement.consistency_level` attribute +- serial_consistency_level - :attr:`.Session.default_serial_consistency_level`, optional :attr:`.Statement.serial_consistency_level` attribute +- row_factory - :attr:`.Session.row_factory` attribute + +When using the new API, these parameters can be defined by instances of :class:`.cluster.ExecutionProfile`. + +Using Execution Profiles +------------------------ +Default +~~~~~~~ + +.. code:: python + + from cassandra.cluster import Cluster + cluster = Cluster() + session = cluster.connect() + local_query = 'SELECT rpc_address FROM system.local' + for _ in cluster.metadata.all_hosts(): + print session.execute(local_query)[0] + + +.. parsed-literal:: + + Row(rpc_address='127.0.0.2') + Row(rpc_address='127.0.0.1') + + +The default execution profile is built from Cluster parameters and default Session attributes. This profile matches existing default +parameters. + +Initializing cluster with profiles +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. 
code:: python + + from cassandra.cluster import ExecutionProfile + from cassandra.policies import WhiteListRoundRobinPolicy + + node1_profile = ExecutionProfile(load_balancing_policy=WhiteListRoundRobinPolicy(['127.0.0.1'])) + node2_profile = ExecutionProfile(load_balancing_policy=WhiteListRoundRobinPolicy(['127.0.0.2'])) + + profiles = {'node1': node1_profile, 'node2': node2_profile} + cluster = Cluster(execution_profiles=profiles) + session = cluster.connect() + for _ in cluster.metadata.all_hosts(): + print session.execute(local_query, execution_profile='node1')[0] + + +.. parsed-literal:: + + Row(rpc_address='127.0.0.1') + Row(rpc_address='127.0.0.1') + + +.. code:: python + + for _ in cluster.metadata.all_hosts(): + print session.execute(local_query, execution_profile='node2')[0] + + +.. parsed-literal:: + + Row(rpc_address='127.0.0.2') + Row(rpc_address='127.0.0.2') + + +.. code:: python + + for _ in cluster.metadata.all_hosts(): + print session.execute(local_query)[0] + + +.. parsed-literal:: + + Row(rpc_address='127.0.0.2') + Row(rpc_address='127.0.0.1') + +Note that, even when custom profiles are injected, the default ``TokenAwarePolicy(DCAwareRoundRobinPolicy())`` is still +present. To override the default, specify a policy with the :data:`~.cluster.EXEC_PROFILE_DEFAULT` key. + +.. code:: python + + from cassandra.cluster import EXEC_PROFILE_DEFAULT + profile = ExecutionProfile(request_timeout=30) + cluster = Cluster(execution_profiles={EXEC_PROFILE_DEFAULT: profile}) + + +Adding named profiles +~~~~~~~~~~~~~~~~~~~~~ + +New profiles can be added by constructing them from scratch, or by deriving from the default: + +.. code:: python + + locked_execution = ExecutionProfile(load_balancing_policy=WhiteListRoundRobinPolicy(['127.0.0.1'])) + node1_profile = 'node1_whitelist' + cluster.add_execution_profile(node1_profile, locked_execution) + + for _ in cluster.metadata.all_hosts(): + print session.execute(local_query, execution_profile=node1_profile)[0] + + +..
parsed-literal:: + + Row(rpc_address='127.0.0.1') + Row(rpc_address='127.0.0.1') + +See :meth:`.Cluster.add_execution_profile` for details and optional parameters. + +Passing a profile instance without mapping +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +We also have the ability to pass profile instances to be used for execution, but not added to the mapping: + +.. code:: python + + from cassandra.query import tuple_factory + + tmp = session.execution_profile_clone_update('node1', request_timeout=100, row_factory=tuple_factory) + + print session.execute(local_query, execution_profile=tmp)[0] + print session.execute(local_query, execution_profile='node1')[0] + +.. parsed-literal:: + + ('127.0.0.1',) + Row(rpc_address='127.0.0.1') + +The new profile is a shallow copy, so the ``tmp`` profile shares a load balancing policy with one managed by the cluster. +If reference objects are to be updated in the clone, one would typically set those attributes to a new instance. diff --git a/docs/faq.rst b/docs/faq.rst new file mode 100644 index 0000000..56cb648 --- /dev/null +++ b/docs/faq.rst @@ -0,0 +1,83 @@ +Frequently Asked Questions +========================== + +See also :doc:`cqlengine FAQ ` + +Why do connections or IO operations timeout in my WSGI application? +------------------------------------------------------------------- +Depending on your application process model, it may be forking after driver Session is created. Most IO reactors do not handle this, and problems will manifest as timeouts. + +To avoid this, make sure to create sessions per process, after the fork. Using uWSGI and Flask for example: + +.. 
code-block:: python + + from flask import Flask + from uwsgidecorators import postfork + from cassandra.cluster import Cluster + + session = None + prepared = None + + @postfork + def connect(): + global session, prepared + session = Cluster().connect() + prepared = session.prepare("SELECT release_version FROM system.local WHERE key=?") + + app = Flask(__name__) + + @app.route('/') + def server_version(): + row = session.execute(prepared, ('local',))[0] + return row.release_version + +uWSGI provides a ``postfork`` hook you can use to create sessions and prepared statements after the child process forks. + +How do I trace a request? +------------------------- +Request tracing can be turned on for any request by setting ``trace=True`` in :meth:`.Session.execute_async`. View the results by waiting on the future, then :meth:`.ResponseFuture.get_query_trace`. +Since tracing is done asynchronously to the request, this method polls until the trace is complete before querying data. + +.. code-block:: python + + >>> future = session.execute_async("SELECT * FROM system.local", trace=True) + >>> result = future.result() + >>> trace = future.get_query_trace() + >>> for e in trace.events: + >>> print e.source_elapsed, e.description + + 0:00:00.000077 Parsing select * from system.local + 0:00:00.000153 Preparing statement + 0:00:00.000309 Computing ranges to query + 0:00:00.000368 Submitting range requests on 1 ranges with a concurrency of 1 (279.77142 rows per range expected) + 0:00:00.000422 Submitted 1 concurrent range requests covering 1 ranges + 0:00:00.000480 Executing seq scan across 1 sstables for (min(-9223372036854775808), min(-9223372036854775808)) + 0:00:00.000669 Read 1 live and 0 tombstone cells + 0:00:00.000755 Scanned 1 rows and matched 1 + +``trace`` is a :class:`QueryTrace` object. + +How do I determine the replicas for a query? 
+---------------------------------------------- +With prepared statements, the replicas are obtained by ``routing_key``, based on current cluster token metadata: + +.. code-block:: python + + >>> prepared = session.prepare("SELECT * FROM example.t WHERE key=?") + >>> bound = prepared.bind((1,)) + >>> replicas = cluster.metadata.get_replicas(bound.keyspace, bound.routing_key) + >>> for h in replicas: + >>> print h.address + 127.0.0.1 + 127.0.0.2 + +``replicas`` is a list of :class:`Host` objects. + +How does the driver manage request retries? +------------------------------------------- +By default, retries are managed by the :attr:`.Cluster.default_retry_policy` set on the session Cluster. It can also +be specialized per statement by setting :attr:`.Statement.retry_policy`. + +Retries are presently attempted on the same coordinator, but this may change in the future. + +Please see :class:`.policies.RetryPolicy` for further details. diff --git a/docs/getting_started.rst b/docs/getting_started.rst new file mode 100644 index 0000000..2dc32e6 --- /dev/null +++ b/docs/getting_started.rst @@ -0,0 +1,405 @@ +Getting Started +=============== + +First, make sure you have the driver properly :doc:`installed `. + +Connecting to Cassandra +----------------------- +Before we can start executing any queries against a Cassandra cluster we need to set up +an instance of :class:`~.Cluster`. As the name suggests, you will typically have one +instance of :class:`~.Cluster` for each Cassandra cluster you want to interact +with. + +The simplest way to create a :class:`~.Cluster` is like this: + +.. code-block:: python + + from cassandra.cluster import Cluster + + cluster = Cluster() + +This will attempt to connect to a Cassandra instance on your +local machine (127.0.0.1). You can also specify a list of IP +addresses for nodes in your cluster: + +..
code-block:: python + + from cassandra.cluster import Cluster + + cluster = Cluster(['192.168.0.1', '192.168.0.2']) + +The set of IP addresses we pass to the :class:`~.Cluster` is simply +an initial set of contact points. After the driver connects to one +of these nodes it will *automatically discover* the rest of the +nodes in the cluster and connect to them, so you don't need to list +every node in your cluster. + +If you need to use a non-standard port, use SSL, or customize the driver's +behavior in some other way, this is the place to do it: + +.. code-block:: python + + from cassandra.cluster import Cluster + from cassandra.policies import DCAwareRoundRobinPolicy + + cluster = Cluster( + ['10.1.1.3', '10.1.1.4', '10.1.1.5'], + load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='US_EAST'), + port=9042) + + +You can find a more complete list of options in the :class:`~.Cluster` documentation. + +Instantiating a :class:`~.Cluster` does not actually connect us to any nodes. +To establish connections and begin executing queries we need a +:class:`~.Session`, which is created by calling :meth:`.Cluster.connect()`: + +.. code-block:: python + + cluster = Cluster() + session = cluster.connect() + +The :meth:`~.Cluster.connect()` method takes an optional ``keyspace`` argument +which sets the default keyspace for all queries made through that :class:`~.Session`: + +.. code-block:: python + + cluster = Cluster() + session = cluster.connect('mykeyspace') + + +You can always change a Session's keyspace using :meth:`~.Session.set_keyspace` or +by executing a ``USE `` query: + +.. code-block:: python + + session.set_keyspace('users') + # or you can do this instead + session.execute('USE users') + + +Executing Queries +----------------- +Now that we have a :class:`.Session` we can begin to execute queries. The simplest +way to execute a query is to use :meth:`~.Session.execute()`: + +.. 
code-block:: python + + rows = session.execute('SELECT name, age, email FROM users') + for user_row in rows: + print user_row.name, user_row.age, user_row.email + +This will transparently pick a Cassandra node to execute the query against +and handle any retries that are necessary if the operation fails. + +By default, each row in the result set will be a +`namedtuple `_. +Each row will have a matching attribute for each column defined in the schema, +such as ``name``, ``age``, and so on. You can also treat them as normal tuples +by unpacking them or accessing fields by position. The following three +examples are equivalent: + +.. code-block:: python + + rows = session.execute('SELECT name, age, email FROM users') + for row in rows: + print row.name, row.age, row.email + +.. code-block:: python + + rows = session.execute('SELECT name, age, email FROM users') + for (name, age, email) in rows: + print name, age, email + +.. code-block:: python + + rows = session.execute('SELECT name, age, email FROM users') + for row in rows: + print row[0], row[1], row[2] + +If you prefer another result format, such as a ``dict`` per row, you +can change the :attr:`~.Session.row_factory` attribute. + +For queries that will be run repeatedly, you should use +`Prepared statements <#prepared-statements>`_. + +Passing Parameters to CQL Queries +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +When executing non-prepared statements, the driver supports two forms of +parameter place-holders: positional and named. + +Positional parameters are used with a ``%s`` placeholder. For example, +when you execute: + +.. 
code-block:: python + + session.execute( + """ + INSERT INTO users (name, credits, user_id) + VALUES (%s, %s, %s) + """, + ("John O'Reilly", 42, uuid.uuid1()) + ) + +It is translated to the following CQL query:: + + INSERT INTO users (name, credits, user_id) + VALUES ('John O''Reilly', 42, 2644bada-852c-11e3-89fb-e0b9a54a6d93) + +Note that you should use ``%s`` for all types of arguments, not just strings. +For example, this would be **wrong**: + +.. code-block:: python + + session.execute("INSERT INTO USERS (name, age) VALUES (%s, %d)", ("bob", 42)) # wrong + +Instead, use ``%s`` for the age placeholder. + +If you need to use a literal ``%`` character, use ``%%``. + +**Note**: you must always use a sequence for the second argument, even if you are +only passing in a single variable: + +.. code-block:: python + + session.execute("INSERT INTO foo (bar) VALUES (%s)", "blah") # wrong + session.execute("INSERT INTO foo (bar) VALUES (%s)", ("blah")) # wrong + session.execute("INSERT INTO foo (bar) VALUES (%s)", ("blah", )) # right + session.execute("INSERT INTO foo (bar) VALUES (%s)", ["blah"]) # right + + +Note that the second line is incorrect because in Python, single-element tuples +require a comma. + +Named place-holders use the ``%(name)s`` form: + +.. code-block:: python + + session.execute( + """ + INSERT INTO users (name, credits, user_id, username) + VALUES (%(name)s, %(credits)s, %(user_id)s, %(name)s) + """, + {'name': "John O'Reilly", 'credits': 42, 'user_id': uuid.uuid1()} + ) + +Note that you can repeat placeholders with the same name, such as ``%(name)s`` +in the above example. + +Only data values should be supplied this way. Other items, such as keyspaces, +table names, and column names should be set ahead of time (typically using +normal string formatting). + +.. _type-conversions: + +Type Conversions +^^^^^^^^^^^^^^^^ +For non-prepared statements, Python types are cast to CQL literals in the +following way: + +.. 
table:: + + +--------------------+-------------------------+ + | Python Type | CQL Literal Type | + +====================+=========================+ + | ``None`` | ``NULL`` | + +--------------------+-------------------------+ + | ``bool`` | ``boolean`` | + +--------------------+-------------------------+ + | ``float`` | | ``float`` | + | | | ``double`` | + +--------------------+-------------------------+ + | | ``int`` | | ``int`` | + | | ``long`` | | ``bigint`` | + | | | ``varint`` | + | | | ``smallint`` | + | | | ``tinyint`` | + | | | ``counter`` | + +--------------------+-------------------------+ + | ``decimal.Decimal``| ``decimal`` | + +--------------------+-------------------------+ + | | ``str`` | | ``ascii`` | + | | ``unicode`` | | ``varchar`` | + | | | ``text`` | + +--------------------+-------------------------+ + | | ``buffer`` | ``blob`` | + | | ``bytearray`` | | + +--------------------+-------------------------+ + | ``date`` | ``date`` | + +--------------------+-------------------------+ + | ``datetime`` | ``timestamp`` | + +--------------------+-------------------------+ + | ``time`` | ``time`` | + +--------------------+-------------------------+ + | | ``list`` | ``list`` | + | | ``tuple`` | | + | | generator | | + +--------------------+-------------------------+ + | | ``set`` | ``set`` | + | | ``frozenset`` | | + +--------------------+-------------------------+ + | | ``dict`` | ``map`` | + | | ``OrderedDict`` | | + +--------------------+-------------------------+ + | ``uuid.UUID`` | | ``timeuuid`` | + | | | ``uuid`` | + +--------------------+-------------------------+ + + +Asynchronous Queries +^^^^^^^^^^^^^^^^^^^^ +The driver supports asynchronous query execution through +:meth:`~.Session.execute_async()`. Instead of waiting for the query to +complete and returning rows directly, this method almost immediately +returns a :class:`~.ResponseFuture` object. There are two ways of +getting the final result from this object. 
+
+The first is by calling :meth:`~.ResponseFuture.result()` on it. If
+the query has not yet completed, this will block until it has and
+then return the result or raise an Exception if an error occurred.
+For example:
+
+.. code-block:: python
+
+    from cassandra import ReadTimeout
+
+    query = "SELECT * FROM users WHERE user_id=%s"
+    future = session.execute_async(query, [user_id])
+
+    # ... do some other work
+
+    try:
+        rows = future.result()
+        user = rows[0]
+        print(user.name, user.age)
+    except ReadTimeout:
+        log.exception("Query timed out:")
+
+This works well for executing many queries concurrently:
+
+.. code-block:: python
+
+    # build a list of futures
+    futures = []
+    query = "SELECT * FROM users WHERE user_id=%s"
+    for user_id in ids_to_fetch:
+        futures.append(session.execute_async(query, [user_id]))
+
+    # wait for them to complete and use the results
+    for future in futures:
+        rows = future.result()
+        print(rows[0].name)
+
+Alternatively, instead of calling :meth:`~.ResponseFuture.result()`,
+you can attach callback and errback functions through the
+:meth:`~.ResponseFuture.add_callback()`,
+:meth:`~.ResponseFuture.add_errback()`, and
+:meth:`~.ResponseFuture.add_callbacks()` methods. If you have used
+Twisted Python before, this is designed to be a lightweight version of
+that:
+
+.. 
code-block:: python + + def handle_success(rows): + user = rows[0] + try: + process_user(user.name, user.age, user.id) + except Exception: + log.error("Failed to process user %s", user.id) + # don't re-raise errors in the callback + + def handle_error(exception): + log.error("Failed to fetch user info: %s", exception) + + + future = session.execute_async(query) + future.add_callbacks(handle_success, handle_error) + +There are a few important things to remember when working with callbacks: + * **Exceptions that are raised inside the callback functions will be logged and then ignored.** + * Your callback will be run on the event loop thread, so any long-running + operations will prevent other requests from being handled + + +Setting a Consistency Level +--------------------------- +The consistency level used for a query determines how many of the +replicas of the data you are interacting with need to respond for +the query to be considered a success. + +By default, :attr:`.ConsistencyLevel.LOCAL_ONE` will be used for all queries. +You can specify a different default for the session on :attr:`.Session.default_consistency_level` +if the cluster is configured in legacy mode (not using execution profiles). Otherwise this can +be done by setting the :attr:`.ExecutionProfile.consistency_level` for the execution profile with key +:data:`~.cluster.EXEC_PROFILE_DEFAULT`. +To specify a different consistency level per request, wrap queries +in a :class:`~.SimpleStatement`: + +.. code-block:: python + + from cassandra import ConsistencyLevel + from cassandra.query import SimpleStatement + + query = SimpleStatement( + "INSERT INTO users (name, age) VALUES (%s, %s)", + consistency_level=ConsistencyLevel.QUORUM) + session.execute(query, ('John', 42)) + +Prepared Statements +------------------- +Prepared statements are queries that are parsed by Cassandra and then saved +for later use. 
When the driver uses a prepared statement, it only needs to +send the values of parameters to bind. This lowers network traffic +and CPU utilization within Cassandra because Cassandra does not have to +re-parse the query each time. + +To prepare a query, use :meth:`.Session.prepare()`: + +.. code-block:: python + + user_lookup_stmt = session.prepare("SELECT * FROM users WHERE user_id=?") + + users = [] + for user_id in user_ids_to_query: + user = session.execute(user_lookup_stmt, [user_id]) + users.append(user) + +:meth:`~.Session.prepare()` returns a :class:`~.PreparedStatement` instance +which can be used in place of :class:`~.SimpleStatement` instances or literal +string queries. It is automatically prepared against all nodes, and the driver +handles re-preparing against new nodes and restarted nodes when necessary. + +Note that the placeholders for prepared statements are ``?`` characters. This +is different than for simple, non-prepared statements (although future versions +of the driver may use the same placeholders for both). + +Setting a Consistency Level with Prepared Statements +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +To specify a consistency level for prepared statements, you have two options. + +The first is to set a default consistency level for every execution of the +prepared statement: + +.. code-block:: python + + from cassandra import ConsistencyLevel + + cluster = Cluster() + session = cluster.connect("mykeyspace") + user_lookup_stmt = session.prepare("SELECT * FROM users WHERE user_id=?") + user_lookup_stmt.consistency_level = ConsistencyLevel.QUORUM + + # these will both use QUORUM + user1 = session.execute(user_lookup_stmt, [user_id1])[0] + user2 = session.execute(user_lookup_stmt, [user_id2])[0] + +The second option is to create a :class:`~.BoundStatement` from the +:class:`~.PreparedStatement` and binding parameters and set a consistency +level on that: + +.. 
code-block:: python + + # override the QUORUM default + user3_lookup = user_lookup_stmt.bind([user_id3]) + user3_lookup.consistency_level = ConsistencyLevel.ALL + user3 = session.execute(user3_lookup) diff --git a/docs/index.rst b/docs/index.rst new file mode 100644 index 0000000..13fca18 --- /dev/null +++ b/docs/index.rst @@ -0,0 +1,90 @@ +Python Cassandra Driver +======================= +A Python client driver for `Apache Cassandra `_. +This driver works exclusively with the Cassandra Query Language v3 (CQL3) +and Cassandra's native protocol. Cassandra 2.1+ is supported. + +The driver supports Python 2.7, 3.4, 3.5, 3.6 and 3.7. + +This driver is open source under the +`Apache v2 License `_. +The source code for this driver can be found on `GitHub `_. + +**Note:** DataStax products do not support big-endian systems. + +Contents +-------- +:doc:`installation` + How to install the driver. + +:doc:`getting_started` + A guide through the first steps of connecting to Cassandra and executing queries + +:doc:`execution_profiles` + An introduction to a more flexible way of configuring request execution + +:doc:`lwt` + Working with results of conditional requests + +:doc:`object_mapper` + Introduction to the integrated object mapper, cqlengine + +:doc:`performance` + Tips for getting good performance. + +:doc:`query_paging` + Notes on paging large query results + +:doc:`security` + An overview of the security features of the driver + +:doc:`upgrading` + A guide to upgrading versions of the driver + +:doc:`user_defined_types` + Working with Cassandra 2.1's user-defined types + +:doc:`dates_and_times` + Some discussion on the driver's approach to working with timestamp, date, time types + +:doc:`cloud` + A guide to connecting to Datastax Apollo + +:doc:`faq` + A collection of Frequently Asked Questions + +:doc:`api/index` + The API documentation. + +.. 
toctree::
+   :hidden:
+
+   api/index
+   installation
+   getting_started
+   upgrading
+   execution_profiles
+   performance
+   query_paging
+   lwt
+   security
+   user_defined_types
+   object_mapper
+   dates_and_times
+   cloud
+   faq
+
+Getting Help
+------------
+Visit the :doc:`FAQ section ` in this documentation.
+
+Please send questions to the `mailing list `_.
+
+Alternatively, you can use the ``#datastax-drivers`` channel in the DataStax Academy Slack to ask questions in real time.
+
+Reporting Issues
+----------------
+Please report any bugs and make any feature requests on the
+`JIRA `_ issue tracker.
+
+If you would like to contribute, please feel free to open a pull request.
diff --git a/docs/installation.rst b/docs/installation.rst
new file mode 100644
index 0000000..a6eedf4
--- /dev/null
+++ b/docs/installation.rst
@@ -0,0 +1,247 @@
+Installation
+============
+
+Supported Platforms
+-------------------
+Python 2.7, 3.4, 3.5, 3.6 and 3.7 are supported. Both CPython (the standard Python
+implementation) and `PyPy `_ are supported and tested.
+
+Linux, OSX, and Windows are supported.
+
+Installation through pip
+------------------------
+`pip `_ is the suggested tool for installing
+packages. It will handle installing all Python dependencies for the driver at
+the same time as the driver itself. To install the driver::
+
+    pip install cassandra-driver
+
+You can use ``pip install --pre cassandra-driver`` if you need to install a beta version.
+
+**Note**: if intending to use optional extensions, install the `dependencies <#optional-non-python-dependencies>`_ first. The driver may need to be reinstalled if dependencies are added after the initial installation.
+
+Speeding Up Installation
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+By default, installing the driver through ``pip`` uses Cython to compile
+certain parts of the driver.
+This makes those hot paths faster at runtime, but the Cython compilation
+process can take a long time -- as long as 10 minutes in some environments.
+ +In environments where performance is less important, it may be worth it to +:ref:`disable Cython as documented below `. +You can also use ``CASS_DRIVER_BUILD_CONCURRENCY`` to increase the number of +threads used to build the driver and any C extensions: + +.. code-block:: bash + + $ # installing from source + $ CASS_DRIVER_BUILD_CONCURRENCY=8 python setup.py install + $ # installing from pip + $ CASS_DRIVER_BUILD_CONCURRENCY=8 pip install cassandra-driver + +Finally, you can `build a wheel `_ from the driver's source and distribute that to computers +that depend on it. For example: + +.. code-block:: bash + + $ git clone https://github.com/datastax/python-driver.git + $ cd python-driver + $ git checkout 3.14.0 # or other desired tag + $ pip install wheel + $ python setup.py bdist_wheel + $ # build wheel with optional concurrency settings + $ CASS_DRIVER_BUILD_CONCURRENCY=8 python setup.py bdist_wheel + $ scp ./dist/cassandra_driver-3.14.0-cp27-cp27mu-linux_x86_64.whl user@host:/remote_dir + +Then, on the remote machine or machines, simply + +.. code-block:: bash + + $ pip install /remote_dir/cassandra_driver-3.14.0-cp27-cp27mu-linux_x86_64.whl + +Note that the wheel created this way is a `platform wheel +`_ +and as such will not work across platforms or architectures. + +OSX Installation Error +^^^^^^^^^^^^^^^^^^^^^^ +If you're installing on OSX and have XCode 5.1 installed, you may see an error like this:: + + clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] + +To fix this, re-run the installation with an extra compilation flag: + +.. code-block:: bash + + ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future pip install cassandra-driver + +.. _windows_build: + +Windows Installation Notes +-------------------------- +Installing the driver with extensions in Windows sometimes presents some challenges. A few notes about common +hang-ups: + +Setup requires a compiler. 
When using Python 2, this is as simple as installing `this package `_
+(this link is also emitted during install if setuptools is unable to find the resources it needs). Depending on your
+system settings, this package may install as a user-specific application. Make sure to install for everyone, or at least
+as the user that will be building the Python environment.
+
+It is also possible to run the build with your compiler of choice. Just make sure to have your environment set up with
+the proper paths. Make sure the compiler target architecture matches the bitness of your Python runtime.
+Perhaps the easiest way to do this is to run the build/install from a Visual Studio Command Prompt (a
+shortcut installed with Visual Studio that sources the appropriate environment and presents a shell).
+
+Manual Installation
+-------------------
+You can always install the driver directly from a source checkout or tarball.
+When installing manually, ensure the Python dependencies are already
+installed. You can find the list of dependencies in
+`requirements.txt `_.
+
+Once the dependencies are installed, simply run::
+
+    python setup.py install
+
+Verifying your Installation
+---------------------------
+To check if the installation was successful, you can run::
+
+    python -c 'import cassandra; print(cassandra.__version__)'
+
+It should print something like "3.20.2".
+
+(*Optional*) Compression Support
+--------------------------------
+Compression can optionally be used for communication between the driver and
+Cassandra. There are currently two supported compression algorithms:
+snappy (in Cassandra 1.2+) and LZ4 (only in Cassandra 2.0+). If either is
+available for the driver and Cassandra also supports it, it will
+be used automatically.
+
+For lz4 support::
+
+    pip install lz4
+
+For snappy support::
+
+    pip install python-snappy
+
+(If using a Debian Linux derivative such as Ubuntu, it may be easier to
+just run ``apt-get install python-snappy``.)
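Because both compression libraries are optional, the algorithm actually negotiated depends on what can be imported in the running interpreter. The following driver-independent sketch (the ``available_compression`` helper is illustrative, not a driver API) shows the try-import pattern used to probe for them:

```python
def available_compression():
    """Probe for the optional compression libraries (lz4, snappy)."""
    found = []
    for name in ("lz4", "snappy"):
        try:
            __import__(name)  # succeeds only if the package is installed
            found.append(name)
        except ImportError:
            pass
    return found

print(available_compression())
```

Running this before connecting is a quick way to confirm that ``pip install lz4`` or ``pip install python-snappy`` actually took effect in the interpreter you are using.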
+
+(*Optional*) Metrics Support
+----------------------------
+The driver has built-in support for capturing :attr:`.Cluster.metrics` about
+the queries you run. However, the ``scales`` library is required to
+support this::
+
+    pip install scales
+
+
+(*Optional*) Non-python Dependencies
+------------------------------------
+The driver has several **optional** features that have non-Python dependencies.
+
+C Extensions
+^^^^^^^^^^^^
+By default, a number of extensions are compiled, providing faster hashing
+for token-aware routing with the ``Murmur3Partitioner``,
+`libev `_ event loop integration,
+and Cython optimized extensions.
+
+When installing manually through setup.py, you can disable all of them with
+the ``--no-extensions`` option, or selectively disable them
+with ``--no-murmur3``, ``--no-libev``, or ``--no-cython``.
+
+To compile the extensions, ensure that GCC and the Python headers are available.
+
+On Ubuntu and Debian, this can be accomplished by running::
+
+    $ sudo apt-get install gcc python-dev
+
+On RedHat and RedHat-based systems like CentOS and Fedora::
+
+    $ sudo yum install gcc python-devel
+
+On OS X, homebrew installations of Python should provide the necessary headers.
+
+See :ref:`windows_build` for notes on configuring the build environment on Windows.
+
+.. _cython-extensions:
+
+Cython-based Extensions
+~~~~~~~~~~~~~~~~~~~~~~~
+By default, this package uses `Cython `_ to optimize core modules and build custom extensions.
+This is not a hard requirement, but is engaged by default to build extensions offering better performance than the
+pure Python implementation.
+
+This is a costly build phase, especially in clean environments where the Cython compiler must be built.
+This build phase can be avoided using the build switch, or an environment variable::
+
+    python setup.py install --no-cython
+
+Alternatively, an environment variable can be used to switch this option regardless of
+context::
+
+    CASS_DRIVER_NO_CYTHON=1
+    - or, to disable all extensions:
+    CASS_DRIVER_NO_EXTENSIONS=1
+
+This method is required when using pip, which provides no other way of injecting user options in a single command::
+
+    CASS_DRIVER_NO_CYTHON=1 pip install cassandra-driver
+    CASS_DRIVER_NO_CYTHON=1 sudo -E pip install ~/python-driver
+
+The environment variable is the preferred option because it spans all invocations of setup.py, and will
+prevent Cython from being materialized as a setup requirement.
+
+If your sudo configuration does not allow SETENV, you must push the option flag down via pip. However, pip
+applies these options to all dependencies (which break on the custom flag). Therefore, you must first install
+dependencies, then use ``--install-option``::
+
+    sudo pip install six futures
+    sudo pip install --install-option="--no-cython"
+
+
+libev support
+^^^^^^^^^^^^^
+The driver currently uses Python's ``asyncore`` module for its default
+event loop. For better performance, ``libev`` is also supported through
+a C extension.
+
+If you're on Linux, you should be able to install libev
+through a package manager. For example, on Debian/Ubuntu::
+
+    $ sudo apt-get install libev4 libev-dev
+
+On RHEL/CentOS/Fedora::
+
+    $ sudo yum install libev libev-devel
+
+If you're on Mac OS X, you should be able to install libev
+through `Homebrew `_. For example, on Mac OS X::
+
+    $ brew install libev
+
+The libev extension is not built for Windows (the build process is complex, and the Windows implementation uses
+select anyway).
+
+If successful, you should be able to build and install the extension
+(just using ``setup.py build`` or ``setup.py install``) and then use
+the libev event loop by doing the following:
+
+.. code-block:: python
+
+    >>> from cassandra.io.libevreactor import LibevConnection
+    >>> from cassandra.cluster import Cluster
+
+    >>> cluster = Cluster()
+    >>> cluster.connection_class = LibevConnection
+    >>> session = cluster.connect()
+
+(*Optional*) Configuring SSL
+-----------------------------
+Andrew Mussey has published a thorough guide on
+`Using SSL with the DataStax Python driver `_.
diff --git a/docs/lwt.rst b/docs/lwt.rst
new file mode 100644
index 0000000..2cc272f
--- /dev/null
+++ b/docs/lwt.rst
@@ -0,0 +1,91 @@
+Lightweight Transactions (Compare-and-set)
+==========================================
+
+Lightweight Transactions (LWTs) are mostly pass-through CQL for the driver. However,
+the server returns some specialized results indicating the outcome and optional state
+preceding the transaction.
+
+For pertinent execution parameters, see :attr:`.Statement.serial_consistency_level`.
+
+This section discusses working with specialized result sets returned by the server for LWTs,
+and how to work with them using the driver.
+
+
+Specialized Results
+-------------------
+The result returned from an LWT request is always a single row result. It will always have
+a special column named ``[applied]`` prepended. How this value appears in your results depends
+on the row factory in use. See below for examples.
+
+The value of this ``[applied]`` column is a boolean value indicating whether or not the transaction was applied.
+If ``True``, it is the only column in the result. If ``False``, the additional columns depend on the LWT operation being
+executed:
+
+- When using an ``UPDATE ... 
IF "col" = ...`` clause, the result will contain the ``[applied]`` column, plus the existing columns + and values for any columns in the ``IF`` clause (and thus the value that caused the transaction to fail). + +- When using ``INSERT ... IF NOT EXISTS``, the result will contain the ``[applied]`` column, plus all columns and values + of the existing row that rejected the transaction. + +- ``UPDATE .. IF EXISTS`` never has additional columns, regardless of ``[applied]`` status. + +How the ``[applied]`` column manifests depends on the row factory in use. Considering the following (initially empty) table:: + + CREATE TABLE test.t ( + k int PRIMARY KEY, + v int, + x int + ) + +... the following sections show the expected result for a number of example statements, using the three base row factories. + +named_tuple_factory (default) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +The name ``[applied]`` is not a valid Python identifier, so the square brackets are actually removed +from the attribute for the resulting ``namedtuple``. 
The row always has a boolean column ``applied`` in position 0::
+
+    >>> session.execute("INSERT INTO t (k,v) VALUES (0,0) IF NOT EXISTS")
+    Row(applied=True)
+
+    >>> session.execute("INSERT INTO t (k,v) VALUES (0,0) IF NOT EXISTS")
+    Row(applied=False, k=0, v=0, x=None)
+
+    >>> session.execute("UPDATE t SET v = 1, x = 2 WHERE k = 0 IF v = 0")
+    Row(applied=True)
+
+    >>> session.execute("UPDATE t SET v = 1, x = 2 WHERE k = 0 IF v = 0 AND x = 1")
+    Row(applied=False, v=1, x=2)
+
+tuple_factory
+~~~~~~~~~~~~~
+This return type does not refer to names, but the boolean value ``applied`` is always present in position 0::
+
+    >>> session.execute("INSERT INTO t (k,v) VALUES (0,0) IF NOT EXISTS")
+    (True,)
+
+    >>> session.execute("INSERT INTO t (k,v) VALUES (0,0) IF NOT EXISTS")
+    (False, 0, 0, None)
+
+    >>> session.execute("UPDATE t SET v = 1, x = 2 WHERE k = 0 IF v = 0")
+    (True,)
+
+    >>> session.execute("UPDATE t SET v = 1, x = 2 WHERE k = 0 IF v = 0 AND x = 1")
+    (False, 1, 2)
+
+dict_factory
+~~~~~~~~~~~~
+The returned ``dict`` contains the ``[applied]`` key::
+
+    >>> session.execute("INSERT INTO t (k,v) VALUES (0,0) IF NOT EXISTS")
+    {u'[applied]': True}
+
+    >>> session.execute("INSERT INTO t (k,v) VALUES (0,0) IF NOT EXISTS")
+    {u'[applied]': False, u'k': 0, u'v': 0, u'x': None}
+
+    >>> session.execute("UPDATE t SET v = 1, x = 2 WHERE k = 0 IF v = 0")
+    {u'[applied]': True}
+
+    >>> session.execute("UPDATE t SET v = 1, x = 2 WHERE k = 0 IF v = 0 AND x = 1")
+    {u'[applied]': False, u'v': 1, u'x': 2}
+
+
diff --git a/docs/make.bat b/docs/make.bat
new file mode 100644
index 0000000..6be2277
--- /dev/null
+++ b/docs/make.bat
@@ -0,0 +1,190 @@
+@ECHO OFF
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set BUILDDIR=_build
+set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
+set I18NSPHINXOPTS=%SPHINXOPTS% .
+if NOT "%PAPER%" == "" ( + set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% + set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% +) + +if "%1" == "" goto help + +if "%1" == "help" ( + :help + echo.Please use `make ^` where ^ is one of + echo. html to make standalone HTML files + echo. dirhtml to make HTML files named index.html in directories + echo. singlehtml to make a single large HTML file + echo. pickle to make pickle files + echo. json to make JSON files + echo. htmlhelp to make HTML files and a HTML help project + echo. qthelp to make HTML files and a qthelp project + echo. devhelp to make HTML files and a Devhelp project + echo. epub to make an epub + echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter + echo. text to make text files + echo. man to make manual pages + echo. texinfo to make Texinfo files + echo. gettext to make PO message catalogs + echo. changes to make an overview over all changed/added/deprecated items + echo. linkcheck to check all external links for integrity + echo. doctest to run all doctests embedded in the documentation if enabled + goto end +) + +if "%1" == "clean" ( + for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i + del /q /s %BUILDDIR%\* + goto end +) + +if "%1" == "html" ( + %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The HTML pages are in %BUILDDIR%/html. + goto end +) + +if "%1" == "dirhtml" ( + %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. + goto end +) + +if "%1" == "singlehtml" ( + %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. + goto end +) + +if "%1" == "pickle" ( + %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle + if errorlevel 1 exit /b 1 + echo. 
+ echo.Build finished; now you can process the pickle files. + goto end +) + +if "%1" == "json" ( + %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json + if errorlevel 1 exit /b 1 + echo. + echo.Build finished; now you can process the JSON files. + goto end +) + +if "%1" == "htmlhelp" ( + %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp + if errorlevel 1 exit /b 1 + echo. + echo.Build finished; now you can run HTML Help Workshop with the ^ +.hhp project file in %BUILDDIR%/htmlhelp. + goto end +) + +if "%1" == "qthelp" ( + %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp + if errorlevel 1 exit /b 1 + echo. + echo.Build finished; now you can run "qcollectiongenerator" with the ^ +.qhcp project file in %BUILDDIR%/qthelp, like this: + echo.^> qcollectiongenerator %BUILDDIR%\qthelp\cqlengine.qhcp + echo.To view the help file: + echo.^> assistant -collectionFile %BUILDDIR%\qthelp\cqlengine.ghc + goto end +) + +if "%1" == "devhelp" ( + %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. + goto end +) + +if "%1" == "epub" ( + %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The epub file is in %BUILDDIR%/epub. + goto end +) + +if "%1" == "latex" ( + %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex + if errorlevel 1 exit /b 1 + echo. + echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. + goto end +) + +if "%1" == "text" ( + %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The text files are in %BUILDDIR%/text. + goto end +) + +if "%1" == "man" ( + %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The manual pages are in %BUILDDIR%/man. + goto end +) + +if "%1" == "texinfo" ( + %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo + if errorlevel 1 exit /b 1 + echo. 
+ echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. + goto end +) + +if "%1" == "gettext" ( + %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The message catalogs are in %BUILDDIR%/locale. + goto end +) + +if "%1" == "changes" ( + %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes + if errorlevel 1 exit /b 1 + echo. + echo.The overview file is in %BUILDDIR%/changes. + goto end +) + +if "%1" == "linkcheck" ( + %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck + if errorlevel 1 exit /b 1 + echo. + echo.Link check complete; look for any errors in the above output ^ +or in %BUILDDIR%/linkcheck/output.txt. + goto end +) + +if "%1" == "doctest" ( + %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest + if errorlevel 1 exit /b 1 + echo. + echo.Testing of doctests in the sources finished, look at the ^ +results in %BUILDDIR%/doctest/output.txt. + goto end +) + +:end diff --git a/docs/object_mapper.rst b/docs/object_mapper.rst new file mode 100644 index 0000000..50d3cbf --- /dev/null +++ b/docs/object_mapper.rst @@ -0,0 +1,105 @@ +Object Mapper +============= + +cqlengine is the Cassandra CQL 3 Object Mapper packaged with this driver + +:ref:`Jump to Getting Started ` + +Contents +-------- +:doc:`cqlengine/upgrade_guide` + For migrating projects from legacy cqlengine, to the integrated product + +:doc:`cqlengine/models` + Examples defining models, and mapping them to tables + +:doc:`cqlengine/queryset` + Overview of query sets and filtering + +:doc:`cqlengine/batches` + Working with batch mutations + +:doc:`cqlengine/connections` + Working with multiple sessions + +:ref:`API Documentation ` + Index of API documentation + +:doc:`cqlengine/third_party` + High-level examples in Celery and uWSGI + +:doc:`cqlengine/faq` + +.. 
toctree:: + :hidden: + + cqlengine/upgrade_guide + cqlengine/models + cqlengine/queryset + cqlengine/batches + cqlengine/connections + cqlengine/third_party + cqlengine/faq + +.. _getting-started: + +Getting Started +--------------- + +.. code-block:: python + + import uuid + from cassandra.cqlengine import columns + from cassandra.cqlengine import connection + from datetime import datetime + from cassandra.cqlengine.management import sync_table + from cassandra.cqlengine.models import Model + + #first, define a model + class ExampleModel(Model): + example_id = columns.UUID(primary_key=True, default=uuid.uuid4) + example_type = columns.Integer(index=True) + created_at = columns.DateTime() + description = columns.Text(required=False) + + #next, setup the connection to your cassandra server(s)... + # see http://datastax.github.io/python-driver/api/cassandra/cluster.html for options + # the list of hosts will be passed to create a Cluster() instance + connection.setup(['127.0.0.1'], "cqlengine", protocol_version=3) + + #...and create your CQL table + >>> sync_table(ExampleModel) + + #now we can create some rows: + >>> em1 = ExampleModel.create(example_type=0, description="example1", created_at=datetime.now()) + >>> em2 = ExampleModel.create(example_type=0, description="example2", created_at=datetime.now()) + >>> em3 = ExampleModel.create(example_type=0, description="example3", created_at=datetime.now()) + >>> em4 = ExampleModel.create(example_type=0, description="example4", created_at=datetime.now()) + >>> em5 = ExampleModel.create(example_type=1, description="example5", created_at=datetime.now()) + >>> em6 = ExampleModel.create(example_type=1, description="example6", created_at=datetime.now()) + >>> em7 = ExampleModel.create(example_type=1, description="example7", created_at=datetime.now()) + >>> em8 = ExampleModel.create(example_type=1, description="example8", created_at=datetime.now()) + + #and now we can run some queries against our table + >>> 
ExampleModel.objects.count()
+    8
+    >>> q = ExampleModel.objects(example_type=1)
+    >>> q.count()
+    4
+    >>> for instance in q:
+    ...     print(instance.description)
+    example5
+    example6
+    example7
+    example8
+
+    #here we are applying additional filtering to an existing query
+    #query objects are immutable, so calling filter returns a new
+    #query object
+    >>> q2 = q.filter(example_id=em5.example_id)
+
+    >>> q2.count()
+    1
+    >>> for instance in q2:
+    ...     print(instance.description)
+    example5
diff --git a/docs/performance.rst b/docs/performance.rst
new file mode 100644
index 0000000..f7a3f49
--- /dev/null
+++ b/docs/performance.rst
@@ -0,0 +1,45 @@
+Performance Notes
+=================
+The Python driver for Cassandra offers several methods for executing queries.
+You can synchronously block for queries to complete using
+:meth:`.Session.execute()`, you can obtain asynchronous request futures through
+:meth:`.Session.execute_async()`, and you can attach a callback to the future
+with :meth:`.ResponseFuture.add_callback()`.
+
+Examples of multiple request patterns can be found in the benchmark scripts included in the driver project.
+
+The choice of execution pattern will depend on the application context. For applications dealing with multiple
+requests in a given context, the recommended pattern is to use concurrent asynchronous
+requests with callbacks. For many use cases, you don't need to implement this pattern yourself.
+:meth:`cassandra.concurrent.execute_concurrent` and :meth:`cassandra.concurrent.execute_concurrent_with_args`
+provide this pattern with a synchronous API and tunable concurrency.
+
+Due to the GIL and limited concurrency, the driver can become CPU-bound pretty quickly. The sections below
+discuss further runtime and design considerations for mitigating this limitation.
+
+PyPy
+----
+`PyPy `_ is an alternative Python runtime which uses a JIT compiler to
+reduce CPU consumption.
This leads to a huge improvement in the driver performance, +more than doubling throughput for many workloads. + +Cython Extensions +----------------- +`Cython `_ is an optimizing compiler and language that can be used to compile the core files and +optional extensions for the driver. Cython is not a strict dependency, but the extensions will be built by default. + +See :doc:`installation` for details on controlling this build. + +multiprocessing +--------------- +All of the patterns discussed above may be used over multiple processes using the +`multiprocessing `_ +module. Multiple processes will scale better than multiple threads, so if high throughput is your goal, +consider this option. + +Be sure to **never share any** :class:`~.Cluster`, :class:`~.Session`, +**or** :class:`~.ResponseFuture` **objects across multiple processes**. These +objects should all be created after forking the process, not before. + +For further discussion and simple examples using the driver with ``multiprocessing``, +see `this blog post `_. diff --git a/docs/query_paging.rst b/docs/query_paging.rst new file mode 100644 index 0000000..0b97de4 --- /dev/null +++ b/docs/query_paging.rst @@ -0,0 +1,95 @@ +.. _query-paging: + +Paging Large Queries +==================== +Cassandra 2.0+ offers support for automatic query paging. Starting with +version 2.0 of the driver, if :attr:`~.Cluster.protocol_version` is greater than +:const:`2` (it is by default), queries returning large result sets will be +automatically paged. + +Controlling the Page Size +------------------------- +By default, :attr:`.Session.default_fetch_size` controls how many rows will +be fetched per page. This can be overridden per-query by setting +:attr:`~.fetch_size` on a :class:`~.Statement`. By default, each page +will contain at most 5000 rows. 
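To picture what the page size controls, here is a small driver-free sketch of how a result set is split into pages. The ``paginate`` helper is hypothetical, for illustration only; the driver performs this division server-side as pages are fetched.

```python
# Hypothetical helper, for illustration only -- not part of the driver.
# It mimics how a fetch_size of 10 divides result rows into pages.
def paginate(rows, fetch_size):
    """Yield successive pages of at most fetch_size rows."""
    for start in range(0, len(rows), fetch_size):
        yield rows[start:start + fetch_size]

rows = ["row%d" % i for i in range(23)]   # pretend these came from Cassandra
pages = list(paginate(rows, 10))
print(len(pages))        # 3 pages: two full pages of 10 rows, then 3 rows
print(len(pages[-1]))    # 3
```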
+
+Handling Paged Results
+----------------------
+Whenever the number of result rows for a query exceeds the page size, an
+instance of :class:`~.PagedResult` will be returned instead of a normal
+list. This class implements the iterator interface, so you can treat
+it like a normal iterator over rows::

+    from cassandra.query import SimpleStatement
+    query = "SELECT * FROM users"  # users contains 100 rows
+    statement = SimpleStatement(query, fetch_size=10)
+    for user_row in session.execute(statement):
+        process_user(user_row)
+
+Whenever there are no more rows in the current page, the next page will
+be fetched transparently. However, note that it *is* possible for
+an :class:`Exception` to be raised while fetching the next page, just
+like you might see on a normal call to ``session.execute()``.
+
+If you use :meth:`.Session.execute_async()` along with
+:meth:`.ResponseFuture.result()`, the first page will be fetched before
+:meth:`~.ResponseFuture.result()` returns, but later pages will be
+transparently fetched synchronously while iterating the result.
+
+Handling Paged Results with Callbacks
+-------------------------------------
+If callbacks are attached to a query that returns a paged result,
+the callback will be called once per page with a normal list of rows.
+
+Use :attr:`.ResponseFuture.has_more_pages` and
+:meth:`.ResponseFuture.start_fetching_next_page()` to continue fetching
+pages.
For example::
+
+    from threading import Event
+
+    class PagedResultHandler(object):
+
+        def __init__(self, future):
+            self.error = None
+            self.finished_event = Event()
+            self.future = future
+            self.future.add_callbacks(
+                callback=self.handle_page,
+                errback=self.handle_error)
+
+        def handle_page(self, rows):
+            for row in rows:
+                process_row(row)
+
+            if self.future.has_more_pages:
+                self.future.start_fetching_next_page()
+            else:
+                self.finished_event.set()
+
+        def handle_error(self, exc):
+            self.error = exc
+            self.finished_event.set()
+
+    future = session.execute_async("SELECT * FROM users")
+    handler = PagedResultHandler(future)
+    handler.finished_event.wait()
+    if handler.error:
+        raise handler.error
+
+Resume Paged Results
+--------------------
+
+You can resume the pagination when executing a new query by using the :attr:`.ResultSet.paging_state`. This can be useful if you want to provide some stateless pagination capabilities to your application (i.e. via HTTP). For example::
+
+    from cassandra.query import SimpleStatement
+    query = "SELECT * FROM users"
+    statement = SimpleStatement(query, fetch_size=10)
+    results = session.execute(statement)
+
+    # save the paging_state somewhere and return current results
+    session['paging_state'] = results.paging_state
+
+
+    # resume the pagination sometime later...
+    statement = SimpleStatement(query, fetch_size=10)
+    ps = session['paging_state']
+    results = session.execute(statement, paging_state=ps)
diff --git a/docs/security.rst b/docs/security.rst
new file mode 100644
index 0000000..0353091
--- /dev/null
+++ b/docs/security.rst
@@ -0,0 +1,278 @@
+.. _security:
+
+Security
+========
+The two main security components you will use with the
+Python driver are Authentication and SSL.
+
+Authentication
+--------------
+Versions 2.0 and higher of the driver support a SASL-based
+authentication mechanism when :attr:`~.Cluster.protocol_version`
+is set to 2 or higher.
To use this authentication, set
+:attr:`~.Cluster.auth_provider` to an instance of a subclass
+of :class:`~cassandra.auth.AuthProvider`. When working
+with Cassandra's ``PasswordAuthenticator``, you can use
+the :class:`~cassandra.auth.PlainTextAuthProvider` class.
+
+For example, suppose Cassandra is set up with its default
+'cassandra' user with a password of 'cassandra':
+
+.. code-block:: python
+
+    from cassandra.cluster import Cluster
+    from cassandra.auth import PlainTextAuthProvider
+
+    auth_provider = PlainTextAuthProvider(username='cassandra', password='cassandra')
+    cluster = Cluster(auth_provider=auth_provider, protocol_version=2)
+
+
+
+Custom Authenticators
+^^^^^^^^^^^^^^^^^^^^^
+If you're using something other than Cassandra's ``PasswordAuthenticator``,
+:class:`~.SaslAuthProvider` is provided for generic SASL authentication mechanisms,
+utilizing the ``pure-sasl`` package.
+If these do not suit your needs, you may need to create your own subclasses of
+:class:`~.AuthProvider` and :class:`~.Authenticator`. You can use the Sasl classes
+as example implementations.
+
+Protocol v1 Authentication
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+When working with Cassandra 1.2 (or a higher version with
+:attr:`~.Cluster.protocol_version` set to ``1``), you will not pass in
+an :class:`~.AuthProvider` instance. Instead, you should pass in a
+function that takes one argument, the IP address of a host, and returns
+a dict of credentials with a ``username`` and ``password`` key:
+
+.. code-block:: python
+
+    from cassandra.cluster import Cluster
+
+    def get_credentials(host_address):
+        return {'username': 'joe', 'password': '1234'}
+
+    cluster = Cluster(auth_provider=get_credentials, protocol_version=1)
+
+SSL
+---
+SSL should be used when client encryption is enabled in Cassandra.
+
+To give you as much control as possible over your SSL configuration, our SSL
+API takes a user-created `SSLContext` instance from the Python standard library.
+
+These docs will include some examples for how to achieve common configurations,
+but the `ssl.SSLContext` documentation gives a more complete description of
+what is possible.
+
+To enable SSL with version 3.17.0 and higher, set :attr:`.Cluster.ssl_context` to an
+``ssl.SSLContext`` instance. Optionally, you can also set :attr:`.Cluster.ssl_options`
+to a dict of options. These will be passed as kwargs to ``ssl.SSLContext.wrap_socket()``
+when new sockets are created.
+
+The following examples assume you have generated your Cassandra certificate and
+keystore files with these instructions:
+
+* `Setup SSL Cert `_
+
+It might also be useful to learn about the different levels of identity verification to understand the examples:
+
+* `Using SSL in DSE drivers `_
+
+SSL Configuration Examples
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Here, we'll describe the server and driver configuration necessary to set up SSL to meet various goals, such as the client verifying the server and the server verifying the client. We'll also include Python code demonstrating how to use servers and drivers configured in these ways.
+
+**No identity verification**
+
+No identity verification at all. Note that this is not recommended for production deployments.
+
+The Cassandra configuration::
+
+    client_encryption_options:
+      enabled: true
+      keystore: /path/to/127.0.0.1.keystore
+      keystore_password: myStorePass
+      require_client_auth: false
+
+The driver configuration:
+
+.. code-block:: python
+
+    from cassandra.cluster import Cluster, Session
+    from ssl import SSLContext, PROTOCOL_TLSv1
+
+    ssl_context = SSLContext(PROTOCOL_TLSv1)
+
+    cluster = Cluster(['127.0.0.1'], ssl_context=ssl_context)
+    session = cluster.connect()
+
+**Client verifies server**
+
+Ensure the Python driver verifies the identity of the server.
+ +The Cassandra configuration:: + + client_encryption_options: + enabled: true + keystore: /path/to/127.0.0.1.keystore + keystore_password: myStorePass + require_client_auth: false + +For the driver configuration, it's very important to set `ssl_context.verify_mode` +to `CERT_REQUIRED`. Otherwise, the loaded verify certificate will have no effect: + +.. code-block:: python + + from cassandra.cluster import Cluster, Session + from ssl import SSLContext, PROTOCOL_TLSv1, CERT_REQUIRED + + ssl_context = SSLContext(PROTOCOL_TLSv1) + ssl_context.load_verify_locations('/path/to/rootca.crt') + ssl_context.verify_mode = CERT_REQUIRED + + cluster = Cluster(['127.0.0.1'], ssl_context=ssl_context) + session = cluster.connect() + +Additionally, you can also force the driver to verify the `hostname` of the server by passing additional options to `ssl_context.wrap_socket` via the `ssl_options` kwarg: + +.. code-block:: python + + from cassandra.cluster import Cluster, Session + from ssl import SSLContext, PROTOCOL_TLSv1, CERT_REQUIRED + + ssl_context = SSLContext(PROTOCOL_TLSv1) + ssl_context.load_verify_locations('/path/to/rootca.crt') + ssl_context.verify_mode = CERT_REQUIRED + ssl_context.check_hostname = True + ssl_options = {'server_hostname': '127.0.0.1'} + + cluster = Cluster(['127.0.0.1'], ssl_context=ssl_context, ssl_options=ssl_options) + session = cluster.connect() + +**Server verifies client** + +If Cassandra is configured to verify clients (``require_client_auth``), you need to generate +SSL key and certificate files. + +The cassandra configuration:: + + client_encryption_options: + enabled: true + keystore: /path/to/127.0.0.1.keystore + keystore_password: myStorePass + require_client_auth: true + truststore: /path/to/dse-truststore.jks + truststore_password: myStorePass + +The Python ``ssl`` APIs require the certificate in PEM format. First, create a certificate +conf file: + +.. 
code-block:: bash
+
+    cat > gen_client_cert.conf <`_
+for more details about ``SSLContext`` configuration.
+
+Versions 3.16.0 and lower
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To enable SSL you will need to set :attr:`.Cluster.ssl_options` to a
+dict of options. These will be passed as kwargs to ``ssl.wrap_socket()``
+when new sockets are created. Note that this use of ssl_options will be
+deprecated in the next major release.
+
+By default, a ``ca_certs`` value should be supplied (the value should be
+a string pointing to the location of the CA certs file), and you probably
+want to specify ``ssl_version`` as ``ssl.PROTOCOL_TLSv1`` to match
+Cassandra's default protocol.
+
+For example:
+
+.. code-block:: python
+
+    from cassandra.cluster import Cluster
+    from ssl import PROTOCOL_TLSv1, CERT_REQUIRED
+
+    ssl_opts = {
+        'ca_certs': '/path/to/my/ca.certs',
+        'ssl_version': PROTOCOL_TLSv1,
+        'cert_reqs': CERT_REQUIRED  # Certificates are required and validated
+    }
+    cluster = Cluster(ssl_options=ssl_opts)
+
+This is only an example to show how to pass the ssl parameters. Consider reading
+the `python ssl documentation `_ for
+your configuration. For further reading, Andrew Mussey has published a thorough guide on
+`Using SSL with the DataStax Python driver `_.
+
+SSL with Twisted
+++++++++++++++++
+
+If the Twisted event loop is used, pyOpenSSL must be installed or an exception will be
+raised. Also, to set ``ssl_version`` and ``cert_reqs`` in ``ssl_opts``, the appropriate
+constants from pyOpenSSL are expected.
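As a side note for readers on Python 3.6 and later, the standard library also offers ``ssl.PROTOCOL_TLS_CLIENT``, which enables certificate and hostname verification by default. This is a minimal sketch (stdlib only; the certificate path is a placeholder) of building a context to pass as ``Cluster(ssl_context=...)``:

```python
import ssl

# Sketch only: a verifying SSLContext to hand to Cluster(ssl_context=...).
# PROTOCOL_TLS_CLIENT (Python 3.6+) implies CERT_REQUIRED and check_hostname,
# so the explicit settings shown in the examples above become defaults.
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(ssl_context.verify_mode == ssl.CERT_REQUIRED)   # True
print(ssl_context.check_hostname)                     # True

# ssl_context.load_verify_locations('/path/to/rootca.crt')  # placeholder path
```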
diff --git a/docs/themes/custom/static/custom.css_t b/docs/themes/custom/static/custom.css_t new file mode 100644 index 0000000..c3460e7 --- /dev/null +++ b/docs/themes/custom/static/custom.css_t @@ -0,0 +1,26 @@ +@import url("alabaster.css"); + +div.document { + width: 1200px; +} + +div.sphinxsidebar h1.logo a { + font-size: 24px; +} + +code.descname { + color: #4885ed; +} + +th.field-name { + min-width: 100px; + color: #3cba54; +} + +div.versionmodified { + font-weight: bold +} + +div.versionadded { + font-weight: bold +} diff --git a/docs/themes/custom/theme.conf b/docs/themes/custom/theme.conf new file mode 100644 index 0000000..b0fbb69 --- /dev/null +++ b/docs/themes/custom/theme.conf @@ -0,0 +1,11 @@ +[theme] +inherit = alabaster +stylesheet = custom.css +pygments_style = friendly + +[options] +description = Python driver for Cassandra +github_user = datastax +github_repo = python-driver +github_button = true +github_type = star \ No newline at end of file diff --git a/docs/upgrading.rst b/docs/upgrading.rst new file mode 100644 index 0000000..9ab8eb3 --- /dev/null +++ b/docs/upgrading.rst @@ -0,0 +1,309 @@ +Upgrading +========= + +.. toctree:: + :maxdepth: 1 + +Upgrading to 3.0 +---------------- +Version 3.0 of the DataStax Python driver for Apache Cassandra +adds support for Cassandra 3.0 while maintaining support for +previously supported versions. In addition to substantial internal rework, +there are several updates to the API that integrators will need +to consider: + +Default consistency is now ``LOCAL_ONE`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Previous value was ``ONE``. The new value is introduced to mesh with the default +DC-aware load balancing policy and to match other drivers. + +Execution API Updates +^^^^^^^^^^^^^^^^^^^^^ +Result return normalization +~~~~~~~~~~~~~~~~~~~~~~~~~~~ +`PYTHON-368 `_ + +Previously results would be returned as a ``list`` of rows for result rows +up to ``fetch_size``, and ``PagedResult`` afterward. 
This could break
+application code that assumed one type and got another.
+
+Now, all results are returned as an iterable :class:`~.ResultSet`.
+
+The preferred way to consume results of unknown size is to iterate through
+them, letting automatic paging occur as they are consumed.
+
+.. code-block:: python
+
+    results = session.execute("SELECT * FROM system.local")
+    for row in results:
+        process(row)
+
+If the expected size of the results is known, it is still possible to
+materialize a list using the iterator:
+
+.. code-block:: python
+
+    results = session.execute("SELECT * FROM system.local")
+    row_list = list(results)
+
+For backward compatibility, :class:`~.ResultSet` supports indexing. When
+accessed at an index, a :class:`~.ResultSet` object will materialize all its pages:
+
+.. code-block:: python
+
+    results = session.execute("SELECT * FROM system.local")
+    first_result = results[0]  # materializes results, fetching all pages
+
+This can send requests and load (possibly large) results into memory, so
+:class:`~.ResultSet` will log a warning on implicit materialization.
+
+Trace information is not attached to executed Statements
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+`PYTHON-318 `_
+
+Previously trace data was attached to Statements if tracing was enabled. This
+could lead to confusion if the same statement was used for multiple executions.
+
+Now, trace data is associated with the ``ResponseFuture`` and ``ResultSet``
+returned for each query:
+
+:meth:`.ResponseFuture.get_query_trace()`
+
+:meth:`.ResponseFuture.get_all_query_traces()`
+
+:meth:`.ResultSet.get_query_trace()`
+
+:meth:`.ResultSet.get_all_query_traces()`
+
+Binding named parameters now ignores extra names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+`PYTHON-178 `_
+
+Previously, :meth:`.BoundStatement.bind()` would raise if a mapping
+was passed with extra names not found in the prepared statement.
+
+Behavior in 3.0+ is to ignore extra names.
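The difference between lazy iteration and index-driven materialization described above can be pictured with a toy stand-in. ``ToyResultSet`` is hypothetical and far simpler than the real :class:`~.ResultSet`; it only models the consume-lazily vs. fetch-everything distinction.

```python
# Hypothetical toy model of the 3.0 ResultSet behavior: iterating consumes
# pages lazily, while indexing forces all pages into memory first.
class ToyResultSet(object):
    def __init__(self, pages):
        self._pages = iter(pages)      # pretend each page is one server fetch
        self._materialized = None

    def __iter__(self):
        if self._materialized is not None:
            return iter(self._materialized)
        return (row for page in self._pages for row in page)

    def __getitem__(self, index):
        if self._materialized is None:
            # like ResultSet, indexing fetches and stores every page
            self._materialized = [row for page in self._pages for row in page]
        return self._materialized[index]

results = ToyResultSet([[1, 2], [3, 4], [5]])
print(results[0])         # 1 -- but this pulled all three pages into memory
print(list(results))      # [1, 2, 3, 4, 5]
```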
+
+blist removed as soft dependency
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+`PYTHON-385 `_
+
+Previously the driver had a soft dependency on the ``blist`` ``sortedset`` type, using
+that where available and an internal fallback otherwise.
+
+Now, the driver never chooses the ``blist`` variant, instead returning the
+internal :class:`.util.SortedSet` for all ``set`` results. The class implements
+all standard set operations, so no integration code should need to change unless
+it explicitly checks for the ``sortedset`` type.
+
+Metadata API Updates
+^^^^^^^^^^^^^^^^^^^^
+`PYTHON-276 `_, `PYTHON-408 `_, `PYTHON-400 `_, `PYTHON-422 `_
+
+Cassandra 3.0 brought a substantial overhaul to the internal schema metadata representation.
+This version of the driver supports that metadata in addition to the legacy version. Doing so
+also brought some changes to the metadata model.
+
+The present API is documented: :any:`cassandra.metadata`. Changes highlighted below:
+
+* All types are now exposed as CQL types instead of types derived from the internal server implementation
+* Some metadata attributes have changed names to match current nomenclature (for example, :attr:`.Index.kind` in place of ``Index.type``).
+* Some metadata attributes removed
+
+  * ``TableMetadata.keyspace`` reference replaced with :attr:`.TableMetadata.keyspace_name`
+  * ``ColumnMetadata.index`` is removed; table- and keyspace-level index mappings are still maintained
+
+Several deprecated features are removed
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+`PYTHON-292 `_
+
+* ``ResponseFuture.result`` timeout parameter is removed, use ``Session.execute`` timeout instead (`031ebb0 `_)
+* ``Cluster.refresh_schema`` removed, use ``Cluster.refresh_*_metadata`` instead (`419fcdf `_)
+* ``Cluster.submit_schema_refresh`` removed (`574266d `_)
+* ``cqltypes`` time/date functions removed, use ``util`` entry points instead (`bb984ee `_)
+* ``decoder`` module removed (`e16a073 `_)
+* ``TableMetadata.keyspace`` attribute replaced with ``keyspace_name`` (`cc94073 `_)
+* ``cqlengine.columns.TimeUUID.from_datetime`` removed, use ``util`` variant instead (`96489cc `_)
+* ``cqlengine.columns.Float(double_precision)`` parameter removed, use ``columns.Double`` instead (`a2d3a98 `_)
+* ``cqlengine`` keyspace management functions are removed in favor of the strategy-specific entry points (`4bd5909 `_)
+* ``cqlengine.Model.__polymorphic_*__`` attributes removed, use ``__discriminator*`` attributes instead (`9d98c8e `_)
+* ``cqlengine.statements`` will no longer warn about list prepend behavior (`79efe97 `_)
+
+
+Upgrading to 2.1 from 2.0
+-------------------------
+Version 2.1 of the DataStax Python driver for Apache Cassandra
+adds support for Cassandra 2.1 and version 3 of the native protocol.
+
+Cassandra 1.2, 2.0, and 2.1 are all supported. However, 1.2 only
+supports protocol version 1, and 2.0 only supports versions 1 and
+2, so some features may not be available.
+
+Using the v3 Native Protocol
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+By default, the driver will attempt to use version 2 of the
+native protocol. To use version 3, you must explicitly
+set the :attr:`~.Cluster.protocol_version`:
+
+.. 
code-block:: python
+
+    from cassandra.cluster import Cluster
+
+    cluster = Cluster(protocol_version=3)
+
+Note that protocol version 3 is only supported by Cassandra 2.1+.
+
+In future releases, the driver may default to using protocol version
+3.
+
+Working with User-Defined Types
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Cassandra 2.1 introduced the ability to define new types::
+
+    USE KEYSPACE mykeyspace;
+
+    CREATE TYPE address (street text, city text, zip int);
+
+The driver generally expects you to use instances of a specific
+class to represent column values of this type. You can let the
+driver know what class to use with :meth:`.Cluster.register_user_type`:
+
+.. code-block:: python
+
+    cluster = Cluster()
+
+    class Address(object):
+
+        def __init__(self, street, city, zipcode):
+            self.street = street
+            self.city = city
+            self.zipcode = zipcode
+
+    cluster.register_user_type('mykeyspace', 'address', Address)
+
+When inserting data for ``address`` columns, you should pass in
+instances of ``Address``. When querying data, ``address`` column
+values will be instances of ``Address``.
+
+If no class is registered for a user-defined type, query results
+will use a ``namedtuple`` class and data may only be inserted
+through prepared statements.
+
+See :ref:`udts` for more details.
+
+Customizing Encoders for Non-prepared Statements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Starting with version 2.1 of the driver, it is possible to customize
+how Python types are converted to CQL literals when working with
+non-prepared statements. This is done on a per-:class:`~.Session`
+basis through :attr:`.Session.encoder`:
+
+.. code-block:: python
+
+    cluster = Cluster()
+    session = cluster.connect()
+    session.encoder.mapping[tuple] = session.encoder.cql_encode_tuple
+
+See :ref:`type-conversions` for the table of default CQL literal conversions.
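To make the mapping idea concrete, here is a simplified model of type-to-literal encoding using plain functions and a dict. This is not the driver's actual ``Encoder`` class, which handles many more types; it only illustrates the shape of ``encoder.mapping``.

```python
# Simplified model of encoder.mapping -- NOT the driver's actual Encoder.
# Each entry maps a Python type to a function that renders a CQL literal.
def cql_encode_str(value):
    return "'%s'" % value.replace("'", "''")   # quote and escape for CQL

def cql_encode_tuple(value):
    return "(%s)" % ", ".join(encode(v) for v in value)

mapping = {str: cql_encode_str, int: str, tuple: cql_encode_tuple}

def encode(value):
    return mapping[type(value)](value)

print(encode(("O'Brien", 42)))   # ('O''Brien', 42)
```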
+ +Using Client-Side Protocol-Level Timestamps +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +With version 3 of the native protocol, timestamps may be supplied by the +client at the protocol level. (Normally, if they are not specified within +the CQL query itself, a timestamp is generated server-side.) + +When :attr:`~.Cluster.protocol_version` is set to 3 or higher, the driver +will automatically use client-side timestamps with microsecond precision +unless :attr:`.Session.use_client_timestamp` is changed to :const:`False`. +If a timestamp is specified within the CQL query, it will override the +timestamp generated by the driver. + +Upgrading to 2.0 from 1.x +------------------------- +Version 2.0 of the DataStax Python driver for Apache Cassandra +includes some notable improvements over version 1.x. This version +of the driver supports Cassandra 1.2, 2.0, and 2.1. However, not +all features may be used with Cassandra 1.2, and some new features +in 2.1 are not yet supported. + +Using the v2 Native Protocol +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +By default, the driver will attempt to use version 2 of Cassandra's +native protocol. You can explicitly set the protocol version to +2, though: + +.. code-block:: python + + from cassandra.cluster import Cluster + + cluster = Cluster(protocol_version=2) + +When working with Cassandra 1.2, you will need to +explicitly set the :attr:`~.Cluster.protocol_version` to 1: + +.. code-block:: python + + from cassandra.cluster import Cluster + + cluster = Cluster(protocol_version=1) + +Automatic Query Paging +^^^^^^^^^^^^^^^^^^^^^^ +Version 2 of the native protocol adds support for automatic query +paging, which can make dealing with large result sets much simpler. + +See :ref:`query-paging` for full details. + +Protocol-Level Batch Statements +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +With version 1 of the native protocol, batching of statements required +using a `BATCH cql query `_. 
+With version 2 of the native protocol, you can now batch statements at +the protocol level. This allows you to use many different prepared +statements within a single batch. + +See :class:`~.query.BatchStatement` for details and usage examples. + +SASL-based Authentication +^^^^^^^^^^^^^^^^^^^^^^^^^ +Also new in version 2 of the native protocol is SASL-based authentication. +See the section on :ref:`security` for details and examples. + +Lightweight Transactions +^^^^^^^^^^^^^^^^^^^^^^^^ +`Lightweight transactions `_ are another new feature. To use lightweight transactions, add ``IF`` clauses +to your CQL queries and set the :attr:`~.Statement.serial_consistency_level` +on your statements. + +Calling Cluster.shutdown() +^^^^^^^^^^^^^^^^^^^^^^^^^^ +In order to fix some issues around garbage collection and unclean interpreter +shutdowns, version 2.0 of the driver requires you to call :meth:`.Cluster.shutdown()` +on your :class:`~.Cluster` objects when you are through with them. +This helps to guarantee a clean shutdown. + +Deprecations +^^^^^^^^^^^^ +The following functions have moved from ``cassandra.decoder`` to ``cassandra.query``. 
+The original functions have been left in place with a :exc:`DeprecationWarning` for +now: + +* :attr:`cassandra.decoder.tuple_factory` has moved to + :attr:`cassandra.query.tuple_factory` +* :attr:`cassandra.decoder.named_tuple_factory` has moved to + :attr:`cassandra.query.named_tuple_factory` +* :attr:`cassandra.decoder.dict_factory` has moved to + :attr:`cassandra.query.dict_factory` +* :attr:`cassandra.decoder.ordered_dict_factory` has moved to + :attr:`cassandra.query.ordered_dict_factory` + +Dependency Changes +^^^^^^^^^^^^^^^^^^ +The following dependencies have officially been made optional: + +* ``scales`` +* ``blist`` + +And one new dependency has been added (to enable Python 3 support): + +* ``six`` diff --git a/docs/user_defined_types.rst b/docs/user_defined_types.rst new file mode 100644 index 0000000..fd95b09 --- /dev/null +++ b/docs/user_defined_types.rst @@ -0,0 +1,92 @@ +.. _udts: + +User Defined Types +================== +Cassandra 2.1 introduced user-defined types (UDTs). You can create a +new type through ``CREATE TYPE`` statements in CQL:: + + CREATE TYPE address (street text, zip int); + +Version 2.1 of the Python driver adds support for user-defined types. + +Registering a Class to Map to a UDT +----------------------------------- +You can tell the Python driver to return columns of a specific UDT as +instances of a class by registering them with your :class:`~.Cluster` +instance through :meth:`.Cluster.register_user_type`: + +.. code-block:: python + + cluster = Cluster(protocol_version=3) + session = cluster.connect() + session.set_keyspace('mykeyspace') + session.execute("CREATE TYPE address (street text, zipcode int)") + session.execute("CREATE TABLE users (id int PRIMARY KEY, location frozen
)")
+
+    # create a class to map to the "address" UDT
+    class Address(object):
+
+        def __init__(self, street, zipcode):
+            self.street = street
+            self.zipcode = zipcode
+
+    cluster.register_user_type('mykeyspace', 'address', Address)
+
+    # insert a row using an instance of Address
+    session.execute("INSERT INTO users (id, location) VALUES (%s, %s)",
+                    (0, Address("123 Main St.", 78723)))
+
+    # results will include Address instances
+    results = session.execute("SELECT * FROM users")
+    row = results[0]
+    print row.id, row.location.street, row.location.zipcode
+
+Using UDTs Without Registering Them
+-----------------------------------
+Although it is recommended to register your types with
+:meth:`.Cluster.register_user_type`, the driver gives you some options
+for working with unregistered UDTs.
+
+When you use prepared statements, the driver knows what data types to
+expect for each placeholder. This allows you to pass any object you
+want for a UDT, as long as it has attributes that match the field names
+for the UDT:
+
+.. code-block:: python
+
+    cluster = Cluster(protocol_version=3)
+    session = cluster.connect()
+    session.set_keyspace('mykeyspace')
+    session.execute("CREATE TYPE address (street text, zipcode int)")
+    session.execute("CREATE TABLE users (id int PRIMARY KEY, location frozen
)")
+
+    class Foo(object):
+
+        def __init__(self, street, zipcode, otherstuff):
+            self.street = street
+            self.zipcode = zipcode
+            self.otherstuff = otherstuff
+
+    insert_statement = session.prepare("INSERT INTO users (id, location) VALUES (?, ?)")
+
+    # since we're using a prepared statement, we don't *have* to register
+    # a class to map to the UDT to insert data. The object just needs to have
+    # "street" and "zipcode" attributes (which Foo does):
+    session.execute(insert_statement, [0, Foo("123 Main St.", 78723, "some other stuff")])
+
+    # when we query data, UDT columns that don't have a class registered
+    # will be returned as namedtuples:
+    results = session.execute("SELECT * FROM users")
+    first_row = results[0]
+    address = first_row.location
+    print address  # prints "Address(street='123 Main St.', zipcode=78723)"
+    street = address.street
+    zipcode = address.zipcode
+
+As shown in the code example, inserting data for UDT columns without registering
+a class works fine for prepared statements. However, **you must register a
+class to insert UDT columns with unprepared statements**.\* You can still query
+UDT columns without registered classes using unprepared statements; they will
+simply return ``namedtuple`` instances (just like prepared statements do).
+
+\* this applies to *parameterized* unprepared statements, in which the driver will be formatting parameters -- not statements with interpolated UDT literals.
diff --git a/doxyfile b/doxyfile
new file mode 100644
index 0000000..d453557
--- /dev/null
+++ b/doxyfile
@@ -0,0 +1,2339 @@
+# Doxyfile 1.8.8
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project.
+#
+# All text after a double hash (##) is considered a comment and is placed in
+# front of the TAG it is preceding.
+#
+# All text after a single hash (#) is considered a comment and will be ignored.
+# The format is:
+# TAG = value [value, ...]
+# For lists, items can also be appended using: +# TAG += value [value, ...] +# Values that contain spaces should be placed between quotes (\" \"). + +#--------------------------------------------------------------------------- +# Project related configuration options +#--------------------------------------------------------------------------- + +# This tag specifies the encoding used for all characters in the config file +# that follow. The default is UTF-8 which is also the encoding used for all text +# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv +# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv +# for the list of possible encodings. +# The default value is: UTF-8. + +DOXYFILE_ENCODING = UTF-8 + +# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by +# double-quotes, unless you are using Doxywizard) that should identify the +# project for which the documentation is generated. This name is used in the +# title of most generated pages and in a few other places. +# The default value is: My Project. + +PROJECT_NAME = "Python Driver" + +# The PROJECT_NUMBER tag can be used to enter a project or revision number. This +# could be handy for archiving the generated documentation or if some version +# control system is used. + +PROJECT_NUMBER = + +# Using the PROJECT_BRIEF tag one can provide an optional one line description +# for a project that appears at the top of each page and should give viewer a +# quick idea about the purpose of the project. Keep the description short. + +PROJECT_BRIEF = + +# With the PROJECT_LOGO tag one can specify an logo or icon that is included in +# the documentation. The maximum height of the logo should not exceed 55 pixels +# and the maximum width should not exceed 200 pixels. Doxygen will copy the logo +# to the output directory. 
+ +PROJECT_LOGO = + +# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path +# into which the generated documentation will be written. If a relative path is +# entered, it will be relative to the location where doxygen was started. If +# left blank the current directory will be used. + +OUTPUT_DIRECTORY = + +# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 4096 sub- +# directories (in 2 levels) under the output directory of each output format and +# will distribute the generated files over these directories. Enabling this +# option can be useful when feeding doxygen a huge amount of source files, where +# putting all generated files in the same directory would otherwise causes +# performance problems for the file system. +# The default value is: NO. + +CREATE_SUBDIRS = NO + +# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII +# characters to appear in the names of generated files. If set to NO, non-ASCII +# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode +# U+3044. +# The default value is: NO. + +ALLOW_UNICODE_NAMES = NO + +# The OUTPUT_LANGUAGE tag is used to specify the language in which all +# documentation generated by doxygen is written. Doxygen will use this +# information to generate all constant output in the proper language. +# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, +# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), +# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, +# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), +# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, +# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, +# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, +# Ukrainian and Vietnamese. +# The default value is: English. 
+ +OUTPUT_LANGUAGE = English + +# If the BRIEF_MEMBER_DESC tag is set to YES doxygen will include brief member +# descriptions after the members that are listed in the file and class +# documentation (similar to Javadoc). Set to NO to disable this. +# The default value is: YES. + +BRIEF_MEMBER_DESC = NO + +# If the REPEAT_BRIEF tag is set to YES doxygen will prepend the brief +# description of a member or function before the detailed description. +# +# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the +# brief descriptions will be completely suppressed. +# The default value is: YES. + +REPEAT_BRIEF = YES + +# This tag implements a quasi-intelligent brief description abbreviator that is +# used to form the text in various listings. Each string in this list, if found +# as the leading text of the brief description, will be stripped from the text +# and the result, after processing the whole list, is used as the annotated +# text. Otherwise, the brief description is used as-is. If left blank, the +# following values are used ($name is automatically replaced with the name of +# the entity): The $name class, The $name widget, The $name file, is, provides, +# specifies, contains, represents, a, an and the. + +ABBREVIATE_BRIEF = + +# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then +# doxygen will generate a detailed section even if there is only a brief +# description. +# The default value is: NO. + +ALWAYS_DETAILED_SEC = NO + +# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all +# inherited members of a class in the documentation of that class as if those +# members were ordinary class members. Constructors, destructors and assignment +# operators of the base classes will not be shown. +# The default value is: NO. + +INLINE_INHERITED_MEMB = NO + +# If the FULL_PATH_NAMES tag is set to YES doxygen will prepend the full path +# before file names in the file list and in the header files. 
If set to NO the +# shortest path that makes the file name unique will be used. +# The default value is: YES. + +FULL_PATH_NAMES = NO + +# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path. +# Stripping is only done if one of the specified strings matches the left-hand +# part of the path. The tag can be used to show relative paths in the file list. +# If left blank the directory from which doxygen is run is used as the path to +# strip. +# +# Note that you can specify absolute paths here, but also relative paths, which +# will be relative from the directory where doxygen is started. +# This tag requires that the tag FULL_PATH_NAMES is set to YES. + +STRIP_FROM_PATH = + +# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the +# path mentioned in the documentation of a class, which tells the reader which +# header file to include in order to use a class. If left blank only the name of +# the header file containing the class definition is used. Otherwise one should +# specify the list of include paths that are normally passed to the compiler +# using the -I flag. + +STRIP_FROM_INC_PATH = + +# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but +# less readable) file names. This can be useful if your file system doesn't +# support long names like on DOS, Mac, or CD-ROM. +# The default value is: NO. + +SHORT_NAMES = NO + +# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the +# first line (until the first dot) of a Javadoc-style comment as the brief +# description. If set to NO, the Javadoc-style will behave just like regular Qt- +# style comments (thus requiring an explicit @brief command for a brief +# description.) +# The default value is: NO. + +JAVADOC_AUTOBRIEF = NO + +# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first +# line (until the first dot) of a Qt-style comment as the brief description. 
If +# set to NO, the Qt-style will behave just like regular Qt-style comments (thus +# requiring an explicit \brief command for a brief description.) +# The default value is: NO. + +QT_AUTOBRIEF = NO + +# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a +# multi-line C++ special comment block (i.e. a block of //! or /// comments) as +# a brief description. This used to be the default behavior. The new default is +# to treat a multi-line C++ comment block as a detailed description. Set this +# tag to YES if you prefer the old behavior instead. +# +# Note that setting this tag to YES also means that rational rose comments are +# not recognized any more. +# The default value is: NO. + +MULTILINE_CPP_IS_BRIEF = NO + +# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the +# documentation from any documented member that it re-implements. +# The default value is: YES. + +INHERIT_DOCS = YES + +# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce a +# new page for each member. If set to NO, the documentation of a member will be +# part of the file/class/namespace that contains it. +# The default value is: NO. + +SEPARATE_MEMBER_PAGES = NO + +# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen +# uses this value to replace tabs by spaces in code fragments. +# Minimum value: 1, maximum value: 16, default value: 4. + +TAB_SIZE = 4 + +# This tag can be used to specify a number of aliases that act as commands in +# the documentation. An alias has the form: +# name=value +# For example adding +# "sideeffect=@par Side Effects:\n" +# will allow you to put the command \sideeffect (or @sideeffect) in the +# documentation, which will result in a user-defined paragraph with heading +# "Side Effects:". You can put \n's in the value part of an alias to insert +# newlines. 
+ +ALIASES = "test_assumptions=\par Test Assumptions\n" \ + "note=\par Note\n" \ + "test_category=\par Test Category\n" \ + "jira_ticket=\par JIRA Ticket\n" \ + "expected_result=\par Expected Result\n" \ + "since=\par Since\n" \ + "param=\par Parameters\n" \ + "return=\par Return\n" \ + "expected_errors=\par Expected Errors\n" + +# This tag can be used to specify a number of word-keyword mappings (TCL only). +# A mapping has the form "name=value". For example adding "class=itcl::class" +# will allow you to use the command class in the itcl::class meaning. + +TCL_SUBST = + +# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources +# only. Doxygen will then generate output that is more tailored for C. For +# instance, some of the names that are used will be different. The list of all +# members will be omitted, etc. +# The default value is: NO. + +OPTIMIZE_OUTPUT_FOR_C = NO + +# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or +# Python sources only. Doxygen will then generate output that is more tailored +# for that language. For instance, namespaces will be presented as packages, +# qualified scopes will look different, etc. +# The default value is: NO. + +OPTIMIZE_OUTPUT_JAVA = YES + +# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran +# sources. Doxygen will then generate output that is tailored for Fortran. +# The default value is: NO. + +OPTIMIZE_FOR_FORTRAN = NO + +# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL +# sources. Doxygen will then generate output that is tailored for VHDL. +# The default value is: NO. + +OPTIMIZE_OUTPUT_VHDL = NO + +# Doxygen selects the parser to use depending on the extension of the files it +# parses. With this tag you can assign which parser to use for a given +# extension. Doxygen has a built-in mapping, but you can override or extend it +# using this tag. 
The format is ext=language, where ext is a file extension, and +# language is one of the parsers supported by doxygen: IDL, Java, Javascript, +# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran: +# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran: +# Fortran. In the latter case the parser tries to guess whether the code is fixed +# or free formatted code, this is the default for Fortran type files), VHDL. For +# instance to make doxygen treat .inc files as Fortran files (default is PHP), +# and .f files as C (default is Fortran), use: inc=Fortran f=C. +# +# Note: For files without extension you can use no_extension as a placeholder. +# +# Note that for custom extensions you also need to set FILE_PATTERNS otherwise +# the files are not read by doxygen. + +EXTENSION_MAPPING = + +# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments +# according to the Markdown format, which allows for more readable +# documentation. See http://daringfireball.net/projects/markdown/ for details. +# The output of markdown processing is further processed by doxygen, so you can +# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in +# case of backward compatibility issues. +# The default value is: YES. + +MARKDOWN_SUPPORT = YES + +# When enabled doxygen tries to link words that correspond to documented +# classes, or namespaces to their corresponding documentation. Such a link can +# be prevented in individual cases by putting a % sign in front of the word +# or globally by setting AUTOLINK_SUPPORT to NO. +# The default value is: YES. + +AUTOLINK_SUPPORT = YES + +# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want +# to include (a tag file for) the STL sources as input, then you should set this +# tag to YES in order to let doxygen match function declarations and +# definitions whose arguments contain STL classes (e.g. 
func(std::string); +# versus func(std::string) {}). This also makes the inheritance and collaboration +# diagrams that involve STL classes more complete and accurate. +# The default value is: NO. + +BUILTIN_STL_SUPPORT = NO + +# If you use Microsoft's C++/CLI language, you should set this option to YES to +# enable parsing support. +# The default value is: NO. + +CPP_CLI_SUPPORT = NO + +# Set the SIP_SUPPORT tag to YES if your project consists of sip (see: +# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen +# will parse them like normal C++ but will assume all classes use public instead +# of private inheritance when no explicit protection keyword is present. +# The default value is: NO. + +SIP_SUPPORT = NO + +# For Microsoft's IDL there are propget and propput attributes to indicate +# getter and setter methods for a property. Setting this option to YES will make +# doxygen replace the get and set methods by a property in the documentation. +# This will only work if the methods are indeed getting or setting a simple +# type. If this is not the case, or you want to show the methods anyway, you +# should set this option to NO. +# The default value is: YES. + +IDL_PROPERTY_SUPPORT = YES + +# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC +# tag is set to YES, then doxygen will reuse the documentation of the first +# member in the group (if any) for the other members of the group. By default +# all members of a group must be documented explicitly. +# The default value is: NO. + +DISTRIBUTE_GROUP_DOC = NO + +# Set the SUBGROUPING tag to YES to allow class member groups of the same type +# (for instance a group of public functions) to be put as a subgroup of that +# type (e.g. under the Public Functions section). Set it to NO to prevent +# subgrouping. Alternatively, this can be done per class using the +# \nosubgrouping command. +# The default value is: YES. 
+ +SUBGROUPING = YES + +# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions +# are shown inside the group in which they are included (e.g. using \ingroup) +# instead of on a separate page (for HTML and Man pages) or section (for LaTeX +# and RTF). +# +# Note that this feature does not work in combination with +# SEPARATE_MEMBER_PAGES. +# The default value is: NO. + +INLINE_GROUPED_CLASSES = NO + +# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions +# with only public data fields or simple typedef fields will be shown inline in +# the documentation of the scope in which they are defined (i.e. file, +# namespace, or group documentation), provided this scope is documented. If set +# to NO, structs, classes, and unions are shown on a separate page (for HTML and +# Man pages) or section (for LaTeX and RTF). +# The default value is: NO. + +INLINE_SIMPLE_STRUCTS = NO + +# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or +# enum is documented as struct, union, or enum with the name of the typedef. So +# typedef struct TypeS {} TypeT, will appear in the documentation as a struct +# with name TypeT. When disabled the typedef will appear as a member of a file, +# namespace, or class. And the struct will be named TypeS. This can typically be +# useful for C code in case the coding convention dictates that all compound +# types are typedef'ed and only the typedef is referenced, never the tag name. +# The default value is: NO. + +TYPEDEF_HIDES_STRUCT = NO + +# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This +# cache is used to resolve symbols given their name and scope. Since this can be +# an expensive process and often the same symbol appears multiple times in the +# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small +# doxygen will become slower. If the cache is too large, memory is wasted. 
The +# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range +# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536 +# symbols. At the end of a run doxygen will report the cache usage and suggest +# the optimal cache size from a speed point of view. +# Minimum value: 0, maximum value: 9, default value: 0. + +LOOKUP_CACHE_SIZE = 0 + +#--------------------------------------------------------------------------- +# Build related configuration options +#--------------------------------------------------------------------------- + +# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in +# documentation are documented, even if no documentation was available. Private +# class members and static file members will be hidden unless the +# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES. +# Note: This will also disable the warnings about undocumented members that are +# normally produced when WARNINGS is set to YES. +# The default value is: NO. + +EXTRACT_ALL = NO + +# If the EXTRACT_PRIVATE tag is set to YES all private members of a class will +# be included in the documentation. +# The default value is: NO. + +EXTRACT_PRIVATE = NO + +# If the EXTRACT_PACKAGE tag is set to YES all members with package or internal +# scope will be included in the documentation. +# The default value is: NO. + +EXTRACT_PACKAGE = NO + +# If the EXTRACT_STATIC tag is set to YES all static members of a file will be +# included in the documentation. +# The default value is: NO. + +EXTRACT_STATIC = NO + +# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) defined +# locally in source files will be included in the documentation. If set to NO +# only classes defined in header files are included. Does not have any effect +# for Java sources. +# The default value is: YES. + +EXTRACT_LOCAL_CLASSES = YES + +# This flag is only useful for Objective-C code. 
When set to YES local methods, +# which are defined in the implementation section but not in the interface are +# included in the documentation. If set to NO only methods in the interface are +# included. +# The default value is: NO. + +EXTRACT_LOCAL_METHODS = NO + +# If this flag is set to YES, the members of anonymous namespaces will be +# extracted and appear in the documentation as a namespace called +# 'anonymous_namespace{file}', where file will be replaced with the base name of +# the file that contains the anonymous namespace. By default anonymous namespaces +# are hidden. +# The default value is: NO. + +EXTRACT_ANON_NSPACES = NO + +# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all +# undocumented members inside documented classes or files. If set to NO these +# members will be included in the various overviews, but no documentation +# section is generated. This option has no effect if EXTRACT_ALL is enabled. +# The default value is: NO. + +HIDE_UNDOC_MEMBERS = NO + +# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all +# undocumented classes that are normally visible in the class hierarchy. If set +# to NO these classes will be included in the various overviews. This option has +# no effect if EXTRACT_ALL is enabled. +# The default value is: NO. + +HIDE_UNDOC_CLASSES = NO + +# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend +# (class|struct|union) declarations. If set to NO these declarations will be +# included in the documentation. +# The default value is: NO. + +HIDE_FRIEND_COMPOUNDS = NO + +# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any +# documentation blocks found inside the body of a function. If set to NO these +# blocks will be appended to the function's detailed documentation block. +# The default value is: NO. + +HIDE_IN_BODY_DOCS = NO + +# The INTERNAL_DOCS tag determines if documentation that is typed after a +# \internal command is included. 
If the tag is set to NO then the documentation +# will be excluded. Set it to YES to include the internal documentation. +# The default value is: NO. + +INTERNAL_DOCS = NO + +# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file +# names in lower-case letters. If set to YES upper-case letters are also +# allowed. This is useful if you have classes or files whose names only differ +# in case and if your file system supports case sensitive file names. Windows +# and Mac users are advised to set this option to NO. +# The default value is: system dependent. + +CASE_SENSE_NAMES = YES + +# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with +# their full class and namespace scopes in the documentation. If set to YES the +# scope will be hidden. +# The default value is: NO. + +HIDE_SCOPE_NAMES = NO + +# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of +# the files that are included by a file in the documentation of that file. +# The default value is: YES. + +SHOW_INCLUDE_FILES = YES + +# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each +# grouped member an include statement to the documentation, telling the reader +# which file to include in order to use the member. +# The default value is: NO. + +SHOW_GROUPED_MEMB_INC = NO + +# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include +# files with double quotes in the documentation rather than with sharp brackets. +# The default value is: NO. + +FORCE_LOCAL_INCLUDES = NO + +# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the +# documentation for inline members. +# The default value is: YES. + +INLINE_INFO = YES + +# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the +# (detailed) documentation of file and class members alphabetically by member +# name. If set to NO the members will appear in declaration order. +# The default value is: YES. 
+ +SORT_MEMBER_DOCS = YES + +# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief +# descriptions of file, namespace and class members alphabetically by member +# name. If set to NO the members will appear in declaration order. Note that +# this will also influence the order of the classes in the class list. +# The default value is: NO. + +SORT_BRIEF_DOCS = NO + +# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the +# (brief and detailed) documentation of class members so that constructors and +# destructors are listed first. If set to NO the constructors will appear in the +# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS. +# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief +# member documentation. +# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting +# detailed member documentation. +# The default value is: NO. + +SORT_MEMBERS_CTORS_1ST = NO + +# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy +# of group names into alphabetical order. If set to NO the group names will +# appear in their defined order. +# The default value is: NO. + +SORT_GROUP_NAMES = NO + +# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by +# fully-qualified names, including namespaces. If set to NO, the class list will +# be sorted only by class name, not including the namespace part. +# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. +# Note: This option applies only to the class list, not to the alphabetical +# list. +# The default value is: NO. 
+ +SORT_BY_SCOPE_NAME = NO + +# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper +# type resolution of all parameters of a function it will reject a match between +# the prototype and the implementation of a member function even if there is +# only one candidate or it is obvious which candidate to choose by doing a +# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still +# accept a match between prototype and implementation in such cases. +# The default value is: NO. + +STRICT_PROTO_MATCHING = NO + +# The GENERATE_TODOLIST tag can be used to enable ( YES) or disable ( NO) the +# todo list. This list is created by putting \todo commands in the +# documentation. +# The default value is: YES. + +GENERATE_TODOLIST = YES + +# The GENERATE_TESTLIST tag can be used to enable ( YES) or disable ( NO) the +# test list. This list is created by putting \test commands in the +# documentation. +# The default value is: YES. + +GENERATE_TESTLIST = YES + +# The GENERATE_BUGLIST tag can be used to enable ( YES) or disable ( NO) the bug +# list. This list is created by putting \bug commands in the documentation. +# The default value is: YES. + +GENERATE_BUGLIST = YES + +# The GENERATE_DEPRECATEDLIST tag can be used to enable ( YES) or disable ( NO) +# the deprecated list. This list is created by putting \deprecated commands in +# the documentation. +# The default value is: YES. + +GENERATE_DEPRECATEDLIST= YES + +# The ENABLED_SECTIONS tag can be used to enable conditional documentation +# sections, marked by \if ... \endif and \cond +# ... \endcond blocks. + +ENABLED_SECTIONS = + +# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the +# initial value of a variable or macro / define can have for it to appear in the +# documentation. If the initializer consists of more lines than specified here +# it will be hidden. Use a value of 0 to hide initializers completely. 
The +# appearance of the value of individual variables and macros / defines can be +# controlled using \showinitializer or \hideinitializer command in the +# documentation regardless of this setting. +# Minimum value: 0, maximum value: 10000, default value: 30. + +MAX_INITIALIZER_LINES = 30 + +# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at +# the bottom of the documentation of classes and structs. If set to YES the list +# will mention the files that were used to generate the documentation. +# The default value is: YES. + +SHOW_USED_FILES = YES + +# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This +# will remove the Files entry from the Quick Index and from the Folder Tree View +# (if specified). +# The default value is: YES. + +SHOW_FILES = YES + +# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces +# page. This will remove the Namespaces entry from the Quick Index and from the +# Folder Tree View (if specified). +# The default value is: YES. + +SHOW_NAMESPACES = YES + +# The FILE_VERSION_FILTER tag can be used to specify a program or script that +# doxygen should invoke to get the current version for each file (typically from +# the version control system). Doxygen will invoke the program by executing (via +# popen()) the command <command> <input-file>, where <command> is the value of the +# FILE_VERSION_FILTER tag, and <input-file> is the name of an input file provided +# by doxygen. Whatever the program writes to standard output is used as the file +# version. For an example see the documentation. + +FILE_VERSION_FILTER = + +# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed +# by doxygen. The layout file controls the global structure of the generated +# output files in an output format independent way. To create the layout file +# that represents doxygen's defaults, run doxygen with the -l option. 
You can +# optionally specify a file name after the option, if omitted DoxygenLayout.xml +# will be used as the name of the layout file. +# +# Note that if you run doxygen from a directory containing a file called +# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE +# tag is left empty. + +LAYOUT_FILE = + +# The CITE_BIB_FILES tag can be used to specify one or more bib files containing +# the reference definitions. This must be a list of .bib files. The .bib +# extension is automatically appended if omitted. This requires the bibtex tool +# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info. +# For LaTeX the style of the bibliography can be controlled using +# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the +# search path. See also \cite for info how to create references. + +CITE_BIB_FILES = + +#--------------------------------------------------------------------------- +# Configuration options related to warning and progress messages +#--------------------------------------------------------------------------- + +# The QUIET tag can be used to turn on/off the messages that are generated to +# standard output by doxygen. If QUIET is set to YES this implies that the +# messages are off. +# The default value is: NO. + +QUIET = NO + +# The WARNINGS tag can be used to turn on/off the warning messages that are +# generated to standard error ( stderr) by doxygen. If WARNINGS is set to YES +# this implies that the warnings are on. +# +# Tip: Turn warnings on while writing the documentation. +# The default value is: YES. + +WARNINGS = YES + +# If the WARN_IF_UNDOCUMENTED tag is set to YES, then doxygen will generate +# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag +# will automatically be disabled. +# The default value is: YES. 
+ +WARN_IF_UNDOCUMENTED = YES + +# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for +# potential errors in the documentation, such as not documenting some parameters +# in a documented function, or documenting parameters that don't exist or using +# markup commands wrongly. +# The default value is: YES. + +WARN_IF_DOC_ERROR = YES + +# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that +# are documented, but have no documentation for their parameters or return +# value. If set to NO doxygen will only warn about wrong or incomplete parameter +# documentation, but not about the absence of documentation. +# The default value is: NO. + +WARN_NO_PARAMDOC = NO + +# The WARN_FORMAT tag determines the format of the warning messages that doxygen +# can produce. The string should contain the $file, $line, and $text tags, which +# will be replaced by the file and line number from which the warning originated +# and the warning text. Optionally the format may contain $version, which will +# be replaced by the version of the file (if it could be obtained via +# FILE_VERSION_FILTER) +# The default value is: $file:$line: $text. + +WARN_FORMAT = "$file:$line: $text" + +# The WARN_LOGFILE tag can be used to specify a file to which warning and error +# messages should be written. If left blank the output is written to standard +# error (stderr). + +WARN_LOGFILE = + +#--------------------------------------------------------------------------- +# Configuration options related to the input files +#--------------------------------------------------------------------------- + +# The INPUT tag is used to specify the files and/or directories that contain +# documented source files. You may enter file names like myfile.cpp or +# directories like /usr/src/myproject. Separate the files or directories with +# spaces. +# Note: If this tag is empty the current directory is searched. 
+ +INPUT = ./tests + +# This tag can be used to specify the character encoding of the source files +# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses +# libiconv (or the iconv built into libc) for the transcoding. See the libiconv +# documentation (see: http://www.gnu.org/software/libiconv) for the list of +# possible encodings. +# The default value is: UTF-8. + +INPUT_ENCODING = UTF-8 + +# If the value of the INPUT tag contains directories, you can use the +# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and +# *.h) to filter out the source-files in the directories. If left blank the +# following patterns are tested:*.c, *.cc, *.cxx, *.cpp, *.c++, *.java, *.ii, +# *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, *.hh, *.hxx, *.hpp, +# *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, *.m, *.markdown, +# *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf, +# *.qsf, *.as and *.js. + +FILE_PATTERNS = *.py + +# The RECURSIVE tag can be used to specify whether or not subdirectories should +# be searched for input files as well. +# The default value is: NO. + +RECURSIVE = YES + +# The EXCLUDE tag can be used to specify files and/or directories that should be +# excluded from the INPUT source files. This way you can easily exclude a +# subdirectory from a directory tree whose root is specified with the INPUT tag. +# +# Note that relative paths are relative to the directory from which doxygen is +# run. + +EXCLUDE = + +# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or +# directories that are symbolic links (a Unix file system feature) are excluded +# from the input. +# The default value is: NO. + +EXCLUDE_SYMLINKS = NO + +# If the value of the INPUT tag contains directories, you can use the +# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude +# certain files from those directories. 
+ +# Note that the wildcards are matched against the file with absolute path, so to +# exclude all test directories for example use the pattern */test/* + +EXCLUDE_PATTERNS = + +# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names +# (namespaces, classes, functions, etc.) that should be excluded from the +# output. The symbol name can be a fully qualified name, a word, or if the +# wildcard * is used, a substring. Examples: ANamespace, AClass, +# AClass::ANamespace, ANamespace::*Test +# +# Note that the wildcards are matched against the file with absolute path, so to +# exclude all test directories use the pattern */test/* + +EXCLUDE_SYMBOLS = @Test + +# The EXAMPLE_PATH tag can be used to specify one or more files or directories +# that contain example code fragments that are included (see the \include +# command). + +EXAMPLE_PATH = + +# If the value of the EXAMPLE_PATH tag contains directories, you can use the +# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and +# *.h) to filter out the source-files in the directories. If left blank all +# files are included. + +EXAMPLE_PATTERNS = + +# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be +# searched for input files to be used with the \include or \dontinclude commands +# irrespective of the value of the RECURSIVE tag. +# The default value is: NO. + +EXAMPLE_RECURSIVE = NO + +# The IMAGE_PATH tag can be used to specify one or more files or directories +# that contain images that are to be included in the documentation (see the +# \image command). + +IMAGE_PATH = + +# The INPUT_FILTER tag can be used to specify a program that doxygen should +# invoke to filter for each input file. Doxygen will invoke the filter program +# by executing (via popen()) the command: +# +# <filter> <input-file> +# +# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the +# name of an input file. Doxygen will then use the output that the filter +# program writes to standard output. 
If FILTER_PATTERNS is specified, this tag +# will be ignored. +# +# Note that the filter must not add or remove lines; it is applied before the +# code is scanned, but not when the output code is generated. If lines are added +# or removed, the anchors will not be placed correctly. + +INPUT_FILTER = "python /usr/local/bin/doxypy.py" + +# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern +# basis. Doxygen will compare the file name with each pattern and apply the +# filter if there is a match. The filters are a list of the form: pattern=filter +# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how +# filters are used. If the FILTER_PATTERNS tag is empty or if none of the +# patterns match the file name, INPUT_FILTER is applied. + +FILTER_PATTERNS = + +# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using +# INPUT_FILTER ) will also be used to filter the input files that are used for +# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES). +# The default value is: NO. + +FILTER_SOURCE_FILES = YES + +# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file +# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and +# it is also possible to disable source filtering for a specific pattern using +# *.ext= (so without naming a filter). +# This tag requires that the tag FILTER_SOURCE_FILES is set to YES. + +FILTER_SOURCE_PATTERNS = + +# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that +# is part of the input, its contents will be placed on the main page +# (index.html). This can be useful if you have a project on for instance GitHub +# and want to reuse the introduction page also for the doxygen output. 
+ +USE_MDFILE_AS_MAINPAGE = + +#--------------------------------------------------------------------------- +# Configuration options related to source browsing +#--------------------------------------------------------------------------- + +# If the SOURCE_BROWSER tag is set to YES then a list of source files will be +# generated. Documented entities will be cross-referenced with these sources. +# +# Note: To get rid of all source code in the generated output, make sure that +# also VERBATIM_HEADERS is set to NO. +# The default value is: NO. + +SOURCE_BROWSER = NO + +# Setting the INLINE_SOURCES tag to YES will include the body of functions, +# classes and enums directly into the documentation. +# The default value is: NO. + +INLINE_SOURCES = NO + +# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any +# special comment blocks from generated source code fragments. Normal C, C++ and +# Fortran comments will always remain visible. +# The default value is: YES. + +STRIP_CODE_COMMENTS = YES + +# If the REFERENCED_BY_RELATION tag is set to YES then for each documented +# function all documented functions referencing it will be listed. +# The default value is: NO. + +REFERENCED_BY_RELATION = NO + +# If the REFERENCES_RELATION tag is set to YES then for each documented function +# all documented entities called/used by that function will be listed. +# The default value is: NO. + +REFERENCES_RELATION = NO + +# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set +# to YES, then the hyperlinks from functions in REFERENCES_RELATION and +# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will +# link to the documentation. +# The default value is: YES. 
+
+REFERENCES_LINK_SOURCE = YES

+# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
+# source code will show a tooltip with additional information such as prototype,
+# brief description and links to the definition and documentation. Since this
+# will make the HTML file larger and loading of large files a bit slower, you
+# can opt to disable this feature.
+# The default value is: YES.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.

+SOURCE_TOOLTIPS = YES

+# If the USE_HTAGS tag is set to YES then the references to source code will
+# point to the HTML generated by the htags(1) tool instead of doxygen built-in
+# source browser. The htags tool is part of GNU's global source tagging system
+# (see http://www.gnu.org/software/global/global.html). You will need version
+# 4.8.6 or higher.
+#
+# To use it do the following:
+# - Install the latest version of global
+# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
+# - Make sure the INPUT points to the root of the source tree
+# - Run doxygen as normal
+#
+# Doxygen will invoke htags (and that will in turn invoke gtags), so these
+# tools must be available from the command line (i.e. in the search path).
+#
+# The result: instead of the source browser generated by doxygen, the links to
+# source code will now point to the output of htags.
+# The default value is: NO.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.

+USE_HTAGS = NO

+# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
+# verbatim copy of the header file for each class for which an include is
+# specified. Set to NO to disable this.
+# See also: Section \class.
+# The default value is: YES.
+ +VERBATIM_HEADERS = YES + +#--------------------------------------------------------------------------- +# Configuration options related to the alphabetical class index +#--------------------------------------------------------------------------- + +# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all +# compounds will be generated. Enable this if the project contains a lot of +# classes, structs, unions or interfaces. +# The default value is: YES. + +ALPHABETICAL_INDEX = YES + +# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in +# which the alphabetical index list will be split. +# Minimum value: 1, maximum value: 20, default value: 5. +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES. + +COLS_IN_ALPHA_INDEX = 5 + +# In case all classes in a project start with a common prefix, all classes will +# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag +# can be used to specify a prefix (or a list of prefixes) that should be ignored +# while generating the index headers. +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES. + +IGNORE_PREFIX = + +#--------------------------------------------------------------------------- +# Configuration options related to the HTML output +#--------------------------------------------------------------------------- + +# If the GENERATE_HTML tag is set to YES doxygen will generate HTML output +# The default value is: YES. + +GENERATE_HTML = YES + +# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of +# it. +# The default directory is: html. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_OUTPUT = html + +# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each +# generated HTML page (for example: .htm, .php, .asp). +# The default value is: .html. 
+# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_FILE_EXTENSION = .html + +# The HTML_HEADER tag can be used to specify a user-defined HTML header file for +# each generated HTML page. If the tag is left blank doxygen will generate a +# standard header. +# +# To get valid HTML the header file that includes any scripts and style sheets +# that doxygen needs, which is dependent on the configuration options used (e.g. +# the setting GENERATE_TREEVIEW). It is highly recommended to start with a +# default header using +# doxygen -w html new_header.html new_footer.html new_stylesheet.css +# YourConfigFile +# and then modify the file new_header.html. See also section "Doxygen usage" +# for information on how to generate the default header that doxygen normally +# uses. +# Note: The header is subject to change so you typically have to regenerate the +# default header when upgrading to a newer version of doxygen. For a description +# of the possible markers and block names see the documentation. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_HEADER = + +# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each +# generated HTML page. If the tag is left blank doxygen will generate a standard +# footer. See HTML_HEADER for more information on how to generate a default +# footer and what special commands can be used inside the footer. See also +# section "Doxygen usage" for information on how to generate the default footer +# that doxygen normally uses. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_FOOTER = + +# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style +# sheet that is used by each HTML page. It can be used to fine-tune the look of +# the HTML output. If left blank doxygen will generate a default style sheet. +# See also section "Doxygen usage" for information on how to generate the style +# sheet that doxygen normally uses. 
+# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
+# it is more robust and this tag (HTML_STYLESHEET) will in the future become
+# obsolete.
+# This tag requires that the tag GENERATE_HTML is set to YES.

+HTML_STYLESHEET =

+# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
+# cascading style sheets that are included after the standard style sheets
+# created by doxygen. Using this option one can overrule certain style aspects.
+# This is preferred over using HTML_STYLESHEET since it does not replace the
+# standard style sheet and is therefore more robust against future updates.
+# Doxygen will copy the style sheet files to the output directory.
+# Note: The order of the extra stylesheet files is of importance (e.g. the last
+# stylesheet in the list overrules the setting of the previous ones in the
+# list). For an example see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.

+HTML_EXTRA_STYLESHEET =

+# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the HTML output directory. Note
+# that these files will be copied to the base HTML output directory. Use the
+# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
+# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
+# files will be copied as-is; there are no commands or markers available.
+# This tag requires that the tag GENERATE_HTML is set to YES.

+HTML_EXTRA_FILES =

+# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
+# will adjust the colors in the stylesheet and background images according to
+# this color. Hue is specified as an angle on a colorwheel, see
+# http://en.wikipedia.org/wiki/Hue for more information. For instance the value
+# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
+# purple, and 360 is red again.
+# Minimum value: 0, maximum value: 359, default value: 220. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_HUE = 220 + +# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors +# in the HTML output. For a value of 0 the output will use grayscales only. A +# value of 255 will produce the most vivid colors. +# Minimum value: 0, maximum value: 255, default value: 100. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_SAT = 100 + +# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the +# luminance component of the colors in the HTML output. Values below 100 +# gradually make the output lighter, whereas values above 100 make the output +# darker. The value divided by 100 is the actual gamma applied, so 80 represents +# a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100 does not +# change the gamma. +# Minimum value: 40, maximum value: 240, default value: 80. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_GAMMA = 80 + +# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML +# page will contain the date and time when the page was generated. Setting this +# to NO can help when comparing the output of multiple runs. +# The default value is: YES. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_TIMESTAMP = YES + +# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML +# documentation will contain sections that can be hidden and shown after the +# page has loaded. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_DYNAMIC_SECTIONS = NO + +# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries +# shown in the various tree structured indices initially; the user can expand +# and collapse entries dynamically later on. 
Doxygen will expand the tree to +# such a level that at most the specified number of entries are visible (unless +# a fully collapsed tree already exceeds this amount). So setting the number of +# entries 1 will produce a full collapsed tree by default. 0 is a special value +# representing an infinite number of entries and will result in a full expanded +# tree by default. +# Minimum value: 0, maximum value: 9999, default value: 100. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_INDEX_NUM_ENTRIES = 100 + +# If the GENERATE_DOCSET tag is set to YES, additional index files will be +# generated that can be used as input for Apple's Xcode 3 integrated development +# environment (see: http://developer.apple.com/tools/xcode/), introduced with +# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a +# Makefile in the HTML output directory. Running make will produce the docset in +# that directory and running make install will install the docset in +# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at +# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html +# for more information. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_DOCSET = NO + +# This tag determines the name of the docset feed. A documentation feed provides +# an umbrella under which multiple documentation sets from a single provider +# (such as a company or product suite) can be grouped. +# The default value is: Doxygen generated docs. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_FEEDNAME = "Doxygen generated docs" + +# This tag specifies a string that should uniquely identify the documentation +# set bundle. This should be a reverse domain-name style string, e.g. +# com.mycompany.MyDocSet. Doxygen will append .docset to the name. +# The default value is: org.doxygen.Project. 
+# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_BUNDLE_ID = org.doxygen.Project + +# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify +# the documentation publisher. This should be a reverse domain-name style +# string, e.g. com.mycompany.MyDocSet.documentation. +# The default value is: org.doxygen.Publisher. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_PUBLISHER_ID = org.doxygen.Publisher + +# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher. +# The default value is: Publisher. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_PUBLISHER_NAME = Publisher + +# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three +# additional HTML index files: index.hhp, index.hhc, and index.hhk. The +# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop +# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on +# Windows. +# +# The HTML Help Workshop contains a compiler that can convert all HTML output +# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML +# files are now used as the Windows 98 help format, and will replace the old +# Windows help format (.hlp) on all Windows platforms in the future. Compressed +# HTML files also contain an index, a table of contents, and you can search for +# words in the documentation. The HTML workshop also contains a viewer for +# compressed HTML files. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_HTMLHELP = NO + +# The CHM_FILE tag can be used to specify the file name of the resulting .chm +# file. You can add a path in front of the file if the result should not be +# written to the html output directory. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. 
+ +CHM_FILE = + +# The HHC_LOCATION tag can be used to specify the location (absolute path +# including file name) of the HTML help compiler ( hhc.exe). If non-empty +# doxygen will try to run the HTML help compiler on the generated index.hhp. +# The file has to be specified with full path. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +HHC_LOCATION = + +# The GENERATE_CHI flag controls if a separate .chi index file is generated ( +# YES) or that it should be included in the master .chm file ( NO). +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +GENERATE_CHI = NO + +# The CHM_INDEX_ENCODING is used to encode HtmlHelp index ( hhk), content ( hhc) +# and project file content. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +CHM_INDEX_ENCODING = + +# The BINARY_TOC flag controls whether a binary table of contents is generated ( +# YES) or a normal table of contents ( NO) in the .chm file. Furthermore it +# enables the Previous and Next buttons. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +BINARY_TOC = NO + +# The TOC_EXPAND flag can be set to YES to add extra items for group members to +# the table of contents of the HTML help documentation and to the tree view. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +TOC_EXPAND = NO + +# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and +# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that +# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help +# (.qch) of the generated HTML documentation. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_QHP = NO + +# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify +# the file name of the resulting .qch file. 
The path specified is relative to +# the HTML output folder. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QCH_FILE = + +# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help +# Project output. For more information please see Qt Help Project / Namespace +# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace). +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_NAMESPACE = org.doxygen.Project + +# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt +# Help Project output. For more information please see Qt Help Project / Virtual +# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual- +# folders). +# The default value is: doc. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_VIRTUAL_FOLDER = doc + +# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom +# filter to add. For more information please see Qt Help Project / Custom +# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom- +# filters). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_CUST_FILTER_NAME = + +# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the +# custom filter to add. For more information please see Qt Help Project / Custom +# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom- +# filters). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_CUST_FILTER_ATTRS = + +# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this +# project's filter section matches. Qt Help Project / Filter Attributes (see: +# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_SECT_FILTER_ATTRS = + +# The QHG_LOCATION tag can be used to specify the location of Qt's +# qhelpgenerator. 
If non-empty doxygen will try to run qhelpgenerator on the +# generated .qhp file. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHG_LOCATION = + +# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be +# generated, together with the HTML files, they form an Eclipse help plugin. To +# install this plugin and make it available under the help contents menu in +# Eclipse, the contents of the directory containing the HTML and XML files needs +# to be copied into the plugins directory of eclipse. The name of the directory +# within the plugins directory should be the same as the ECLIPSE_DOC_ID value. +# After copying Eclipse needs to be restarted before the help appears. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_ECLIPSEHELP = NO + +# A unique identifier for the Eclipse help plugin. When installing the plugin +# the directory name containing the HTML and XML files should also have this +# name. Each documentation set should have its own identifier. +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES. + +ECLIPSE_DOC_ID = org.doxygen.Project + +# If you want full control over the layout of the generated HTML pages it might +# be necessary to disable the index and replace it with your own. The +# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top +# of each HTML page. A value of NO enables the index and the value YES disables +# it. Since the tabs in the index contain the same information as the navigation +# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +DISABLE_INDEX = NO + +# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index +# structure should be generated to display hierarchical information. 
If the tag +# value is set to YES, a side panel will be generated containing a tree-like +# index structure (just like the one that is generated for HTML Help). For this +# to work a browser that supports JavaScript, DHTML, CSS and frames is required +# (i.e. any modern browser). Windows users are probably better off using the +# HTML help feature. Via custom stylesheets (see HTML_EXTRA_STYLESHEET) one can +# further fine-tune the look of the index. As an example, the default style +# sheet generated by doxygen has an example that shows how to put an image at +# the root of the tree instead of the PROJECT_NAME. Since the tree basically has +# the same information as the tab index, you could consider setting +# DISABLE_INDEX to YES when enabling this option. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_TREEVIEW = YES + +# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that +# doxygen will group on one line in the generated HTML documentation. +# +# Note that a value of 0 will completely suppress the enum values from appearing +# in the overview section. +# Minimum value: 0, maximum value: 20, default value: 4. +# This tag requires that the tag GENERATE_HTML is set to YES. + +ENUM_VALUES_PER_LINE = 4 + +# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used +# to set the initial width (in pixels) of the frame in which the tree is shown. +# Minimum value: 0, maximum value: 1500, default value: 250. +# This tag requires that the tag GENERATE_HTML is set to YES. + +TREEVIEW_WIDTH = 250 + +# When the EXT_LINKS_IN_WINDOW option is set to YES doxygen will open links to +# external symbols imported via tag files in a separate window. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +EXT_LINKS_IN_WINDOW = NO + +# Use this tag to change the font size of LaTeX formulas included as images in +# the HTML documentation. 
When you change the font size after a successful
+# doxygen run you need to manually remove any form_*.png images from the HTML
+# output directory to force them to be regenerated.
+# Minimum value: 8, maximum value: 50, default value: 10.
+# This tag requires that the tag GENERATE_HTML is set to YES.

+FORMULA_FONTSIZE = 10

+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
+# generated for formulas are transparent PNGs. Transparent PNGs are not
+# supported properly for IE 6.0, but are supported on all modern browsers.
+#
+# Note that when changing this option you need to delete any form_*.png files in
+# the HTML output directory before the changes have effect.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.

+FORMULA_TRANSPARENT = YES

+# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
+# http://www.mathjax.org) which uses client side Javascript for the rendering
+# instead of using prerendered bitmaps. Use this if you do not have LaTeX
+# installed or if you want formulas to look prettier in the HTML output. When
+# enabled you may also need to install MathJax separately and configure the path
+# to it using the MATHJAX_RELPATH option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.

+USE_MATHJAX = NO

+# When MathJax is enabled you can set the default output format to be used for
+# the MathJax output. See the MathJax site (see:
+# http://docs.mathjax.org/en/latest/output.html) for more details.
+# Possible values are: HTML-CSS (which is slower, but has the best
+# compatibility), NativeMML (i.e. MathML) and SVG.
+# The default value is: HTML-CSS.
+# This tag requires that the tag USE_MATHJAX is set to YES.

+MATHJAX_FORMAT = HTML-CSS

+# When MathJax is enabled you need to specify the location relative to the HTML
+# output directory using the MATHJAX_RELPATH option.
The destination directory
+# should contain the MathJax.js script. For instance, if the mathjax directory
+# is located at the same level as the HTML output directory, then
+# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
+# Content Delivery Network so you can quickly see the result without installing
+# MathJax. However, it is strongly recommended to install a local copy of
+# MathJax from http://www.mathjax.org before deployment.
+# The default value is: http://cdn.mathjax.org/mathjax/latest.
+# This tag requires that the tag USE_MATHJAX is set to YES.

+MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest

+# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
+# extension names that should be enabled during MathJax rendering. For example
+# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
+# This tag requires that the tag USE_MATHJAX is set to YES.

+MATHJAX_EXTENSIONS =

+# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
+# of code that will be used on startup of the MathJax code. See the MathJax site
+# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
+# example see the documentation.
+# This tag requires that the tag USE_MATHJAX is set to YES.

+MATHJAX_CODEFILE =

+# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
+# the HTML output. The underlying search engine uses javascript and DHTML and
+# should work on any modern browser. Note that when using HTML help
+# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
+# there is already a search function so this one should typically be disabled.
+# For large projects the javascript based search engine can be slow, then
+# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
+# search using the keyboard; to jump to the search box use <access key> + S
+# (what the <access key> is depends on the OS and browser, but it is typically
+# <CTRL>, <ALT>/