All Implemented Interfaces:
PersistentSet
All Known Implementing Classes:
XATransactionController, TransactionManager, NoOpTransaction, RAMTransaction
Each transaction controller is associated with a transaction context, which provides error cleanup when standard exceptions are thrown anywhere in the system. The transaction context performs error cleanup in response to cleanupOnError.
Field Summary

Modifier and Type | Field | Description
---|---|---
static final int | MODE_RECORD | Constant used for the lock_level argument to openConglomerate() and openScan() calls. Pass in MODE_RECORD if you want the conglomerate to be opened with record level locking (but the system may override this choice and provide table level locking instead). |
static final int | MODE_TABLE | Constant used for the lock_level argument to openConglomerate() and openScan() calls. Pass in MODE_TABLE if you want the conglomerate to be opened with table level locking; if this mode is passed in, the system will never use record level locking for the open scan or controller. |
static final int | ISOLATION_NOLOCK | No locks are requested for data that is read only. Uncommitted data may be returned. Writes are only visible prior to commit. Exclusive transaction length locks are set on data that is written; no lock is set on data that is read. No table level intent lock is held, so it is up to the caller to ensure that the table is not dropped while being accessed (RESOLVE - this issue may need to be resolved differently if we can't figure out a non-lock-based way to prevent ddl during read uncommitted access). ONLY USED INTERNALLY BY ACCESS, NOT VALID FOR EXTERNAL USERS. |
static final int | ISOLATION_READ_UNCOMMITTED | No locks are requested for data that is read only. Uncommitted data may be returned. Writes are only visible prior to commit. Exclusive transaction length locks are set on data that is written; no lock is set on data that is read. No table level intent lock is held, so it is up to the caller to ensure that the table is not dropped while being accessed (RESOLVE - this issue may need to be resolved differently if we can't figure out a non-lock-based way to prevent ddl during read uncommitted access). Note that this is currently only supported in heap scans. TODO - work in progress to support this locking mode in the 5.1 storage system. |
static final int | ISOLATION_READ_COMMITTED | No lost updates, no dirty reads; only committed data is returned. Writes are only visible when committed. Exclusive transaction length locks are set on data that is written; short term locks (possibly instantaneous duration locks) are set on data that is read. |
static final int | ISOLATION_READ_COMMITTED_NOHOLDLOCK | No lost updates, no dirty reads; only committed data is returned. Writes are only visible when committed. Exclusive transaction length locks are set on data that is written; short term locks (possibly instantaneous duration locks) are set on data that is read. Read locks are requested for "zero" duration, so upon return from access no read row lock is held. |
static final int | ISOLATION_REPEATABLE_READ | Read and write locks are held until end of transaction, but no phantom protection is performed (i.e. no previous-key locking). Writes are only visible when committed. Note this constant is currently mapped to ISOLATION_SERIALIZABLE. The constant is provided so that code which only requires repeatable read can be coded with the right isolation level, and will just work when the store provides real repeatable-read isolation. |
static final int | ISOLATION_SERIALIZABLE | Gray's isolation degree 3, "Serializable, Repeatable Read". Note that some conglomerate implementations may only be able to provide phantom protection under MODE_TABLE, while others can support this under MODE_RECORD. |
static final int | OPENMODE_USE_UPDATE_LOCKS | Use this mode in the openScan() call to indicate that the scan should get update locks during the scan, and either promote the update locks to exclusive locks if the row is changed, or demote the lock if the row is not updated. The lock demotion depends on the isolation level of the scan. If the isolation level is ISOLATION_SERIALIZABLE or ISOLATION_REPEATABLE_READ, then the lock will be converted to a read lock. If the isolation level is ISOLATION_READ_COMMITTED, then the lock is released when the scan moves off the row. Note that one must still set OPENMODE_FORUPDATE to be able to change rows in the scan. So to enable update locks for an updating scan one provides (OPENMODE_FORUPDATE | OPENMODE_USE_UPDATE_LOCKS); see the sketch after this table. |
static final int | OPENMODE_SECONDARY_LOCKED | Use this mode in the openConglomerate() call which opens the base table to be used in an index-to-base-row probe. This will cause the openConglomerate() call to not get any row locks as part of its fetches. It is important when using this mode that the secondary index table be successfully opened before opening the base table so that the proper locking protocol is followed. |
static final int | OPENMODE_BASEROW_INSERT_LOCKED | Use this mode in the openConglomerate() call used to open the secondary indices of a table for inserting new rows in the table. This will let the secondary index know that the base row being inserted has already been locked and only previous key locks need be obtained. It is important when using this mode that the base table be successfully opened before opening the secondary index so that the proper locking protocol is followed. |
static final int | OPENMODE_FORUPDATE | Open the table for update; if not specified, the table will be opened for read. |
static final int | OPENMODE_FOR_LOCK_ONLY | Use this mode in the openConglomerate() call used to just get the table lock on the conglomerate without actually doing anything else. Any operations other than close() performed on the "opened" container will fail. |
static final int | OPENMODE_LOCK_NOWAIT | The table lock request will not wait. The request to get the table lock (any table lock, including intent or "real" table level locks) will not wait if it can't be granted; a lock timeout will be returned. Note that subsequent row locks will wait if the application has not set a 0 timeout and if the call does not have a wait parameter (like OpenConglomerate.fetch()). |
public static final int | OPEN_CONGLOMERATE | Constants used for the countOpen() call. |
public static final int | OPEN_SCAN | |
public static final int | OPEN_CREATED_SORTS | |
public static final int | OPEN_SORT | |
public static final int | OPEN_TOTAL | |
static final byte | IS_DEFAULT | |
static final byte | IS_TEMPORARY | |
static final byte | IS_KEPT | |
public final int | RELEASE_LOCKS | |
public final int | KEEP_LOCKS | |
public final int | READONLY_TRANSACTION_INITIALIZATION |
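As referenced in the OPENMODE_USE_UPDATE_LOCKS entry above, a minimal sketch of how the open-mode flags combine. Only constants documented in this table are used; the resulting int mask is what gets passed as the open_mode argument to openConglomerate() or openScan():

```java
// Updating scan that acquires update locks and promotes/demotes them:
// both flags must be set, as documented above.
int update_scan_mode = TransactionController.OPENMODE_FORUPDATE
                     | TransactionController.OPENMODE_USE_UPDATE_LOCKS;

// Table-lock-only open that must not wait for the table lock:
int lock_only_mode = TransactionController.OPENMODE_FOR_LOCK_ONLY
                   | TransactionController.OPENMODE_LOCK_NOWAIT;
```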
Method from org.apache.derby.iapi.store.access.TransactionController Detail: |
---|
Bits in the commitflag can be turned on to fine-tune the "commit": KEEP_LOCKS - no locks will be released by the commit and no post-commit processing will be initiated. If, for some reason, the locks cannot be kept even if this flag is set, then the commit will sync the log, i.e. it will revert to the normal commit. READONLY_TRANSACTION_INITIALIZATION - special case used for processing while creating the transaction. Should only be used by the system, while creating the transaction, to commit read-only work that may have been done using the transaction while getting it set up to be used by the user. In the future we should instead use a separate transaction to do this initialization. Will fail if called on a transaction which has done any updates. |
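A minimal sketch of the two commit styles, assuming tc is an open TransactionController; the flag constant is the one summarized in the field table above:

```java
// Lazy commit: keep locks and skip the log sync. May revert to a
// normal (synced) commit if the locks cannot be kept, as described above.
tc.commitNoSync(TransactionController.KEEP_LOCKS);

// ... more work under the retained locks ...

// Normal commit: syncs the log and releases locks.
tc.commit();
```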
Returns free space from the conglomerate back to the OS. Currently only the sequential free pages at the "end" of the conglomerate can be returned to the OS. |
There are 4 types of open "conglomerates" that can be tracked, those opened by each of the following: openConglomerate(), openScan(), createSort(), and openSort(). Scans opened by openSortScan() are tracked the same as those opened by openScan(). This routine can be used either to report on the number of all opens, or to track one particular type of open. This routine is expected to be used for debugging only. An implementation may only track this info under SanityManager.DEBUG mode. If the implementation does not track the info it will return -1 (so code using this call to verify that no congloms are open should check for return <= 0 rather than == 0). The return value depends on the "which_to_count" parameter, which should be one of the OPEN_* constants listed in the field summary above. |
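A hedged sketch of the debugging use described above, combined with debugOpened() (documented further down). Both calls are only meaningful under SanityManager.DEBUG, and tc is assumed to be in scope:

```java
// Verify no conglomerates/scans/sorts leaked. Note the > 0 test rather
// than != 0: implementations that don't track opens return -1.
if (SanityManager.DEBUG) {
    if (tc.countOpen(TransactionController.OPEN_TOTAL) > 0) {
        SanityManager.THROWASSERT(
            "open handles remain:\n" + tc.debugOpened());
    }
}
```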
Individual rows that are loaded into the conglomerate are not logged. After this operation, the underlying database must be backed up with a database backup rather than a transaction log backup (when we have them). This warning is put here for the benefit of future generations. This function behaves the same as createConglomerate() (see above) except that it also populates the conglomerate with rows from the row source, and the rows that are inserted are not logged. |
All parameters shared between openScan() and this routine are interpreted exactly the same. Logically this routine calls openScan() with the passed-in set of parameters, places all returned rows into a newly created HashSet, and returns; actual implementations will likely perform better than literally calling openScan() and doing this. For documentation of the openScan parameters see openScan(). |
Currently, only "heap"'s and ""btree secondary index"'s are supported, and all the features are not completely implemented. For now, create conglomerates like this:
Each implementation of a conglomerate takes a possibly different set of properties. The "heap" implementation currently takes no properties. The "btree secondary index" requires the following set of properties:TransactionController tc; long conglomId = tc.createConglomerate( "heap", // we're requesting a heap conglomerate template, // a populated template is required for heap and btree. null, // no column order null, // default collation order for all columns null, // default properties 0); // not temporary |
Sorts also do aggregation. The input (unaggregated) rows have the same format as the aggregated rows, and the aggregate results are part of both rows. The sorter, when it notices that a row is a duplicate of another, calls a user-supplied aggregation method (see interface Aggregator), passing it both rows. One row is known as the 'addend' and the other the 'accumulator'. The aggregation method is assumed to merge the addend into the accumulator. The sort then discards the addend row. So, for the query

select a, sum(b) from t group by a

the input row to the sorter would have one column for a and another column for sum(b). It is up to the caller to get the format of the row correct, and to initialize the aggregate values correctly (null for most aggregates, 0 for count). Nulls are always considered to be ordered in a sort; that is, null compares equal to null, and less than anything else. |
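To make the accumulator/addend contract concrete, here is a hypothetical merge method for the query above. The method name and row layout are illustrative only; the actual Aggregator interface is not reproduced on this page:

```java
// Hypothetical aggregation method for "select a, sum(b) from t group by a".
// Column 0 holds a (the grouping key); column 1 holds the running sum(b).
// The sorter passes both rows; the addend is merged into the accumulator,
// after which the sorter discards the addend row.
void aggregate(DataValueDescriptor[] accumulator, DataValueDescriptor[] addend)
        throws StandardException {
    NumberDataValue sum = (NumberDataValue) accumulator[1];
    // result = sum + addend[1], stored back into the accumulator column.
    sum.plus(sum, (NumberDataValue) addend[1], sum);
}
```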
Get a transaction controller with which to manipulate data within the access manager. This controller allows one to manipulate a global XA-conforming transaction. Must only be called if a previous local transaction was created and exists in the context. Can only be called if the current transaction is in the idle state. Upon return from this call the old tc will be unusable, and all references to it should be dropped (it will have been implicitly destroy()'d by this call). The (format_id, global_id, branch_id) triplet is meant to come exactly from a javax.transaction.xa.Xid. We don't use Xid so that the system can be delivered on a non-1.2 vm system and not require the javax classes in the path. |
Return a string with debugging information about current opened congloms/scans/sorts which have not been close()'d. Calls to this routine are only valid under code which is conditional on SanityManager.DEBUG. |
Returns a GroupFetchScanController which can be used to move rows around in a table, creating a block of free pages at the end of the table. The process will move rows from the end of the table toward the beginning. The GroupFetchScanController will return the old row location, the new row location, and the actual data of any row moved. Note that this scan only returns moved rows, not an entire set of rows; the scan is designed specifically to be used by either an explicit user call of the SYSCS_ONLINE_COMPRESS_TABLE() procedure, or internal background calls to compress the table. The old and new row locations are returned so that the caller can update any indexes as necessary. This scan always returns all columns of the row. All inputs work exactly as in openScan(). The return is a GroupFetchScanController, which only allows fetches of groups of rows from the conglomerate. |
Drop a sort created by a call to createSort() within the current transaction (sorts are automatically "dropped" at the end of a transaction). This call should only be made after all openSortScan()'s and openSort()'s have been closed. |
Returns true and fetches the rightmost non-null row of an ordered conglomerate into "fetchRow" if there is at least one non-null row in the conglomerate. If there are no non-null rows in the conglomerate it returns false. Any row with a null in its first column is considered a "null" row. Non-ordered conglomerates will not implement this interface; calls will generate a StandardException. RESOLVE - this interface is temporary; long term, equivalent (and more) functionality will be provided by the openBackwardScan() interface. ISOLATION_SERIALIZABLE and MODE_RECORD locking for btree max: the "BTREE" implementation will at the very least get a shared row lock on the max key row and the key previous to the max. This will be the case where the max row exists in the rightmost page of the btree. These locks won't be released. If the row does not exist in the last page of the btree then a scan of the entire btree will be performed; locks acquired in this scan will not be released. Note that under ISOLATION_READ_COMMITTED, all locks on the table are released before returning from this call. |
Will have to change if we ever have more than one container in a conglomerate. |
The dynamic info is a set of variables to be used in a given ScanController or ConglomerateController. It can only be used in one controller at a time. It is up to the caller to ensure the correct thread access to this info. The type of info in this is a scratch template for btree traversal, other scratch variables for qualifier evaluation, etc. |
getOwner() on that object, guarantees that the lock will be removed on a commit or an abort. |
The static info would be valid until any ddl was executed on the conglomid, and it would be up to the caller to throw it away when that happened. This ties in with what language already does for other invalidation of static info. The type of info in this would be the containerid and an array of format id's from which templates can be created. The info in this object is read only and can be shared among as many threads as necessary. |
This transaction "name" will be the same id which is returned in the TransactionInfo information, used by the lock and transaction vti's to identify transactions. Although implementation specific, the transaction id is usually a number which is bumped every time a commit or abort is issued. |
A superset of properties that "users" (i.e. from SQL) can specify. Store may implement other properties which should not be specified by users. Layers above access may implement properties which are not known at all to Access. This list is a superset, as some properties may not be implemented by certain types of conglomerates. For instance, an in-memory store may not implement a pageSize property, or some conglomerates may not support pre-allocation. This interface is meant to be used by the SQL parser to do validation of properties passed to the create table statement, and also by the various user interfaces which present table information back to the user. Currently this routine returns the following list: derby.storage.initialPages, derby.storage.minimumRecordSize, derby.storage.pageReservedSpace, derby.storage.pageSize. |
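A small sketch of the intended flow, assuming tc is in scope. The property names are exactly the four listed above; the values are illustrative examples, not defaults:

```java
import java.util.Properties;

// Obtain the superset of user-specifiable properties, then supply
// overrides for the ones this conglomerate type supports.
Properties createProps = tc.getUserCreateConglomPropList();
createProps.setProperty("derby.storage.pageSize", "4096");
createProps.setProperty("derby.storage.minimumRecordSize", "1");
```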
This simply passes the operation to the RawStore which logs and does it. |
Same as openConglomerate(), except that one can optionally provide "compiled" static_info and/or dynamic_info. This compiled information must have been obtained from getDynamicCompiledConglomInfo() and/or getStaticCompiledConglomInfo() calls on the same conglomid being opened. It is up to the caller to ensure that the "compiled" information is still valid and is appropriately protected for multi-threaded access. |
Same as openScan(), except that one can optionally provide "compiled" static_info and/or dynamic_info. This compiled information must have been obtained from getDynamicCompiledConglomInfo() and/or getStaticCompiledConglomInfo() calls on the same conglomid being opened. It is up to the caller to ensure that the "compiled" information is still valid and is appropriately protected for multi-threaded access. |
The lock level indicates the minimum lock level to get locks at; the underlying conglomerate implementation may actually lock at a higher level (i.e. the caller may request MODE_RECORD, but the table may be locked at MODE_TABLE instead). The close method is on the ConglomerateController interface. |
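A hedged sketch of a plain read-only open at the requested minimum lock level, closed through the ConglomerateController interface as noted above. The five-argument openConglomerate() form is an assumption, and tc and conglomId are assumed to be in scope:

```java
// Open for read (open_mode 0) requesting row-level locking; the system
// may still escalate to MODE_TABLE, as described above.
ConglomerateController cc = tc.openConglomerate(
        conglomId,
        false,                              // hold: close at commit
        0,                                  // open_mode: read only
        TransactionController.MODE_RECORD,
        TransactionController.ISOLATION_READ_COMMITTED);
try {
    // ... fetch rows by RowLocation ...
} finally {
    cc.close();
}
```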
All inputs work exactly as in openScan(). The return is a GroupFetchScanController, which only allows fetches of groups of rows from the conglomerate. |
The way that starting and stopping keys and operators are used may best be described by example. Say there's an ordered conglomerate with two columns, where the 0-th column is named 'x' and the 1st column is named 'y'. The values of the columns are as follows:

```
x: 1 3 4 4 4 5 5 5 6 7 9
y: 1 1 2 4 6 2 4 6 1 1 1
```

A {start key, search op} pair of {{5,2}, GE} would position on {x=5, y=2}, whereas the pair {{5}, GT} would position on {x=6, y=1}. Partial keys are used to implement partial key scans in SQL. For example, the SQL "select * from t where x = 5" would open a scan on the conglomerate (or a useful index) of t using a starting position partial key of {{5}, GE} and a stopping position partial key of {{5}, GT}. Some more examples:

```
+-------------------+------------+-----------+--------------+--------------+
| predicate         | start key  | stop key  | rows         | rows locked  |
|                   | value | op | value |op | returned     |serialization |
+-------------------+-------+----+-------+---+--------------+--------------+
| x = 5             | {5}   | GE | {5}   |GT |{5,2} .. {5,6}|{4,6} .. {5,6}|
| x > 5             | {5}   | GT | null  |   |{6,1} .. {9,1}|{5,6} .. {9,1}|
| x >= 5            | {5}   | GE | null  |   |{5,2} .. {9,1}|{4,6} .. {9,1}|
| x <= 5            | null  |    | {5}   |GT |{1,1} .. {5,6}|first .. {5,6}|
| x < 5             | null  |    | {5}   |GE |{1,1} .. {4,6}|first .. {4,6}|
| x >= 5 and x <= 7 | {5}   | GE | {7}   |GT |{5,2} .. {7,1}|{4,6} .. {7,1}|
| x = 5 and y > 2   | {5,2} | GT | {5}   |GT |{5,4} .. {5,6}|{5,2} .. {5,6}|
| x = 5 and y >= 2  | {5,2} | GE | {5}   |GT |{5,2} .. {5,6}|{4,6} .. {5,6}|
| x = 5 and y < 5   | {5}   | GE | {5,5} |GE |{5,2} .. {5,4}|{4,6} .. {5,4}|
| x = 2             | {2}   | GE | {2}   |GT | none         |{1,1} .. {1,1}|
+-------------------+-------+----+-------+---+--------------+--------------+
```

As the above table implies, the underlying scan may lock more rows than it returns in order to guarantee serialization. For each row which meets the start and stop position, as described above, the row is "qualified" to see whether it should be returned. The qualification is a 2-dimensional array of Qualifiers (see Qualifier), which represents the qualification in conjunctive normal form (CNF). Conjunctive normal form is an "and'd" set of "or'd" Qualifiers. For example, x = 5 would be represented in pseudo code as:

```
qualifier_cnf[][]   = new Qualifier[1];
qualifier_cnf[0]    = new Qualifier[1];
qualifier_cnf[0][0] = new Qualifier(x = 5)
```

For example, (x = 5) or (y = 6) would be represented in pseudo code as:

```
qualifier_cnf[][]   = new Qualifier[1];
qualifier_cnf[0]    = new Qualifier[2];
qualifier_cnf[0][0] = new Qualifier(x = 5)
qualifier_cnf[0][1] = new Qualifier(y = 6)
```

For example, ((x = 5) or (x = 6)) and ((y = 1) or (y = 2)) would be represented in pseudo code as:

```
qualifier_cnf[][]   = new Qualifier[2];
qualifier_cnf[0]    = new Qualifier[2];
qualifier_cnf[1]    = new Qualifier[2];
qualifier_cnf[0][0] = new Qualifier(x = 5)
qualifier_cnf[0][1] = new Qualifier(x = 6)
qualifier_cnf[1][0] = new Qualifier(y = 1)
qualifier_cnf[1][1] = new Qualifier(y = 2)
```

For each row the CNF qualifier is processed and it is determined whether or not the row should be returned to the caller. The following pseudo-code describes how this is done:

```
if (qualifier != null)
{
    for (int and_clause; and_clause < qualifier.length; and_clause++)
    {
        boolean or_qualifies = false;

        for (int or_clause; or_clause < qualifier[and_clause].length; or_clause++)
        {
            DataValueDescriptor key =
                qualifier[and_clause][or_clause].getOrderable();

            DataValueDescriptor row_col =
                get row column[qualifier[and_clause][or_clause].getColumnId()];

            or_qualifies = row_col.compare(
                qualifier[and_clause][or_clause].getOperator(),
                key,
                qualifier[and_clause][or_clause].getOrderedNulls(),
                qualifier[and_clause][or_clause].getUnknownRV());

            if (or_qualifies)
            {
                break;
            }
        }

        if (!or_qualifies)
        {
            don't return this row to the client - proceed to next row;
        }
    }
}
``` |
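As a concrete illustration of the "x = 5" example above, a hedged sketch of an openScan() call using the partial start/stop keys {{5}, GE} and {{5}, GT}. The eleven-argument openScan() signature is assumed from this interface's documentation, and SQLInteger stands in for whatever DataValueDescriptor type column x actually has:

```java
import org.apache.derby.iapi.error.StandardException;
import org.apache.derby.iapi.services.io.FormatableBitSet;
import org.apache.derby.iapi.store.access.ScanController;
import org.apache.derby.iapi.store.access.TransactionController;
import org.apache.derby.iapi.types.DataValueDescriptor;
import org.apache.derby.iapi.types.SQLInteger;

// Scan the rows satisfying "x = 5" via a partial-key scan.
static ScanController scanXEqualsFive(TransactionController tc, long conglomId)
        throws StandardException {
    DataValueDescriptor[] key = { new SQLInteger(5) };

    return tc.openScan(
        conglomId,
        false,                             // hold: close at commit
        0,                                 // open_mode: read only
        TransactionController.MODE_RECORD,
        TransactionController.ISOLATION_READ_COMMITTED,
        (FormatableBitSet) null,           // fetch all columns
        key,  ScanController.GE,           // start: first row with x >= 5
        null,                              // qualifiers: none needed
        key,  ScanController.GT);          // stop: before first row with x > 5
}
```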
There may (in the future) be multiple sort inserters for a given sort, the idea being that the various threads of a parallel query plan can all insert into the sort. For now, however, only a single sort controller per sort is supported. |
Return an open SortCostController which can be used to ask about the estimated costs of SortController operations. |
In the future, multiple sort scans on the same sort will be supported (for parallel execution across a uniqueness sort in which the order of the resulting rows is not important). Currently, only a single sort scan is allowed per sort. In the future, it will be possible to open a sort scan and start retrieving rows before the last row is inserted. The sort controller would block till rows were available to return. Currently, an attempt to retrieve a row before the sort controller is closed will cause an exception. |
Return an open StoreCostController which can be used to ask about the estimated row counts and costs of ScanController and ConglomerateController operations, on the given conglomerate. |
This call will purge committed deleted rows from the conglomerate; that space will become available for future inserts into the conglomerate. |
This function behaves the same as createConglomerate() (see above) except that it also populates the conglomerate with rows from the row source, and the rows that are inserted are not logged. Individual rows that are loaded into the conglomerate are not logged. After this operation, the underlying database must be backed up with a database backup rather than a transaction log backup (when we have them). This warning is put here for the benefit of future generations. |
if "close_controllers" is true then all conglomerates and scans are closed (held or non-held). If "close_controllers" is false then no cleanup is done by the TransactionController. It is then the responsibility of the caller to close all resources that may have been affected by the statements backed out by the call. This option is meant to be used by the Language implementation of statement level backout, where the system "knows" what could be affected by the scope of the statements executed within the statement. |
A nested user transaction can be used exactly as any other TransactionController, except as follows. For this discussion let the parent transaction be the transaction used to make the startNestedUserTransaction() call, and let the child transaction be the transaction returned by the startNestedUserTransaction() call.

A parent transaction can nest a single read-only transaction and a single separate read/write transaction. If a subsequent nested transaction creation is attempted against the parent prior to destroying an existing nested user transaction of the same type, an exception will be thrown. The nesting is limited to one level deep. An exception will be thrown if a subsequent getNestedUserTransaction() is called on the child transaction.

The locks in the child transaction of a readOnly nested user transaction will be compatible with the locks of the parent transaction. The locks in the child transaction of a non-readOnly nested user transaction will NOT be compatible with those of the parent transaction - this is necessary for correct recovery behavior.

A commit in the child transaction will release locks associated with the child transaction only; work can continue in the parent transaction at this point. Any abort of the child transaction will result in an abort of both the child transaction and the parent transaction, whether initiated by an explicit abort() call or by an exception that results in an abort. A TransactionController.destroy() call should be made on the child transaction once all child work is done and the caller wishes to continue work in the parent transaction.

AccessFactory.getTransaction() will always return the "parent" transaction, never the child transaction. Thus clients using nested user transactions must keep track of the transaction, as there is no interface to query the storage system to get the current child transaction. The idea is that a nested user transaction should be used for a limited amount of work, committed, and then work continues in the parent transaction. Nested user transactions are meant to be used to implement system work that must commit as part of implementing a user's request, but where holding the lock for the duration of the user transaction is not acceptable. Two examples of this are system catalog read locks accumulated while compiling a plan, and auto-increment. Once the first write of a non-readOnly nested transaction is done, the nested user transaction must be committed or aborted before any write operation is attempted in the parent transaction. |
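A hedged sketch of the lifecycle described above. The single-argument startNestedUserTransaction(boolean readOnly) form is an assumption (some versions of this interface take additional arguments):

```java
// System work (e.g. catalog reads during plan compilation) done in a
// nested read-only transaction, so its locks are released without
// committing the parent's work.
TransactionController child = tc.startNestedUserTransaction(true);
try {
    // ... limited amount of work in the child ...
    child.commit();   // releases only the child's locks
} finally {
    child.destroy();  // then continue work in the parent transaction
}
```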