req.version(),
req.futureId(),
req.miniId(),
tx != null && tx.onePhaseCommit(),
entries.size(),

IgnitePair<Collection<GridCacheVersion>> versPair = ctx.tm().versions(req.version());

GridCacheVersion dhtVer = req.dhtVersion(i);

boolean ret = req.returnValue(i) || dhtVer == null || !dhtVer.equals(ver);

/*read-through*/false,
/*update-metrics*/true,
/*event notification*/req.returnValue(i),
CU.subjectId(tx, ctx.shared()),
null,
tx != null ? tx.resolveTaskName() : null,
null,
req.keepBinary());

req.version(),
req.futureId(),
req.miniId(),
false,
entries.size(),
final GridNearLockRequest req,
@Nullable final CacheEntryPredicate[] filter0) {
    final List<KeyCacheObject> keys = req.keys();

    if (req.inTx()) {
        GridCacheVersion dhtVer = ctx.tm().mappedVersion(req.version());

    filter = req.filter();

    if (req.firstClientRequest()) {
        assert nearNode.isClient();

        if (top != null && needRemap(req.topologyVersion(), top.readyTopologyVersion(), req.keys())) {
            if (log.isDebugEnabled()) {
                log.debug("Client topology version mismatch, need remap lock request [" +
                    "reqTopVer=" + req.topologyVersion() +
                    ", locTopVer=" + top.readyTopologyVersion() +
                    ", req=" + req + ']');

    if (req.inTx()) {
        if (tx == null) {
            tx = new GridDhtTxLocal(
                ctx.shared(),
                req.topologyVersion(),
                nearNode.id(),
                req.version(),
                req.futureId(),
                req.miniId(),
                req.threadId(),
/**
 * Adds a key.
 *
 * @param key Key.
 * @param retVal Flag indicating whether value should be returned.
 * @param dhtVer DHT version.
 * @param ctx Context.
 * @throws IgniteCheckedException If failed.
 */
public void addKeyBytes(
    KeyCacheObject key,
    boolean retVal,
    @Nullable GridCacheVersion dhtVer,
    GridCacheContext ctx
) throws IgniteCheckedException {
    dhtVers[idx] = dhtVer;

    // Delegate to super.
    addKeyBytes(key, retVal, ctx);
}
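// For orientation: a hedged usage sketch of the overload above, assembled from
// the mapping fragments elsewhere in this section. The 'req', 'key', 'retval',
// 'dhtVer' and 'cctx' variables are assumptions taken from that context, not
// an authoritative call site.
req.addKeyBytes(
    key,                       // Key to enlist into the near lock request.
    retval && dhtVer == null,  // Request the value only if no DHT version is known yet.
    dhtVer,                    // Last known DHT version, null on first access.
    cctx);                     // Cache context.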
req.cacheId(),
req.version(),
req.futureId(),
req.miniId(),
false,
0,
req.classError(),
null,
false);
/**
 * @param msgs Messages.
 * @param expCnt Expected number of messages.
 */
private void checkClientLockMessages(List<Object> msgs, int expCnt) {
    assertEquals(expCnt, msgs.size());

    assertTrue(((GridNearLockRequest)msgs.get(0)).firstClientRequest());

    for (int i = 1; i < msgs.size(); i++)
        assertFalse(((GridNearLockRequest)msgs.get(i)).firstClientRequest());
}
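// Hedged usage sketch for the helper above: assumes the lock requests were
// captured with Ignite's TestRecordingCommunicationSpi on the client node.
// The 'client' variable and the expected count of 2 are illustrative
// assumptions, not taken from this section.
TestRecordingCommunicationSpi spi =
    (TestRecordingCommunicationSpi)client.configuration().getCommunicationSpi();

spi.record(GridNearLockRequest.class);

// ... run cache operations that acquire locks ...

checkClientLockMessages(spi.recordedMessages(/*stopRecording*/true), 2);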
req = new GridNearLockRequest(
    cctx.cacheId(),
    topVer,

tx.addKeyMapping(txKey, mapping.node());

req.addKeyBytes(
    key,
    retval,
if (!writer.writeHeader(directType(), fieldsCount()))
    return false;
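// The guard above is the header step of Ignite's direct-marshalling write
// path. A hedged sketch of the conventional enclosing pattern follows; the
// field writes are omitted and the method shape is the usual convention, not
// necessarily this class's exact body.
@Override public boolean writeTo(ByteBuffer buf, MessageWriter writer) {
    writer.setBuffer(buf);

    // Let the superclass write its own state first.
    if (!super.writeTo(buf, writer))
        return false;

    // Write the message header exactly once per message.
    if (!writer.isHeaderWritten()) {
        if (!writer.writeHeader(directType(), fieldsCount()))
            return false;

        writer.onHeaderWritten();
    }

    return true;
}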
assertEquals(dhtEntry.version(), req.dhtVersion(0));
msg = new GridNearLockRequest();
/**
 * @param nearNode Near node that sent the request.
 * @param req Near lock request.
 */
private void processNearLockRequest0(ClusterNode nearNode, GridNearLockRequest req) {
    IgniteInternalFuture<?> f;

    if (req.firstClientRequest()) {
        for (;;) {
            if (waitForExchangeFuture(nearNode, req))
                return;

            f = lockAllAsync(ctx, nearNode, req, null);

            if (f != null)
                break;
        }
    }
    else
        f = lockAllAsync(ctx, nearNode, req, null);

    // Register listener just so we print out errors.
    // Exclude lock timeout and rollback exceptions since they are not fatal.
    f.listen(CU.errorLogger(log,
        GridCacheLockTimeoutException.class,
        GridDistributedLockCancelledException.class,
        IgniteTxTimeoutCheckedException.class,
        IgniteTxRollbackCheckedException.class));
}
req = new GridNearLockRequest(
    cctx.cacheId(),
    topVer,

tx.addKeyMapping(txKey, mapping.node());

req.addKeyBytes(
    key,
    retval && dhtVer == null,
assert req.firstClientRequest() : req;
assertTrue(((GridNearLockRequest)msgs.get(0)).firstClientRequest());
assertTrue(((GridNearLockRequest)msgs.get(1)).firstClientRequest());

// Remaining messages must not carry the first-client-request flag; the loop
// bound is reconstructed, since the original fragment left 'i' undeclared.
for (int i = 2; i < msgs.size(); i++)
    assertFalse(((GridNearLockRequest)msgs.get(i)).firstClientRequest());