@InterfaceAudience.Private public class RpcRetryingCallerWithReadReplicas extends Object
Modifier and Type | Field and Description
---|---
`protected ClusterConnection` | `cConnection`
`protected org.apache.hadoop.conf.Configuration` | `conf`
`protected Get` | `get`
`protected ExecutorService` | `pool`
`protected TableName` | `tableName`
`protected int` | `timeBeforeReplicas`
Constructor and Description
---
`RpcRetryingCallerWithReadReplicas(RpcControllerFactory rpcControllerFactory, TableName tableName, ClusterConnection cConnection, Get get, ExecutorService pool, int retries, int operationTimeout, int rpcTimeout, int timeBeforeReplicas)`
Modifier and Type | Method and Description
---|---
`Result` | `call(int operationTimeout)` — Algo: we put the query into the execution pool.
protected final ExecutorService pool
protected final ClusterConnection cConnection
protected final org.apache.hadoop.conf.Configuration conf
protected final Get get
protected final TableName tableName
protected final int timeBeforeReplicas
public RpcRetryingCallerWithReadReplicas(RpcControllerFactory rpcControllerFactory, TableName tableName, ClusterConnection cConnection, Get get, ExecutorService pool, int retries, int operationTimeout, int rpcTimeout, int timeBeforeReplicas)
public Result call(int operationTimeout) throws DoNotRetryIOException, InterruptedIOException, RetriesExhaustedException
Algo:
- we put the query into the execution pool.
- after x ms, if we don't have a result, we add the queries for the secondary replicas.
- we take the first answer.
- when done, we cancel what's left. Cancelling means:
  - removing from the pool if the actual call was not started;
  - interrupting the call if it has started.

Client side, we need to take into account that:
- a call is not executed immediately after being put into the pool;
- a call is a thread, so let's not multiply the number of threads by the number of replicas.

Server side, if we can cancel the call while it's still in the handler pool, that's much better, as a call can involve some i/o.
Globally, the number of retries, timeouts and so on still apply, but per replica, not globally. We continue until all retries are done, or all timeouts are exceeded.

Copyright © 2007–2019 Cloudera. All rights reserved.
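As a rough illustration of the speculative-read pattern the algorithm above describes, here is a minimal, self-contained sketch using `java.util.concurrent` directly: submit the primary read, wait up to the delay before replicas kick in, then submit a replica read and take whichever answer arrives first, cancelling (and interrupting, if started) the loser. The class name, method, and string payloads are hypothetical and not part of the HBase API; the real caller also layers per-replica retries and timeouts on top of this.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SpeculativeReadSketch {

    // Hypothetical helper, not HBase code: run `primary`, and if it has not
    // answered within `timeBeforeReplicasMs`, also run `replica`; return the
    // first answer and cancel whatever is still pending.
    static String speculativeGet(ExecutorService pool, long timeBeforeReplicasMs,
                                 Callable<String> primary, Callable<String> replica)
            throws Exception {
        ExecutorCompletionService<String> cs = new ExecutorCompletionService<>(pool);
        Future<String> p = cs.submit(primary);

        // Wait briefly for the primary before fanning out to the replica.
        Future<String> done = cs.poll(timeBeforeReplicasMs, TimeUnit.MILLISECONDS);
        Future<String> r = null;
        if (done == null) {
            r = cs.submit(replica);   // primary was slow: add the replica query
            done = cs.take();         // first answer wins
        }

        // Cancel what's left: removed from the pool if not yet started,
        // interrupted (cancel(true)) if already running.
        if (done != p) {
            p.cancel(true);
        }
        if (r != null && done != r) {
            r.cancel(true);
        }
        return done.get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Slow primary (200 ms) vs. an immediate replica: the replica wins
        // and the primary read gets interrupted.
        String result = speculativeGet(pool, 50,
            () -> { Thread.sleep(200); return "primary"; },
            () -> "replica");
        System.out.println(result);
        pool.shutdownNow();
    }
}
```

Note the design point this mirrors from the text: the replica tasks are only submitted after the delay expires, so the common case (a healthy primary) costs one thread, not one per replica.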