Modifier and Type | Method and Description |
---|---|
static Scan |
MetaTableAccessor.getScanForTableName(Connection connection,
TableName tableName)
Deprecated.
|
Modifier and Type | Field and Description |
---|---|
protected Scan |
ClientScanner.scan |
protected Scan |
ScannerCallable.scan |
Modifier and Type | Method and Description |
---|---|
Scan |
Scan.addColumn(byte[] family,
byte[] qualifier)
Get the column from the specified family with the specified qualifier.
|
Scan |
Scan.addFamily(byte[] family)
Get all columns from the specified family.
|
static Scan |
Scan.createScanFromCursor(Cursor cursor)
Create a new Scan with a cursor.
|
protected Scan |
ClientScanner.getScan() |
protected Scan |
ScannerCallable.getScan() |
Scan |
Scan.readAllVersions()
Get all available versions.
|
Scan |
Scan.readVersions(int versions)
Get up to the specified number of versions of each column.
|
Scan |
Scan.setACL(Map<String,Permission> perms) |
Scan |
Scan.setACL(String user,
Permission perms) |
Scan |
Scan.setAllowPartialResults(boolean allowPartialResults)
Set whether the caller wants to see partial results when the server returns
fewer cells than expected.
|
Scan |
Scan.setAsyncPrefetch(boolean asyncPrefetch) |
Scan |
Scan.setAttribute(String name,
byte[] value) |
Scan |
Scan.setAuthorizations(Authorizations authorizations) |
Scan |
Scan.setBatch(int batch)
Set the maximum number of cells to return for each call to next().
|
Scan |
Scan.setCacheBlocks(boolean cacheBlocks)
Set whether blocks should be cached for this Scan.
|
Scan |
Scan.setCaching(int caching)
Set the number of rows for caching that will be passed to scanners.
|
Scan |
Scan.setColumnFamilyTimeRange(byte[] cf,
long minStamp,
long maxStamp) |
Scan |
Scan.setConsistency(Consistency consistency) |
Scan |
Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap)
Set the familyMap.
|
Scan |
Scan.setFilter(Filter filter) |
Scan |
Scan.setId(String id) |
Scan |
Scan.setIsolationLevel(IsolationLevel level) |
Scan |
Scan.setLimit(int limit)
Set the limit of rows for this scan.
|
Scan |
Scan.setLoadColumnFamiliesOnDemand(boolean value) |
Scan |
Scan.setMaxResultSize(long maxResultSize)
Set the maximum result size.
|
Scan |
Scan.setMaxResultsPerColumnFamily(int limit)
Set the maximum number of values to return per row per Column Family
|
Scan |
Scan.setMaxVersions()
Deprecated.
Easily confused with a column family's max versions; use
readAllVersions() instead. |
Scan |
Scan.setMaxVersions(int maxVersions)
Deprecated.
Easily confused with a column family's max versions; use
readVersions(int) instead. |
Scan |
Scan.setNeedCursorResult(boolean needCursorResult)
When the server is slow, or we scan a table with much deleted data, or we use a sparse
filter, the server will respond with a heartbeat to prevent timeout.
|
Scan |
Scan.setOneRowLimit()
Call this when you only want to get one row.
|
Scan |
Scan.setPriority(int priority) |
Scan |
Scan.setRaw(boolean raw)
Enable/disable "raw" mode for this scan.
|
Scan |
Scan.setReadType(Scan.ReadType readType)
Set the read type for this scan.
|
Scan |
Scan.setReplicaId(int Id) |
Scan |
Scan.setReversed(boolean reversed)
Set whether this scan is a reversed one.
|
Scan |
Scan.setRowOffsetPerColumnFamily(int offset)
Set offset for the row per Column Family.
|
Scan |
Scan.setRowPrefixFilter(byte[] rowPrefix)
Set a filter (using stopRow and startRow) so the result set only contains rows where the
rowKey starts with the specified prefix.
|
Scan |
Scan.setScanMetricsEnabled(boolean enabled)
Enable collection of
ScanMetrics. |
Scan |
Scan.setSmall(boolean small)
Deprecated.
since 2.0.0. Use
setLimit(int) and setReadType(ReadType) instead.
For the one-RPC optimization, data is now also fetched on openScanner, and
if the number of rows reaches the limit, the scanner is closed
automatically, falling back to a single RPC. |
Scan |
Scan.setStartRow(byte[] startRow)
Deprecated.
use
withStartRow(byte[]) instead. This method may change the inclusiveness of
the stop row to stay compatible with the old behavior. |
Scan |
Scan.setStopRow(byte[] stopRow)
Deprecated.
use
withStopRow(byte[]) instead. This method may change the inclusiveness of
the stop row to stay compatible with the old behavior. |
Scan |
Scan.setTimeRange(long minStamp,
long maxStamp)
Get versions of columns only within the specified timestamp range,
[minStamp, maxStamp).
|
Scan |
Scan.setTimestamp(long timestamp)
Get versions of columns with the specified timestamp.
|
Scan |
Scan.setTimeStamp(long timestamp)
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0.
Use
setTimestamp(long) instead. |
Scan |
Scan.withStartRow(byte[] startRow)
Set the start row of the scan.
|
Scan |
Scan.withStartRow(byte[] startRow,
boolean inclusive)
Set the start row of the scan.
|
Scan |
Scan.withStopRow(byte[] stopRow)
Set the stop row of the scan.
|
Scan |
Scan.withStopRow(byte[] stopRow,
boolean inclusive)
Set the stop row of the scan.
|
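Nearly every Scan mutator above returns the Scan itself, so scans are usually
configured fluently. A minimal sketch, assuming placeholder row keys and a
column family named "cf":

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanConfigExample {
  public static void main(String[] args) {
    Scan scan = new Scan()
        .withStartRow(Bytes.toBytes("row-0001"))  // start row, inclusive by default
        .withStopRow(Bytes.toBytes("row-0100"))   // stop row, exclusive by default
        .addFamily(Bytes.toBytes("cf"))           // restrict to one column family
        .readVersions(3)                          // up to 3 versions per column
        .setCaching(500)                          // rows transferred per scanner RPC
        .setCacheBlocks(false);                   // skip the block cache for one-off scans
    System.out.println(scan);
  }
}
```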
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.hbase.client.ScanResultCache |
ConnectionUtils.createScanResultCache(Scan scan) |
static long |
PackagePrivateFieldAccessor.getMvccReadPoint(Scan scan) |
ResultScanner |
HTable.getScanner(Scan scan)
The underlying
HTable must not be closed. |
ResultScanner |
AsyncTable.getScanner(Scan scan)
Returns a scanner on the current table as specified by the
Scan object. |
default ResultScanner |
Table.getScanner(Scan scan)
Returns a scanner on the current table as specified by the
Scan
object. |
protected void |
AbstractClientScanner.initScanMetrics(Scan scan)
Check and initialize if the application wants to collect scan metrics.
|
void |
AsyncTable.scan(Scan scan,
C consumer)
The scan API uses the observer pattern.
|
CompletableFuture<List<Result>> |
AsyncTable.scanAll(Scan scan)
Return all the results that match the given scan object.
|
static void |
PackagePrivateFieldAccessor.setMvccReadPoint(Scan scan,
long mvccReadPoint) |
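A short usage sketch for Table.getScanner(Scan); the table name "my_table" and
family "cf" are placeholders, and the scanner is closed by try-with-resources
since ResultScanner is Closeable:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScannerExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    Scan scan = new Scan().addFamily(Bytes.toBytes("cf"));
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) {
        // Each Result holds the cells of one row matched by the scan.
        System.out.println(Bytes.toStringBinary(result.getRow()));
      }
    }
  }
}
```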
Constructor and Description |
---|
ClientAsyncPrefetchScanner(org.apache.hadoop.conf.Configuration configuration,
Scan scan,
TableName name,
ClusterConnection connection,
RpcRetryingCallerFactory rpcCallerFactory,
RpcControllerFactory rpcControllerFactory,
ExecutorService pool,
int replicaCallTimeoutMicroSecondScan) |
ClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout)
Create a new ClientScanner for the specified table. Note that the passed
Scan's start row may be changed. |
ClientSideRegionScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path rootDir,
TableDescriptor htd,
RegionInfo hri,
Scan scan,
ScanMetrics scanMetrics) |
ClientSimpleScanner(org.apache.hadoop.conf.Configuration configuration,
Scan scan,
TableName name,
ClusterConnection connection,
RpcRetryingCallerFactory rpcCallerFactory,
RpcControllerFactory rpcControllerFactory,
ExecutorService pool,
int replicaCallTimeoutMicroSecondScan) |
ReversedClientScanner(org.apache.hadoop.conf.Configuration conf,
Scan scan,
TableName tableName,
ClusterConnection connection,
RpcRetryingCallerFactory rpcFactory,
RpcControllerFactory controllerFactory,
ExecutorService pool,
int primaryOperationTimeout)
Create a new ReversedClientScanner for the specified table. Note that the passed
Scan's start row may be changed. |
ReversedScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcFactory) |
ReversedScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcFactory,
int replicaId) |
Scan(Scan scan)
Creates a new instance of this class while copying all values.
|
ScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcControllerFactory) |
ScannerCallable(ClusterConnection connection,
TableName tableName,
Scan scan,
ScanMetrics scanMetrics,
RpcControllerFactory rpcControllerFactory,
int id) |
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.Path rootDir,
org.apache.hadoop.fs.Path restoreDir,
String snapshotName,
Scan scan) |
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.Path rootDir,
org.apache.hadoop.fs.Path restoreDir,
String snapshotName,
Scan scan,
boolean snapshotAlreadyRestored)
Creates a TableSnapshotScanner.
|
TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.Path restoreDir,
String snapshotName,
Scan scan)
Creates a TableSnapshotScanner.
|
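The Scan(Scan) copy constructor and TableSnapshotScanner combine to read a
snapshot directly from the filesystem, bypassing region servers. A sketch,
assuming a hypothetical snapshot "my_snapshot" and a scratch restore directory:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.TableSnapshotScanner;

public class SnapshotScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Scan template = new Scan();
    Scan scan = new Scan(template);  // copy constructor: clones all scan settings
    // The restore directory must be on the same filesystem as the HBase root dir.
    try (TableSnapshotScanner scanner =
        new TableSnapshotScanner(conf, new Path("/tmp/snapshot-restore"),
            "my_snapshot", scan)) {
      for (Result r : scanner) {
        // Rows are read from the restored snapshot files.
      }
    }
  }
}
```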
Modifier and Type | Method and Description |
---|---|
static <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AsyncAggregationClient.avg(AsyncTable<?> table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan) |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.avg(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the average method for
a given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.avg(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the average method for
a given cf-cq combination.
|
static <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AsyncAggregationClient.max(AsyncTable<?> table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan) |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.max(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the maximum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.max(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the maximum value of a column for a given column family for the
given range.
|
static <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AsyncAggregationClient.median(AsyncTable<AdvancedScanResultConsumer> table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan) |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.median(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handler for calling the median method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.median(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handler for calling the median method for a
given cf-cq combination.
|
static <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AsyncAggregationClient.min(AsyncTable<?> table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan) |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.min(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the minimum value of a column for a given column family for the
given range.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.min(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the minimum value of a column for a given column family for the
given range.
|
static <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AsyncAggregationClient.rowCount(AsyncTable<?> table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan) |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.rowCount(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the row count, by summing up the individual results obtained from
regions.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.rowCount(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It gives the row count, by summing up the individual results obtained from
regions.
|
static <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AsyncAggregationClient.std(AsyncTable<?> table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan) |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.std(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the std method for a
given cf-cq combination.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.std(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
This is the client side interface/handle for calling the std method for a
given cf-cq combination.
|
static <R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AsyncAggregationClient.sum(AsyncTable<?> table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan) |
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.sum(Table table,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It sums up the value returned from various regions.
|
<R,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,T extends com.google.protobuf.Message> |
AggregationClient.sum(TableName tableName,
ColumnInterpreter<R,S,P,Q,T> ci,
Scan scan)
It sums up the value returned from various regions.
|
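The AggregationClient calls above all take the same shape: a table handle, a
ColumnInterpreter, and a Scan naming the column family (and optionally the
qualifier) to aggregate. A sketch of rowCount, assuming the
AggregateImplementation coprocessor is loaded on the placeholder table
"my_table":

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowCountExample {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    Scan scan = new Scan().addFamily(Bytes.toBytes("cf"));  // a family is required
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) {
      AggregationClient client = new AggregationClient(conf);
      long rows = client.rowCount(table, new LongColumnInterpreter(), scan);
      System.out.println("row count = " + rows);
    }
  }
}
```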
Modifier and Type | Method and Description |
---|---|
default RegionScanner |
RegionObserver.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s)
Called after the client opens a new scanner.
|
default void |
RegionObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan)
Called before the client opens a new scanner.
|
static Map<byte[],Export.Response> |
Export.run(org.apache.hadoop.conf.Configuration conf,
TableName tableName,
Scan scan,
org.apache.hadoop.fs.Path dir) |
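preScannerOpen lets a coprocessor adjust the client's Scan before the region
opens its scanner. A minimal sketch; the observer class is hypothetical, and
the coprocessor must still be registered on the table or in hbase-site.xml:

```java
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;

// Hypothetical observer that caps how many versions any scan may read.
public class VersionCappingObserver implements RegionCoprocessor, RegionObserver {
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan)
      throws IOException {
    scan.readVersions(1);  // mutate the Scan in place before the scanner opens
  }
}
```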
Modifier and Type | Method and Description |
---|---|
void |
ScanModifyingObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan) |
Modifier and Type | Method and Description |
---|---|
boolean |
HalfStoreFileReader.passesKeyRangeFilter(Scan scan) |
Modifier and Type | Method and Description |
---|---|
static void |
TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans,
Class<? extends TableMap> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapred.JobConf job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir)
Sets up the job for reading from one or more table snapshots, with one or more scans
per snapshot.
|
static void |
MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration conf,
Map<String,Collection<Scan>> snapshotScans,
org.apache.hadoop.fs.Path restoreDir)
Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of
restoreDir.
|
Constructor and Description |
---|
TableSnapshotRegionSplit(HTableDescriptor htd,
HRegionInfo regionInfo,
List<String> locations,
Scan scan,
org.apache.hadoop.fs.Path restoreDir) |
Modifier and Type | Method and Description |
---|---|
static Scan |
TableMapReduceUtil.convertStringToScan(String base64)
Converts the given Base64 string back into a Scan instance.
|
static Scan |
TableInputFormat.createScanFromConfiguration(org.apache.hadoop.conf.Configuration conf)
Sets up a
Scan instance, applying settings from the configuration property
constants defined in TableInputFormat. |
static Scan |
TableSnapshotInputFormatImpl.extractScanFromConf(org.apache.hadoop.conf.Configuration conf) |
Scan |
TableSplit.getScan()
Returns a Scan object from the stored string representation.
|
Scan |
TableInputFormatBase.getScan()
Gets the scan defining the actual details like columns etc.
|
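convertStringToScan pairs with TableMapReduceUtil.convertScanToString (listed
in a later table): scans travel through job configurations as Base64-encoded
protobuf. A small round-trip sketch:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanSerdeExample {
  public static void main(String[] args) throws Exception {
    Scan scan = new Scan().addFamily(Bytes.toBytes("cf")).setCaching(200);
    String encoded = TableMapReduceUtil.convertScanToString(scan);  // Base64 protobuf
    Scan decoded = TableMapReduceUtil.convertStringToScan(encoded);
    System.out.println(decoded.getCaching());  // 200
  }
}
```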
Modifier and Type | Method and Description |
---|---|
static Triple<TableName,Scan,org.apache.hadoop.fs.Path> |
ExportUtils.getArgumentsFromCommandLine(org.apache.hadoop.conf.Configuration conf,
String[] args) |
protected List<Scan> |
MultiTableInputFormatBase.getScans()
Allows subclasses to get the list of
Scan objects. |
Map<String,Collection<Scan>> |
MultiTableSnapshotInputFormatImpl.getSnapshotsToScans(org.apache.hadoop.conf.Configuration conf)
Retrieve the snapshot name -> list<scan> mapping pushed to configuration by
MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration, java.util.Map) |
Modifier and Type | Method and Description |
---|---|
static void |
TableInputFormat.addColumns(Scan scan,
byte[][] columns)
Adds an array of columns specified using old format, family:qualifier.
|
static String |
TableMapReduceUtil.convertScanToString(Scan scan)
Writes the given scan into a Base64 encoded string.
|
static List<TableSnapshotInputFormatImpl.InputSplit> |
TableSnapshotInputFormatImpl.getSplits(Scan scan,
SnapshotManifest manifest,
List<HRegionInfo> regionManifests,
org.apache.hadoop.fs.Path restoreDir,
org.apache.hadoop.conf.Configuration conf) |
static List<TableSnapshotInputFormatImpl.InputSplit> |
TableSnapshotInputFormatImpl.getSplits(Scan scan,
SnapshotManifest manifest,
List<HRegionInfo> regionManifests,
org.apache.hadoop.fs.Path restoreDir,
org.apache.hadoop.conf.Configuration conf,
RegionSplitter.SplitAlgorithm sa,
int numSplits) |
static void |
IdentityTableMapper.initJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
GroupingTableMapper.initJob(String table,
Scan scan,
String groupColumns,
Class<? extends TableMapper> mapper,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(byte[] table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
boolean initCredentials,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(String table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(TableName table,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a TableMap job.
|
static void |
TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir)
Sets up the job for reading from a table snapshot.
|
static void |
TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName,
Scan scan,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir,
RegionSplitter.SplitAlgorithm splitAlgo,
int numSplitsPerRegion)
Sets up the job for reading from a table snapshot.
|
void |
TableInputFormatBase.setScan(Scan scan)
Sets the scan defining the actual details like columns etc.
|
void |
TableRecordReaderImpl.setScan(Scan scan)
Sets the scan defining the actual details like columns etc.
|
void |
TableRecordReader.setScan(Scan scan)
Sets the scan defining the actual details like columns etc.
|
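A typical single-table mapper setup using one of the initTableMapperJob
overloads above; the table name is a placeholder, and the caching and
block-cache settings follow the usual advice for MapReduce scans:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class TableMapperJobSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "scan-my_table");
    Scan scan = new Scan();
    scan.setCaching(500);        // fewer RPC round trips for large scans
    scan.setCacheBlocks(false);  // don't pollute the block cache from MR jobs
    TableMapReduceUtil.initTableMapperJob("my_table", scan,
        IdentityTableMapper.class, ImmutableBytesWritable.class, Result.class, job);
    job.setOutputFormatClass(NullOutputFormat.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```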
Modifier and Type | Method and Description |
---|---|
static void |
TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<Scan>> snapshotScans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
org.apache.hadoop.fs.Path tmpRestoreDir)
Sets up the job for reading from one or more table snapshots, with one or more scans
per snapshot.
|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job)
Use this before submitting a Multi TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars)
Use this before submitting a Multi TableMap job.
|
static void |
TableMapReduceUtil.initTableMapperJob(List<Scan> scans,
Class<? extends TableMapper> mapper,
Class<?> outputKeyClass,
Class<?> outputValueClass,
org.apache.hadoop.mapreduce.Job job,
boolean addDependencyJars,
boolean initCredentials)
Use this before submitting a Multi TableMap job.
|
static void |
MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration configuration,
Map<String,Collection<Scan>> snapshotScans,
org.apache.hadoop.fs.Path tmpRestoreDir) |
void |
MultiTableSnapshotInputFormatImpl.setInput(org.apache.hadoop.conf.Configuration conf,
Map<String,Collection<Scan>> snapshotScans,
org.apache.hadoop.fs.Path restoreDir)
Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of
restoreDir.
|
protected void |
MultiTableInputFormatBase.setScans(List<Scan> scans)
Allows subclasses to set the list of
Scan objects. |
void |
MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration conf,
Map<String,Collection<Scan>> snapshotScans)
Push snapshotScans to conf (under the key
MultiTableSnapshotInputFormatImpl.SNAPSHOT_TO_SCANS_KEY) |
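For the List<Scan> overloads of initTableMapperJob, each Scan must identify its
source table; the usual convention is the Scan.SCAN_ATTRIBUTES_TABLE_NAME
attribute consumed by MultiTableInputFormat (an assumption here, since that
constant is not listed above). A sketch with placeholder table names:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class MultiTableMapperSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "multi-table-scan");
    List<Scan> scans = new ArrayList<>();
    for (String name : new String[] { "table_a", "table_b" }) {
      Scan scan = new Scan();
      // Tag each Scan with the table it should read from.
      scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME,
          TableName.valueOf(name).getName());
      scans.add(scan);
    }
    TableMapReduceUtil.initTableMapperJob(scans, IdentityTableMapper.class,
        ImmutableBytesWritable.class, Result.class, job);
  }
}
```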
Constructor and Description |
---|
InputSplit(TableDescriptor htd,
HRegionInfo regionInfo,
List<String> locations,
Scan scan,
org.apache.hadoop.fs.Path restoreDir) |
TableSnapshotRegionSplit(HTableDescriptor htd,
HRegionInfo regionInfo,
List<String> locations,
Scan scan,
org.apache.hadoop.fs.Path restoreDir) |
TableSplit(TableName tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location)
Creates a new instance while assigning all variables.
|
TableSplit(TableName tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location,
long length)
Creates a new instance while assigning all variables.
|
TableSplit(TableName tableName,
Scan scan,
byte[] startRow,
byte[] endRow,
String location,
String encodedRegionName,
long length)
Creates a new instance while assigning all variables.
|
Modifier and Type | Method and Description |
---|---|
static boolean |
MobUtils.isCacheMobBlocks(Scan scan)
Indicates whether the scan requests caching of blocks.
|
static boolean |
MobUtils.isRawMobScan(Scan scan)
Indicates whether it's a raw scan.
|
static boolean |
MobUtils.isReadEmptyValueOnMobCellMiss(Scan scan)
Indicates whether to return a null value when the mob file is missing or corrupt.
|
static boolean |
MobUtils.isRefOnlyScan(Scan scan)
Indicates whether it's a reference only scan.
|
static void |
MobUtils.setCacheMobBlocks(Scan scan,
boolean cacheBlocks)
Sets the attribute of caching blocks in the scan.
|
Modifier and Type | Method and Description |
---|---|
static Scan |
ProtobufUtil.toScan(ClientProtos.Scan proto)
Convert a protocol buffer Scan to a client Scan.
|
Modifier and Type | Method and Description |
---|---|
static ClientProtos.Scan |
ProtobufUtil.toScan(Scan scan)
Convert a client Scan to a protocol buffer Scan.
|
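The two toScan overloads above are inverses, so a Scan round-trips through its
protobuf wire form. A small sketch:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanProtoRoundTrip {
  public static void main(String[] args) throws Exception {
    Scan scan = new Scan().withStartRow(Bytes.toBytes("a")).setLimit(10);
    ClientProtos.Scan proto = ProtobufUtil.toScan(scan);  // client -> wire form
    Scan decoded = ProtobufUtil.toScan(proto);            // wire form -> client
    System.out.println(Bytes.toStringBinary(decoded.getStartRow()));  // "a"
  }
}
```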
Modifier and Type | Method and Description |
---|---|
static Scan |
QuotaTableUtil.makeQuotaSnapshotScan()
Creates a
Scan which returns only quota snapshots from the quota table. |
static Scan |
QuotaTableUtil.makeQuotaSnapshotScanForTable(TableName tn)
Creates a
Scan which returns only SpaceQuotaSnapshot from the quota table for a
specific table. |
static Scan |
QuotaTableUtil.makeScan(QuotaFilter filter) |
Modifier and Type | Class and Description |
---|---|
class |
InternalScan
Special scanner, currently used for increment operations to
allow additional server-side arguments for Scan operations.
|
Modifier and Type | Method and Description |
---|---|
protected KeyValueScanner |
HMobStore.createScanner(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> targetCols,
long readPt)
Gets the MobStoreScanner or MobReversedStoreScanner.
|
protected KeyValueScanner |
HStore.createScanner(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> targetCols,
long readPt) |
org.apache.hadoop.hbase.regionserver.HRegion.RegionScannerImpl |
HRegion.getScanner(Scan scan) |
RegionScanner |
Region.getScanner(Scan scan)
Return an iterator that scans over the HRegion, returning the indicated
columns and rows specified by the
Scan. |
org.apache.hadoop.hbase.regionserver.HRegion.RegionScannerImpl |
HRegion.getScanner(Scan scan,
List<KeyValueScanner> additionalScanners) |
RegionScanner |
Region.getScanner(Scan scan,
List<KeyValueScanner> additionalScanners)
Return an iterator that scans over the HRegion, returning the indicated columns and rows
specified by the
Scan. |
KeyValueScanner |
HStore.getScanner(Scan scan,
NavigableSet<byte[]> targetCols,
long readPt)
Return a scanner for both the memstore and the HStore files.
|
protected RegionScanner |
HRegion.instantiateRegionScanner(Scan scan,
List<KeyValueScanner> additionalScanners) |
protected org.apache.hadoop.hbase.regionserver.HRegion.RegionScannerImpl |
HRegion.instantiateRegionScanner(Scan scan,
List<KeyValueScanner> additionalScanners,
long nonceGroup,
long nonce) |
boolean |
StoreFileReader.passesKeyRangeFilter(Scan scan)
Checks whether the given scan's rowkey range overlaps with the current storefile's key range.
|
RegionScanner |
RegionCoprocessorHost.postScannerOpen(Scan scan,
RegionScanner s) |
void |
RegionCoprocessorHost.preScannerOpen(Scan scan) |
boolean |
SegmentScanner.shouldUseScanner(Scan scan,
HStore store,
long oldestUnexpiredTS)
This functionality should be resolved at a higher level (MemStoreScanner);
currently returns true by default.
|
boolean |
NonLazyKeyValueScanner.shouldUseScanner(Scan scan,
HStore store,
long oldestUnexpiredTS) |
boolean |
StoreFileScanner.shouldUseScanner(Scan scan,
HStore store,
long oldestUnexpiredTS) |
boolean |
KeyValueScanner.shouldUseScanner(Scan scan,
HStore store,
long oldestUnexpiredTS)
Allows filtering out scanners (both StoreFile and memstore) that we don't
want to use, based on criteria such as Bloom filters and timestamp ranges.
|
Constructor and Description |
---|
InternalScan(Scan scan) |
MobStoreScanner(HStore store,
ScanInfo scanInfo,
Scan scan,
NavigableSet<byte[]> columns,
long readPt) |
ReversedStoreScanner(HStore store,
ScanInfo scanInfo,
Scan scan,
NavigableSet<byte[]> columns,
long readPt)
Opens a scanner across memstore, snapshot, and all StoreFiles.
|
ReversedStoreScanner(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
List<? extends KeyValueScanner> scanners)
Constructor for testing.
|
StoreScanner(HStore store,
ScanInfo scanInfo,
Scan scan,
NavigableSet<byte[]> columns,
long readPt)
Opens a scanner across memstore, snapshot, and all StoreFiles.
|
Modifier and Type | Method and Description |
---|---|
static RawScanQueryMatcher |
RawScanQueryMatcher.create(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
boolean hasNullColumn,
long oldestUnexpiredTS,
long now) |
static NormalUserScanQueryMatcher |
NormalUserScanQueryMatcher.create(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
DeleteTracker deletes,
boolean hasNullColumn,
long oldestUnexpiredTS,
long now) |
static UserScanQueryMatcher |
UserScanQueryMatcher.create(Scan scan,
ScanInfo scanInfo,
NavigableSet<byte[]> columns,
long oldestUnexpiredTS,
long now,
RegionCoprocessorHost regionCoprocessorHost) |
protected static Pair<DeleteTracker,ColumnTracker> |
ScanQueryMatcher.getTrackers(RegionCoprocessorHost host,
NavigableSet<byte[]> columns,
ScanInfo scanInfo,
long oldestUnexpiredTS,
Scan userScan) |
Constructor and Description |
---|
NormalUserScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
boolean hasNullColumn,
DeleteTracker deletes,
long oldestUnexpiredTS,
long now) |
RawScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
boolean hasNullColumn,
long oldestUnexpiredTS,
long now) |
UserScanQueryMatcher(Scan scan,
ScanInfo scanInfo,
ColumnTracker columns,
boolean hasNullColumn,
long oldestUnexpiredTS,
long now) |
Modifier and Type | Method and Description |
---|---|
ResultScanner |
RemoteHTable.getScanner(Scan scan) |
Modifier and Type | Method and Description |
---|---|
static ScannerModel |
ScannerModel.fromScan(Scan scan) |
Modifier and Type | Method and Description |
---|---|
RegionScanner |
AccessController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s) |
void |
AccessController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan) |
Modifier and Type | Method and Description |
---|---|
RegionScanner |
VisibilityController.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
Scan scan,
RegionScanner s) |
void |
VisibilityController.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e,
Scan scan) |
Modifier and Type | Method and Description |
---|---|
static Scan |
ProtobufUtil.toScan(ClientProtos.Scan proto)
Convert a protocol buffer Scan to a client Scan.
|
Modifier and Type | Method and Description |
---|---|
static ClientProtos.ScanRequest |
RequestConverter.buildScanRequest(byte[] regionName,
Scan scan,
int numberOfRows,
boolean closeScanner)
Create a protocol buffer ScanRequest for a client Scan.
|
static ClientProtos.Scan |
ProtobufUtil.toScan(Scan scan)
Convert a client Scan to a protocol buffer Scan.
|
Modifier and Type | Method and Description |
---|---|
static Scan |
ThriftUtilities.scanFromThrift(TScan in) |