@InterfaceAudience.Private public class SplitLogManager extends Object
SplitLogManager monitors the tasks that it creates using the timeoutMonitor thread. If a task's progress is slow, SplitLogManagerCoordination.checkTasks() will take the task away from its owner SplitLogWorker and the task will be up for grabs again. When the task is done, it is deleted by SplitLogManager.
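The timeout-monitor idea described above can be sketched in simplified, JDK-only form. This is an illustrative analogy, not HBase code: the class, field, and method names below are invented for the example, and real progress tracking happens through the coordination layer rather than an in-memory map.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch (not HBase code): a periodic monitor pass walks the
// active tasks and puts any task whose owner has not reported progress
// recently back on the unassigned queue, so another worker can grab it.
public class TimeoutMonitorDemo {
    static final long TIMEOUT_MS = 100;
    // task name -> last progress timestamp reported by its owning worker
    static final ConcurrentHashMap<String, Long> lastProgress = new ConcurrentHashMap<>();
    static final BlockingQueue<String> unassigned = new LinkedBlockingQueue<>();

    // One monitor pass: reclaim tasks whose workers look stuck.
    static int checkTasks(long now) {
        int resubmitted = 0;
        for (Map.Entry<String, Long> e : lastProgress.entrySet()) {
            if (now - e.getValue() > TIMEOUT_MS) {
                lastProgress.remove(e.getKey()); // take it away from the owner
                unassigned.add(e.getKey());      // up for grabs again
                resubmitted++;
            }
        }
        return resubmitted;
    }

    public static void main(String[] args) {
        lastProgress.put("split:wal-1", 0L);   // stale: no progress reported
        lastProgress.put("split:wal-2", 950L); // recent progress
        System.out.println(checkTasks(1000L)); // prints 1
    }
}
```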
Clients call splitLogDistributed(Path) to split a region server's log files. The caller thread waits in this method until all the log files have been split.
All the coordination calls made by this class are asynchronous, mainly to reduce the response time seen by callers.
There is a race in this design between the SplitLogManager and the SplitLogWorker: the SplitLogManager might re-queue a task that has in reality already been completed by a SplitLogWorker. We rely on the idempotency of the log-splitting task for correctness.
It is also assumed that every log-splitting task is unique and, once completed (either with success or with error), it will not be submitted again. If a task is resubmitted, there is a risk that an old "delete task" can delete the re-submission.
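The safety argument above, that re-queuing an already-finished task is harmless because the task is idempotent, can be sketched in a self-contained JDK-only example. This is an analogy, not HBase code; the class and method names are invented, and `putIfAbsent` stands in for the coordination layer's record of a completed task.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch (not HBase code): two workers may both run the same
// re-queued split task, but the result is recorded, and its side effect
// applied, at most once, so the duplicate run is harmless.
public class IdempotentTaskDemo {
    // Completed task -> result; putIfAbsent makes completion idempotent.
    static final ConcurrentHashMap<String, String> done = new ConcurrentHashMap<>();
    static final AtomicInteger effects = new AtomicInteger();

    // Simulates splitting one log file; safe to call more than once.
    static String runTask(String logFile) {
        String prev = done.putIfAbsent(logFile, "split:" + logFile);
        if (prev == null) {
            effects.incrementAndGet(); // side effect only on first completion
            return done.get(logFile);
        }
        return prev; // re-queued duplicate: reuse the recorded result
    }

    public static void main(String[] args) {
        // The manager re-queues a task a slow worker has already finished.
        String r1 = runTask("wal-000042");
        String r2 = runTask("wal-000042");
        System.out.println(r1.equals(r2)); // prints true
    }
}
```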
Modifier and Type | Class and Description
---|---
static class | SplitLogManager.ResubmitDirective
static class | SplitLogManager.Task: in-memory state of an active task.
static class | SplitLogManager.TaskBatch: keeps track of the batch of tasks submitted together by a caller in splitLogDistributed().
static class | SplitLogManager.TerminationStatus
Modifier and Type | Field and Description
---|---
static int | DEFAULT_UNASSIGNED_TIMEOUT
Constructor and Description
---
SplitLogManager(MasterServices master, org.apache.hadoop.conf.Configuration conf): it is OK to construct this object even when region servers are not online.
Modifier and Type | Method and Description
---|---
static org.apache.hadoop.fs.FileStatus[] | getFileList(org.apache.hadoop.conf.Configuration conf, List<org.apache.hadoop.fs.Path> logDirs, org.apache.hadoop.fs.PathFilter filter): get a list of paths that need to be split given a set of server-specific directories and optionally a filter.
long | splitLogDistributed(List<org.apache.hadoop.fs.Path> logDirs): the caller will block until all the log files of the given region server have been processed, successfully split or an error encountered, by an available worker region server.
long | splitLogDistributed(org.apache.hadoop.fs.Path logDir)
long | splitLogDistributed(Set<ServerName> serverNames, List<org.apache.hadoop.fs.Path> logDirs, org.apache.hadoop.fs.PathFilter filter): the caller will block until all the hbase:meta log files of the given region server have been processed, successfully split or an error encountered, by an available worker region server.
void | stop()
public static final int DEFAULT_UNASSIGNED_TIMEOUT
public SplitLogManager(MasterServices master, org.apache.hadoop.conf.Configuration conf) throws IOException
Parameters:
master - the master services
conf - the HBase configuration
Throws:
IOException
public static org.apache.hadoop.fs.FileStatus[] getFileList(org.apache.hadoop.conf.Configuration conf, List<org.apache.hadoop.fs.Path> logDirs, org.apache.hadoop.fs.PathFilter filter) throws IOException
See AbstractFSWALProvider.getServerNameFromWALDirectoryName(org.apache.hadoop.conf.Configuration, java.lang.String) for more info on directory layout. Should be package-private, but is needed by WALSplitter.split(Path, Path, Path, FileSystem, Configuration, org.apache.hadoop.hbase.wal.WALFactory) for tests.
Throws:
IOException
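The shape of getFileList, gathering the files of several server-specific directories through an optional filter, can be sketched with plain JDK types. This is an illustrative analogy: the real method works on Hadoop's FileSystem, FileStatus, and PathFilter, while the names below use only java.nio and are invented for the example.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// JDK-only sketch (not the HBase API): collect files from several
// server-specific directories, applying an optional filter and skipping
// directories that no longer exist.
public class FileListDemo {
    static List<Path> getFileList(List<Path> logDirs, Predicate<Path> filter) throws IOException {
        List<Path> result = new ArrayList<>();
        for (Path dir : logDirs) {
            if (!Files.isDirectory(dir)) continue; // server dir may already be gone
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
                for (Path p : stream) {
                    if (filter == null || filter.test(p)) result.add(p);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("wals");
        Files.createFile(dir.resolve("wal-1.log"));
        Files.createFile(dir.resolve("wal-2.meta"));
        // A filter selecting only ".meta" files, analogous to a PathFilter.
        List<Path> metaOnly = getFileList(List.of(dir),
                p -> p.getFileName().toString().endsWith(".meta"));
        System.out.println(metaOnly.size()); // prints 1
    }
}
```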
public long splitLogDistributed(org.apache.hadoop.fs.Path logDir) throws IOException
Parameters:
logDir - one region server WAL dir path in .logs
Throws:
IOException - if there was an error while splitting any log file
public long splitLogDistributed(List<org.apache.hadoop.fs.Path> logDirs) throws IOException
Parameters:
logDirs - List of log dirs to split
Throws:
IOException - if there was an error while splitting any log file

public long splitLogDistributed(Set<ServerName> serverNames, List<org.apache.hadoop.fs.Path> logDirs, org.apache.hadoop.fs.PathFilter filter) throws IOException
Parameters:
logDirs - List of log dirs to split
filter - the Path filter to select specific files for consideration
Throws:
IOException - if there was an error while splitting any log file

public void stop()
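The blocking contract of splitLogDistributed, the caller submits one split task per log file and does not return until every file has been processed, can be sketched with the JDK's executor framework. This is an illustrative analogy, not the HBase implementation: the class and record names are invented, a thread pool stands in for the worker region servers, and the returned total simulates the size of the logs split.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// JDK-only sketch (not HBase code): the calling thread fans one task per
// log file out to a pool of "workers" and blocks until all are processed,
// returning the total bytes handled; any failure propagates to the caller.
public class BlockingSplitDemo {
    record LogFile(String name, long sizeBytes) {}

    static long splitLogDistributed(List<LogFile> logs) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(2);
        try {
            List<Callable<Long>> tasks = logs.stream()
                    .map(f -> (Callable<Long>) () -> f.sizeBytes()) // "split" one file
                    .toList();
            long total = 0;
            for (Future<Long> fut : workers.invokeAll(tasks)) { // blocks until all done
                total += fut.get(); // rethrows if any split failed
            }
            return total;
        } finally {
            workers.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        long total = splitLogDistributed(List.of(
                new LogFile("wal-1", 100), new LogFile("wal-2", 250)));
        System.out.println(total); // prints 350
    }
}
```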
Copyright © 2007–2019 Cloudera. All rights reserved.