@InterfaceAudience.Private
public class FSTableDescriptors extends Object implements TableDescriptors

Implementation of TableDescriptors that reads descriptors from the passed filesystem. It expects descriptors to be in a file in the TABLEINFO_DIR subdir of the table's directory in the filesystem. It can be read-only -- i.e. it never modifies the filesystem -- or read-write. It also has utilities for maintaining the table descriptor's tableinfo file.

The table schema file is kept in the TABLEINFO_DIR subdir of the table directory in the filesystem. Its name is TABLEINFO_FILE_PREFIX followed by a suffix that is the edit sequence id, e.g. .tableinfo.0000000003. This sequence id is always increasing and starts at zero. The table schema file with the highest sequence id holds the most recent schema edit. Usually there is only one file, the most recent, but there may be short periods where more than one exists. Old files are eventually cleaned up. The presumption is that there will not be many concurrent clients making table schema edits; if that changes, the code below needs some reworking and perhaps supporting API in HDFS.
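The naming scheme above can be sketched in plain Java. This is an illustrative sketch, not the HBase source: the 10-digit zero padding is an assumption inferred from the .tableinfo.0000000003 example, and the helper names are hypothetical.

```java
import java.util.Locale;

// Sketch of the tableinfo file naming scheme described above: a fixed
// prefix plus a zero-padded, monotonically increasing edit sequence id.
public class TableInfoFileName {
    static final String TABLEINFO_FILE_PREFIX = ".tableinfo";

    // e.g. sequence id 3 -> ".tableinfo.0000000003"
    // (10-digit padding is assumed from the example in the class comment)
    static String fileName(long sequenceId) {
        return String.format(Locale.ROOT, "%s.%010d", TABLEINFO_FILE_PREFIX, sequenceId);
    }

    // Parse the sequence id back out of a file name; returns -1 when there
    // is no numeric suffix (the pre-sequence-number layout).
    static long sequenceId(String name) {
        int dot = name.lastIndexOf('.');
        String suffix = name.substring(dot + 1);
        if (suffix.isEmpty() || !suffix.chars().allMatch(Character::isDigit)) {
            return -1L;
        }
        return Long.parseLong(suffix);
    }

    // A schema edit writes a new file named with the current highest
    // sequence id plus one, so the id only ever increases.
    static String nextFileName(String currentName) {
        return fileName(sequenceId(currentName) + 1);
    }
}
```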
| Constructor and Description |
| --- |
| `FSTableDescriptors(org.apache.hadoop.conf.Configuration conf)` Construct a FSTableDescriptors instance using the hbase root dir of the given conf and the filesystem where that root dir lives. |
| `FSTableDescriptors(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootdir)` |
| `FSTableDescriptors(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootdir, boolean fsreadonly, boolean usecache)` |
| `FSTableDescriptors(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootdir, boolean fsreadonly, boolean usecache, Function<TableDescriptorBuilder,TableDescriptorBuilder> metaObserver)` |
| Modifier and Type | Method and Description |
| --- | --- |
| `void` | `add(TableDescriptor htd)` Adds (or updates) the table descriptor to the FileSystem and updates the local cache with it. |
| `static TableDescriptor` | `createMetaTableDescriptor(org.apache.hadoop.conf.Configuration conf)` |
| `static TableDescriptorBuilder` | `createMetaTableDescriptorBuilder(org.apache.hadoop.conf.Configuration conf)` |
| `boolean` | `createTableDescriptor(TableDescriptor htd)` Create a new TableDescriptor in HDFS. |
| `boolean` | `createTableDescriptor(TableDescriptor htd, boolean forceCreation)` Create a new TableDescriptor in HDFS. |
| `boolean` | `createTableDescriptorForTableDirectory(org.apache.hadoop.fs.Path tableDir, TableDescriptor htd, boolean forceCreation)` Create a new TableDescriptor in HDFS in the specified table directory. |
| `void` | `deleteTableDescriptorIfExists(TableName tableName)` Deletes all the table descriptor files from the file system. |
| `TableDescriptor` | `get(TableName tablename)` Get the current table descriptor for the given table, or null if none exists. |
| `Map<String,TableDescriptor>` | `getAll()` Returns a map from table name to table descriptor for all tables. |
| `Map<String,TableDescriptor>` | `getByNamespace(String name)` Find descriptors by namespace. |
| `static TableDescriptor` | `getTableDescriptorFromFs(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path tableDir)` Returns the latest table descriptor for the table located at the given directory directly from the file system if it exists. |
| `static TableDescriptor` | `getTableDescriptorFromFs(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName)` Returns the latest table descriptor for the given table directly from the file system if it exists, bypassing the local cache. |
| `static org.apache.hadoop.fs.FileStatus` | `getTableInfoPath(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path tableDir)` Find the most current table info file for the table located in the given table directory. |
| `boolean` | `isTableInfoExists(TableName tableName)` Checks if a current table info file exists for the given table. |
| `boolean` | `isUsecache()` |
| `TableDescriptor` | `remove(TableName tablename)` Removes the table descriptor from the local cache and returns it. |
| `void` | `setCacheOff()` Disables the table descriptor cache. |
| `void` | `setCacheOn()` Enables the table descriptor cache. |
public FSTableDescriptors(org.apache.hadoop.conf.Configuration conf) throws IOException

Throws: IOException
public FSTableDescriptors(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootdir) throws IOException

Throws: IOException
public FSTableDescriptors(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootdir, boolean fsreadonly, boolean usecache) throws IOException

Parameters: fsreadonly - True if we are read-only when it comes to filesystem operations; i.e. on remove, we do not delete in the fs.
Throws: IOException
public FSTableDescriptors(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootdir, boolean fsreadonly, boolean usecache, Function<TableDescriptorBuilder,TableDescriptorBuilder> metaObserver) throws IOException

Parameters:
fsreadonly - True if we are read-only when it comes to filesystem operations; i.e. on remove, we do not delete in the fs.
metaObserver - Used by HMaster, which needs to modify META_REPLICAS_NUM for the meta table descriptor; see HMaster#finishActiveMasterInitialization. TODO: This is a workaround and should be removed.
Throws: IOException
public static TableDescriptorBuilder createMetaTableDescriptorBuilder(org.apache.hadoop.conf.Configuration conf) throws IOException

Throws: IOException

public static TableDescriptor createMetaTableDescriptor(org.apache.hadoop.conf.Configuration conf) throws IOException

Throws: IOException
public void setCacheOn() throws IOException

Specified by: setCacheOn in interface TableDescriptors
Throws: IOException

public void setCacheOff() throws IOException

Specified by: setCacheOff in interface TableDescriptors
Throws: IOException
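The cache-toggle behavior that setCacheOn()/setCacheOff() describe can be sketched with a generic stand-in class. This is an illustrative sketch, not the HBase implementation: `ToggleableCache` and its `loader` parameter are hypothetical names, and the real class keys its cache differently.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a descriptor cache that can be switched on and off. When the
// cache is off, every lookup falls through to the backing store (here a
// caller-supplied loader standing in for a filesystem read).
public class ToggleableCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private boolean usecache;

    // Enabling or disabling the cache starts from an empty cache, so no
    // stale entries survive a toggle.
    public void setCacheOn()  { cache.clear(); usecache = true;  }
    public void setCacheOff() { cache.clear(); usecache = false; }
    public boolean isUsecache() { return usecache; }

    public V get(K key, Function<K, V> loader) {
        if (usecache && cache.containsKey(key)) {
            return cache.get(key); // served from cache, no backing-store read
        }
        V value = loader.apply(key); // e.g. read the tableinfo file from fs
        if (usecache) {
            cache.put(key, value);
        }
        return value;
    }
}
```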
public boolean isUsecache()
@Nullable
public TableDescriptor get(TableName tablename) throws IOException

Specified by: get in interface TableDescriptors
Throws: IOException
public Map<String,TableDescriptor> getAll() throws IOException

Specified by: getAll in interface TableDescriptors
Throws: IOException

public Map<String,TableDescriptor> getByNamespace(String name) throws IOException

Specified by: getByNamespace in interface TableDescriptors
Throws: IOException
See Also: get(org.apache.hadoop.hbase.TableName)
public void add(TableDescriptor htd) throws IOException

Specified by: add in interface TableDescriptors
Parameters: htd - Descriptor to set into TableDescriptors
Throws: IOException
public TableDescriptor remove(TableName tablename) throws IOException

Specified by: remove in interface TableDescriptors
Throws: IOException
public boolean isTableInfoExists(TableName tableName) throws IOException

Parameters: tableName - name of table
Throws: IOException
public static org.apache.hadoop.fs.FileStatus getTableInfoPath(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path tableDir) throws IOException

Looks under the TABLEINFO_DIR subdirectory of the given directory for any table info files and takes the 'current' one - meaning the one with the highest sequence number if present, or one with no sequence number at all if none exist (for backward compatibility from before there were sequence numbers).
Throws: IOException
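The selection rule above can be sketched over plain file names. This is an illustrative sketch, not the HBase source: the real method works on FileStatus objects, and `seqOf`/`current` are hypothetical helpers.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Optional;

// Sketch of the "current" table info file selection rule: the file with the
// highest numeric sequence suffix wins, and a bare ".tableinfo" file with no
// suffix (the pre-sequence-number layout) ranks lowest.
public class CurrentTableInfo {
    // Sequence id encoded in the suffix, or -1 for a suffix-less file so it
    // sorts below any file that does carry a sequence number.
    static long seqOf(String name) {
        int dot = name.lastIndexOf('.');
        String suffix = name.substring(dot + 1);
        return !suffix.isEmpty() && suffix.chars().allMatch(Character::isDigit)
                ? Long.parseLong(suffix) : -1L;
    }

    // Pick the current file among the table info files found in TABLEINFO_DIR;
    // empty when the directory holds no table info files at all.
    static Optional<String> current(String... tableInfoFiles) {
        return Arrays.stream(tableInfoFiles)
                .max(Comparator.comparingLong(CurrentTableInfo::seqOf));
    }
}
```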
public static TableDescriptor getTableDescriptorFromFs(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, TableName tableName) throws IOException

Throws: IOException

public static TableDescriptor getTableDescriptorFromFs(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path tableDir) throws IOException

Throws:
TableInfoMissingException - if there is no descriptor
IOException
public void deleteTableDescriptorIfExists(TableName tableName) throws IOException

Throws:
org.apache.commons.lang3.NotImplementedException - if in read-only mode
IOException
public boolean createTableDescriptor(TableDescriptor htd) throws IOException

Throws: IOException

public boolean createTableDescriptor(TableDescriptor htd, boolean forceCreation) throws IOException

Throws: IOException
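The forceCreation contract of the create methods can be sketched with an in-memory stand-in for the table directory. This is an illustrative sketch under assumed semantics (create returns false and leaves the existing descriptor alone unless forced); `DescriptorStore` and its methods are hypothetical names, not HBase API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of create-if-absent-unless-forced semantics for descriptor files,
// using a map keyed by table directory in place of the filesystem.
public class DescriptorStore {
    private final Map<String, String> byTableDir = new HashMap<>();

    // Returns true when a descriptor was written; false when one already
    // exists and forceCreation is false.
    public boolean createDescriptor(String tableDir, String descriptor, boolean forceCreation) {
        if (byTableDir.containsKey(tableDir) && !forceCreation) {
            return false; // keep the existing descriptor, write nothing
        }
        byTableDir.put(tableDir, descriptor); // write (or overwrite) the file
        return true;
    }

    public String get(String tableDir) {
        return byTableDir.get(tableDir);
    }
}
```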
public boolean createTableDescriptorForTableDirectory(org.apache.hadoop.fs.Path tableDir, TableDescriptor htd, boolean forceCreation) throws IOException

Parameters:
tableDir - table directory under which we should write the file
htd - description of the table to write
forceCreation - if true, then even if a previous table descriptor is present it will be overwritten
Throws: IOException - if a filesystem error occurs

Copyright © 2007–2019 Cloudera. All rights reserved.