| Interface | Description |
|---|---|
| BulkLoadObserver | Coprocessors implement this interface to observe and mediate bulk load operations. |
| CoprocessorHost.ObserverGetter<C,O> | Implementations define a function to get an observer of type O from a coprocessor of type C. |
| CoprocessorService | Deprecated since 2.0. |
| EndpointObserver | Coprocessors implement this interface to observe and mediate endpoint invocations on a region. |
| HasMasterServices | Deprecated since 2.0.0, to be removed in 3.0.0. |
| HasRegionServerServices | Deprecated since 2.0.0, to be removed in 3.0.0. |
| MasterCoprocessor | |
| MasterCoprocessorEnvironment | |
| MasterObserver | Defines coprocessor hooks for interacting with operations on the HMaster process. |
| ObserverContext<E extends CoprocessorEnvironment> | Carries the execution state for a given invocation of an Observer coprocessor (RegionObserver, MasterObserver, or WALObserver) method. |
| RegionCoprocessor | |
| RegionCoprocessorEnvironment | |
| RegionObserver | Coprocessors implement this interface to observe and mediate client actions on the region. |
| RegionServerCoprocessor | |
| RegionServerCoprocessorEnvironment | |
| RegionServerObserver | Defines coprocessor hooks for interacting with operations on the HRegionServer process. |
| SingletonCoprocessorService | Deprecated since 2.0. |
| WALCoprocessor | WALCoprocessors do not support loading services using Coprocessor.getServices(). |
| WALCoprocessorEnvironment | |
| WALObserver | Provides a way for coprocessors to observe, rewrite, or skip WALEdits as they are being written to the WAL. |
| Class | Description |
|---|---|
| AggregateImplementation<T,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,R extends com.google.protobuf.Message> | A concrete AggregateProtocol implementation. |
| BaseEnvironment<C extends Coprocessor> | Encapsulation of the environment of each coprocessor. |
| BaseRowProcessorEndpoint<S extends com.google.protobuf.Message,T extends com.google.protobuf.Message> | Demonstrates how to implement atomic read-modify-writes using Region.processRowsWithLocks(org.apache.hadoop.hbase.regionserver.RowProcessor<?, ?>) and Coprocessor endpoints. |
| ColumnInterpreter<T,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,R extends com.google.protobuf.Message> | Defines how the value of a specific column is interpreted and provides utility methods such as compare, add, and multiply. |
| CoprocessorHost<C extends Coprocessor,E extends CoprocessorEnvironment<C>> | Provides the common setup framework and runtime services for coprocessor invocation from HBase services. |
| CoprocessorServiceBackwardCompatiblity | Deprecated |
| CoprocessorServiceBackwardCompatiblity.MasterCoprocessorService | |
| CoprocessorServiceBackwardCompatiblity.RegionCoprocessorService | |
| CoprocessorServiceBackwardCompatiblity.RegionServerCoprocessorService | |
| Export | Export an HBase table. |
| Export.Response | |
| MetaTableMetrics | A coprocessor that collects metrics from the meta table. |
| MetricsCoprocessor | Utility class for tracking metrics for various types of coprocessors. |
| MultiRowMutationEndpoint | Demonstrates how to implement atomic multi-row transactions using HRegion.mutateRowsWithLocks(Collection, Collection, long, long) and Coprocessor endpoints. |
| ObserverContextImpl<E extends CoprocessorEnvironment> | The only implementation of ObserverContext, which serves as the interface for third-party Coprocessor developers. |
| Enum | Description |
|---|---|
| RegionObserver.MutationType | Mutation type for the postMutationBeforeWAL hook. |
| Exception | Description |
|---|---|
| CoprocessorException | Thrown if a coprocessor encounters any exception. |
| Annotation Type | Description |
|---|---|
| CoreCoprocessor | Marker annotation that denotes Coprocessors that are core to HBase. |
Multiple types of coprocessors are provided to give sufficient flexibility for potential use cases. Right now there are:

  * lifecycle hooks on the Coprocessor interface itself, invoked as a region moves through its states;
  * RegionObserver hooks for observing and mediating client actions on a region;
  * coprocessor Endpoints for exposing custom RPC methods to clients.

A coprocessor is required to implement the Coprocessor interface so that the coprocessor framework can manage it internally.
Another design goal of this interface is to provide simple features for making coprocessors useful, while exposing no more internal state or control actions of the region server than necessary and not exposing them directly.
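As a minimal sketch (assuming the HBase 2.x coprocessor API; the class name is hypothetical), a region-level coprocessor only needs to implement RegionCoprocessor, and the framework drives it through start() and stop():

```java
package org.apache.hadoop.hbase.coprocessor.example;

import java.io.IOException;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;

// Hypothetical class name; the coprocessor framework calls start()/stop()
// as it moves the coprocessor through its managed lifecycle.
public class MinimalRegionCoprocessor implements RegionCoprocessor {

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    // Acquire any resources the coprocessor needs here.
  }

  @Override
  public void stop(CoprocessorEnvironment env) throws IOException {
    // Release resources here; called when the coprocessor is unloaded.
  }
}
```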
Over the lifecycle of a region, the methods of this interface are invoked when the corresponding events happen. The master transitions regions through the following states:
unassigned -> pendingOpen -> open -> pendingClose -> closed.
Coprocessors have the opportunity to intercept and handle events in the pendingOpen, open, and pendingClose states. In the pendingOpen state, for example, the region server is opening a region to bring it online; coprocessors can piggyback on this process or fail it.
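For illustration, here is a hedged sketch (again assuming the HBase 2.x API; the class name is hypothetical) of an observer that intercepts a region's open and close events:

```java
package org.apache.hadoop.hbase.coprocessor.example;

import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical class name; hook signatures follow RegionObserver in HBase 2.x.
public class RegionLifecycleLogger implements RegionCoprocessor, RegionObserver {
  private static final Logger LOG = LoggerFactory.getLogger(RegionLifecycleLogger.class);

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this); // expose this instance as the observer
  }

  @Override
  public void preOpen(ObserverContext<RegionCoprocessorEnvironment> c) throws IOException {
    // Runs while the region is in pendingOpen; throwing an exception fails the open.
    LOG.info("Opening region {}", c.getEnvironment().getRegionInfo().getRegionNameAsString());
  }

  @Override
  public void postClose(ObserverContext<RegionCoprocessorEnvironment> c, boolean abortRequested) {
    // Runs after the region has been taken offline.
    LOG.info("Closed region {}, abortRequested={}",
        c.getEnvironment().getRegionInfo().getRegionNameAsString(), abortRequested);
  }
}
```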
If a coprocessor also implements the RegionObserver interface, it can observe and mediate client actions on the region, such as Get, Put, Delete, and Scan requests.
Here is an example of what a simple RegionObserver might look like. It shows how to implement access control for HBase: the coprocessor checks user information for a given client request, e.g., Get/Put/Delete/Scan, by injecting code at certain RegionObserver preXXX hooks. If the user is not allowed to access the resource, a CoprocessorException is thrown, and the client request is denied when it receives this exception.
```java
package org.apache.hadoop.hbase.coprocessor;

import org.apache.hadoop.hbase.client.Get;

// Sample access-control coprocessor. It utilizes RegionObserver and
// intercepts preXXX() methods to check user privileges for the given table
// and column family.
public class AccessControlCoprocessor extends BaseRegionObserverCoprocessor {
  // @Override
  public Get preGet(CoprocessorEnvironment e, Get get)
      throws CoprocessorException {
    // Check permissions...
    if (access_not_allowed) {
      throw new AccessDeniedException("User is not allowed to access.");
    }
    return get;
  }

  // Override prePut(), preDelete(), etc. in the same way.
}
```
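The snippet above is written against the pre-2.0 observer API. A rough equivalent against the HBase 2.x RegionObserver interface might look like the following sketch; the class name and the permission check are hypothetical placeholders:

```java
package org.apache.hadoop.hbase.coprocessor.example;

import java.io.IOException;
import java.util.List;
import java.util.Optional;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.security.AccessDeniedException;

// Hypothetical class; hook signatures follow RegionObserver in HBase 2.x.
public class AccessControlObserver implements RegionCoprocessor, RegionObserver {

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void preGetOp(ObserverContext<RegionCoprocessorEnvironment> c,
      Get get, List<Cell> result) throws IOException {
    if (!isAllowed(c, get)) { // hypothetical permission check
      throw new AccessDeniedException("User is not allowed to access.");
    }
    // Otherwise fall through: the regular Get proceeds unchanged.
  }

  private boolean isAllowed(ObserverContext<RegionCoprocessorEnvironment> c, Get get) {
    // Placeholder; a real implementation would consult the configured ACLs.
    return true;
  }
}
```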
Coprocessor and RegionObserver provide hooks for injecting user code that runs at each region. The user code is triggered by existing HTable and HBaseAdmin operations at certain hook points.
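To make the trigger point concrete, here is a sketch of an ordinary client-side read using the HBase 2.x client API (the table and row names are hypothetical). Nothing coprocessor-specific appears in the client code, yet the Get fires the region server's pre/post hooks for any loaded observers:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ClientGetExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // A plain client Get; if an observer such as the access-control
    // coprocessor above is loaded on the region, its preGet hook runs on the
    // region server before this request is served.
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("demo_table"))) {
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println("Cells returned: " + result.size());
    }
  }
}
```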
Coprocessor Endpoints allow you to define your own dynamic RPC protocol for communication between clients and region servers; that is, you can create a new method with custom request parameters and return types. RPC methods exposed by coprocessor Endpoints can be triggered by calling client-side dynamic RPC functions such as HTable.coprocessorService(...).
To implement an Endpoint, you need to:

  * define a protocol buffer Service and supporting message types for the RPC methods;
  * generate the Service and message code with the protoc compiler;
  * implement the generated Service interface in your coprocessor class and return it from Coprocessor.getServices() (see the sketch below).
For a more detailed discussion of how to implement a coprocessor Endpoint, along with some sample
code, see the org.apache.hadoop.hbase.client.coprocessor
package documentation.
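A hedged server-side sketch is shown below. It assumes a hypothetical row_count.proto compiled with protoc into RowCountProtos (neither ships with HBase); a client would then reach getRowCount through Table.coprocessorService(...) as described above.

```java
package org.apache.hadoop.hbase.coprocessor.example;

import java.io.IOException;
import java.util.Collections;
import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcController;
import com.google.protobuf.Service;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

// Sketch of an Endpoint. RowCountProtos.RowCountService, CountRequest and
// CountResponse are assumed to be generated by protoc from a hypothetical
// row_count.proto; they are not part of HBase itself.
public class RowCountEndpoint extends RowCountProtos.RowCountService
    implements RegionCoprocessor {

  private RegionCoprocessorEnvironment env;

  @Override
  public Iterable<Service> getServices() {
    // Advertise the protobuf Service so clients can invoke it via coprocessorService().
    return Collections.singleton(this);
  }

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    if (env instanceof RegionCoprocessorEnvironment) {
      this.env = (RegionCoprocessorEnvironment) env;
    } else {
      throw new CoprocessorException("Must be loaded on a table region.");
    }
  }

  @Override
  public void getRowCount(RpcController controller,
      RowCountProtos.CountRequest request,
      RpcCallback<RowCountProtos.CountResponse> done) {
    long count = 0;
    // A real implementation would scan this.env.getRegion() and count rows;
    // omitted here to keep the sketch short.
    done.run(RowCountProtos.CountResponse.newBuilder().setCount(count).build());
  }
}
```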
A customized coprocessor can be loaded in two ways: from the configuration, or from the TableDescriptor of a newly created table. (Currently there is no on-demand coprocessor loading mechanism for regions that are already open.)
To load from configuration: whenever a region is opened, it reads coprocessor class names from hbase.coprocessor.region.classes in the Configuration. The coprocessor framework then automatically loads the configured classes as default coprocessors. The classes must already be on the classpath, for example:
```xml
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.AccessControlCoprocessor,
         org.apache.hadoop.hbase.coprocessor.ColumnAggregationProtocol</value>
  <description>A comma-separated list of Coprocessors that are loaded by
    default. For any overridden coprocessor method from RegionObserver or
    Coprocessor, these classes' implementations will be called in order. After
    implementing your own Coprocessor, just put it in HBase's classpath and
    add the fully qualified class name here.
  </description>
</property>
```
The first defined coprocessor is assigned Coprocessor.PRIORITY_SYSTEM as its priority, and each following coprocessor's priority is incremented by one. Coprocessors are executed in order according to the natural ordering of this int.
To load from a table attribute: coprocessors can also be configured per table through attributes on the TableDescriptor. The attribute key must start with "COPROCESSOR", and the value takes the form <path>:<class>:<priority>, for example:

```
'COPROCESSOR$1' => 'hdfs://localhost:8020/hbase/coprocessors/test.jar:Test:1000'
'COPROCESSOR$2' => '/hbase/coprocessors/test2.jar:AnotherTest:1001'
```
<path> must point to a jar and can be on any filesystem supported by the Hadoop FileSystem object.
<class> is the coprocessor implementation class. A jar can contain more than one coprocessor implementation, but only one can be specified at a time in each table attribute.
<priority> is an integer. Coprocessors are executed in order according to the natural ordering of the int. Coprocessors can optionally abort actions. So typically one would want to put authoritative CPs (security policy implementations, perhaps) ahead of observers.
```java
Path path = new Path(fs.getUri() + Path.SEPARATOR + "TestClassloading.jar");
// Create a table whose descriptor references the jar.
TableDescriptor htd = TableDescriptorBuilder
    .newBuilder(TableName.valueOf(getClass().getName()))
    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("test"))
    .setValue("COPROCESSOR$1",
        path.toString() + ":" + classFullName + ":" + Coprocessor.PRIORITY_USER)
    .build();
Admin admin = connection.getAdmin();
admin.createTable(htd);
```
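As an alternative sketch, assuming HBase 2.2 or later (where CoprocessorDescriptorBuilder is available; the table name below is a placeholder, and path, classFullName, and admin are reused from the example above), the same attribute can be set through the builder API instead of hand-formatting the value:

```java
// Assumes HBase 2.2+; CoprocessorDescriptorBuilder carries the same
// path/class/priority information as the raw "COPROCESSOR$1" attribute.
TableDescriptor htd = TableDescriptorBuilder
    .newBuilder(TableName.valueOf("demo_table"))
    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("test"))
    .setCoprocessor(CoprocessorDescriptorBuilder
        .newBuilder(classFullName)
        .setJarPath(path.toString())
        .setPriority(Coprocessor.PRIORITY_USER)
        .build())
    .build();
admin.createTable(htd);
```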