@InterfaceAudience.Public @InterfaceStability.Stable public interface StoreFuncInterface
Modifier and Type | Method and Description |
---|---|
void | checkSchema(ResourceSchema s) Set the schema for data to be stored. |
void | cleanupOnFailure(java.lang.String location, org.apache.hadoop.mapreduce.Job job) This method will be called by Pig if the job which contains this store fails. |
void | cleanupOnSuccess(java.lang.String location, org.apache.hadoop.mapreduce.Job job) This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required. |
org.apache.hadoop.mapreduce.OutputFormat | getOutputFormat() Return the OutputFormat associated with StoreFuncInterface. |
void | prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer) Initialize StoreFuncInterface to write data. |
void | putNext(Tuple t) Write a tuple to the data store. |
java.lang.String | relToAbsPathForStoreLocation(java.lang.String location, org.apache.hadoop.fs.Path curDir) This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative. |
void | setStoreFuncUDFContextSignature(java.lang.String signature) This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface which it can use to store information in the UDFContext which it needs to store between various method invocations in the front end and back end. |
void | setStoreLocation(java.lang.String location, org.apache.hadoop.mapreduce.Job job) Communicate to the storer the location where the data needs to be stored. |
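Taken together, the methods above describe the full life of a storer: resolve the output location in the front end, describe the OutputFormat, receive a RecordWriter, and write one Tuple at a time. The sketch below is a minimal, hypothetical implementation that writes tab-delimited text; the class name SimpleTextStorer, the use of TextOutputFormat, and the delimiter are assumptions for illustration, not part of the Pig API. Later method sections include further sketches for schema checking, the UDFContext signature, and cleanup.

```java
// Hypothetical implementation sketch, not part of Pig: a storer that writes
// each tuple as one line of tab-delimited text.
package com.example.pig;

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.pig.LoadFunc;
import org.apache.pig.ResourceSchema;
import org.apache.pig.StoreFuncInterface;
import org.apache.pig.data.Tuple;

public class SimpleTextStorer implements StoreFuncInterface {

    private RecordWriter<NullWritable, Text> writer;
    private String signature;

    @Override
    public String relToAbsPathForStoreLocation(String location, Path curDir)
            throws IOException {
        // Reuse the default resolution for HDFS and the local file system.
        return LoadFunc.getAbsolutePath(location, curDir);
    }

    @Override
    public OutputFormat getOutputFormat() throws IOException {
        return new TextOutputFormat<NullWritable, Text>();
    }

    @Override
    public void setStoreLocation(String location, Job job) throws IOException {
        // Called multiple times in the front end and back end; keep it idempotent.
        FileOutputFormat.setOutputPath(job, new Path(location));
    }

    @Override
    public void checkSchema(ResourceSchema s) throws IOException {
        // Accept any schema in this sketch; see the checkSchema example below.
    }

    @SuppressWarnings("unchecked")
    @Override
    public void prepareToWrite(RecordWriter writer) throws IOException {
        this.writer = (RecordWriter<NullWritable, Text>) writer;
    }

    @Override
    public void putNext(Tuple t) throws IOException {
        try {
            writer.write(NullWritable.get(), new Text(t.toDelimitedString("\t")));
        } catch (InterruptedException e) {
            throw new IOException(e);
        }
    }

    @Override
    public void setStoreFuncUDFContextSignature(String signature) {
        this.signature = signature;
    }

    @Override
    public void cleanupOnFailure(String location, Job job) throws IOException {
        // Remove partial output; see the cleanupOnFailure example below.
    }

    @Override
    public void cleanupOnSuccess(String location, Job job) throws IOException {
        // Nothing to clean up for plain file output.
    }
}
```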
java.lang.String relToAbsPathForStoreLocation(java.lang.String location, org.apache.hadoop.fs.Path curDir) throws java.io.IOException

This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative. LoadFunc.getAbsolutePath(java.lang.String, org.apache.hadoop.fs.Path) provides a default implementation for HDFS and the Hadoop local file system, and it can be used to implement this method.

Parameters:
location - location as provided in the "store" statement of the script
curDir - the current working directory based on any "cd" statements in the script before the "store" statement. If there are no "cd" statements in the script, this would be the home directory - /user/<username>

Throws:
java.io.IOException - if the conversion is not possible
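For storers whose location is not a filesystem path (for example, a table name), a common approach is to return the location unchanged, since there is nothing to resolve against the current directory. The fragment below is a hypothetical variant of the SimpleTextStorer sketch above; filesystem-backed storers can simply delegate to LoadFunc.getAbsolutePath as shown earlier.

```java
// Hypothetical variant for a storer whose location is a table name rather
// than a filesystem path: the location is already "absolute", so return it
// unchanged instead of resolving it against curDir.
@Override
public String relToAbsPathForStoreLocation(String location, Path curDir)
        throws IOException {
    return location;
}
```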
org.apache.hadoop.mapreduce.OutputFormat getOutputFormat() throws java.io.IOException

Return the OutputFormat associated with StoreFuncInterface.

Returns:
the OutputFormat associated with StoreFuncInterface

Throws:
java.io.IOException - if an exception occurs while constructing the OutputFormat
void setStoreLocation(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException

Communicate to the storer the location where the data needs to be stored. The location string passed to the StoreFuncInterface here is the return value of relToAbsPathForStoreLocation(String, Path). This method will be called in the front end and back end multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls. checkSchema(ResourceSchema) will be called before any call to setStoreLocation(String, Job).

Parameters:
location - Location returned by relToAbsPathForStoreLocation(String, Path)
job - The Job object

Throws:
java.io.IOException - if the location is not valid.
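Because this method runs several times in both the front end and the back end, an implementation should only derive configuration from its arguments and overwrite, rather than accumulate, values. The hypothetical fragment below continues the SimpleTextStorer sketch above; the property name is an assumption for illustration.

```java
// Continuing the SimpleTextStorer sketch: setStoreLocation must be idempotent
// because Pig calls it repeatedly in the front end and back end.
@Override
public void setStoreLocation(String location, Job job) throws IOException {
    // Same arguments always yield the same configuration - safe to repeat.
    FileOutputFormat.setOutputPath(job, new Path(location));
    // Hypothetical extra setting; set() overwrites, so repeated calls are harmless.
    job.getConfiguration().set("simple.text.storer.location", location);
}
```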
void checkSchema(ResourceSchema s) throws java.io.IOException

Set the schema for data to be stored.

Parameters:
s - to be checked

Throws:
java.io.IOException - if this schema is not acceptable. It should include a detailed error message indicating what is wrong with the schema.
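Since checkSchema runs in the front end before setStoreLocation, it is the natural place to reject schemas the storer cannot serialize. The hypothetical fragment below continues the SimpleTextStorer sketch and assumes imports of org.apache.pig.ResourceSchema.ResourceFieldSchema and org.apache.pig.data.DataType.

```java
// Continuing the SimpleTextStorer sketch: refuse complex fields that this
// sketch chooses not to serialize, with a detailed error message.
@Override
public void checkSchema(ResourceSchema s) throws IOException {
    for (ResourceSchema.ResourceFieldSchema field : s.getFields()) {
        byte type = field.getType();
        if (type == DataType.BAG || type == DataType.MAP) {
            throw new IOException("SimpleTextStorer cannot store field '"
                    + field.getName() + "' of complex type "
                    + DataType.findTypeName(type));
        }
    }
}
```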
void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer) throws java.io.IOException

Initialize StoreFuncInterface to write data.

Parameters:
writer - RecordWriter to use.

Throws:
java.io.IOException - if an exception occurs during initialization
void putNext(Tuple t) throws java.io.IOException

Write a tuple to the data store.

Parameters:
t - the tuple to store.

Throws:
java.io.IOException - if an exception occurs during the write
void setStoreFuncUDFContextSignature(java.lang.String signature)

This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface which it can use to store information in the UDFContext which it needs to store between various method invocations in the front end and back end. This is necessary because in a Pig Latin script with multiple stores, the different instances of store functions need to be able to find their (and only their) data in the UDFContext object.

Parameters:
signature - a unique signature to identify this StoreFuncInterface
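A storer typically records the signature and then uses it to key its own slot in the UDFContext, so that several STORE statements in one script do not read each other's data. The fragment below continues the SimpleTextStorer sketch; the property name and the helper method udfProperties() are assumptions for illustration, and it assumes imports of org.apache.pig.impl.util.UDFContext and java.util.Properties.

```java
// Continuing the SimpleTextStorer sketch: remember the signature, and use it
// to scope this storer's entries in the UDFContext.
@Override
public void setStoreFuncUDFContextSignature(String signature) {
    this.signature = signature;
}

// Hypothetical helper: properties private to this store instance.
private Properties udfProperties() {
    return UDFContext.getUDFContext()
            .getUDFProperties(this.getClass(), new String[] { signature });
}

// Front end (e.g. in checkSchema): udfProperties().setProperty("schema", s.toString());
// Back end (e.g. in prepareToWrite): String schema = udfProperties().getProperty("schema");
```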
void cleanupOnFailure(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException

This method will be called by Pig if the job which contains this store fails.

Parameters:
location - Location returned by relToAbsPathForStoreLocation(String, Path)
job - The Job object - this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set/query any runtime job information.

Throws:
java.io.IOException
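A filesystem-backed storer usually deletes whatever partial output the failed job left behind; StoreFunc.cleanupOnFailureImpl(String, Job) in org.apache.pig.StoreFunc offers a reusable implementation of this pattern. The fragment below continues the SimpleTextStorer sketch and assumes an import of org.apache.hadoop.fs.FileSystem.

```java
// Continuing the SimpleTextStorer sketch: delete partial output so a re-run
// of the script starts from a clean location.
@Override
public void cleanupOnFailure(String location, Job job) throws IOException {
    Path path = new Path(location);
    FileSystem fs = path.getFileSystem(job.getConfiguration());
    if (fs.exists(path)) {
        fs.delete(path, true);  // recursive delete
    }
}
```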
void cleanupOnSuccess(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException

This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required.

Parameters:
location - Location returned by relToAbsPathForStoreLocation(String, Path)
job - The Job object - this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set/query any runtime job information.

Throws:
java.io.IOException