Packages that use FileSystem
Package | Description |
---|---|
org.apache.hadoop.fs | Implementations of AbstractFileSystem for hdfs over rpc and hdfs over web. |
org.apache.hadoop.fs.azure | A distributed implementation of FileSystem for reading and writing files on Azure Block Storage. |
org.apache.hadoop.fs.ftp | |
org.apache.hadoop.fs.s3 | A distributed, block-based implementation of FileSystem that uses Amazon S3 as a backing store. |
org.apache.hadoop.fs.s3native | A distributed implementation of FileSystem for reading and writing files on Amazon S3. |
org.apache.hadoop.fs.viewfs | |
org.apache.hadoop.io | Generic i/o code for use when reading and writing data to the network, to databases, and to files. |
org.apache.hadoop.mapred | |
org.apache.hadoop.mapred.lib | |
org.apache.hadoop.mapred.lib.db | |
org.apache.hadoop.mapreduce | |
org.apache.hadoop.mapreduce.lib.input | |

Subclasses of FileSystem in org.apache.hadoop.fs
Modifier and Type | Class and Description |
---|---|
class | ChecksumFileSystem: Abstract checksummed FileSystem. |
class | FilterFileSystem: A FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality. |
class | LocalFileSystem: Implements the FileSystem API for the checksummed local filesystem. |
class | RawLocalFileSystem: Implements the FileSystem API for the raw local filesystem. |
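
As a quick orientation to how the local filesystem classes above relate, here is a minimal sketch: LocalFileSystem is the checksummed view, and its getRaw() method exposes the underlying RawLocalFileSystem. The path used is a hypothetical placeholder.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class LocalFsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // LocalFileSystem is the checksummed local filesystem (it maintains .crc
    // sidecar files); getRaw() returns the underlying RawLocalFileSystem,
    // which reads and writes without checksum verification.
    LocalFileSystem localFs = FileSystem.getLocal(conf);
    FileSystem rawFs = localFs.getRaw();

    Path p = new Path("/tmp/example.txt");   // hypothetical path
    System.out.println("checksummed exists: " + localFs.exists(p));
    System.out.println("raw exists:         " + rawFs.exists(p));
  }
}
```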

Fields in org.apache.hadoop.fs declared as FileSystem
Modifier and Type | Field and Description |
---|---|
protected FileSystem | TrashPolicy.fs |
protected FileSystem | FilterFileSystem.fs |

Methods in org.apache.hadoop.fs that return FileSystem
Modifier and Type | Method and Description |
---|---|
static FileSystem | FileSystem.get(Configuration conf): Returns the configured filesystem implementation. |
static FileSystem | FileSystem.get(URI uri, Configuration conf): Returns the FileSystem for this URI's scheme and authority. |
static FileSystem | FileSystem.get(URI uri, Configuration conf, String user): Get a filesystem instance based on the URI, the passed configuration and the user. |
FileSystem[] | FilterFileSystem.getChildFileSystems() |
FileSystem[] | FileSystem.getChildFileSystems(): Get all the immediate child FileSystems embedded in this FileSystem. |
FileSystem | Path.getFileSystem(Configuration conf): Return the FileSystem that owns this Path. |
protected static FileSystem | FileSystem.getFSofPath(Path absOrFqPath, Configuration conf) |
static FileSystem | FileSystem.getNamed(String name, Configuration conf): Deprecated. Call get(URI, Configuration) instead. |
FileSystem | LocalFileSystem.getRaw() |
FileSystem | ChecksumFileSystem.getRawFileSystem(): Get the raw file system. |
FileSystem | FilterFileSystem.getRawFileSystem(): Get the raw file system. |
static FileSystem | FileSystem.newInstance(Configuration conf): Returns a unique configured filesystem implementation. |
static FileSystem | FileSystem.newInstance(URI uri, Configuration conf): Returns the FileSystem for this URI's scheme and authority. |
static FileSystem | FileSystem.newInstance(URI uri, Configuration conf, String user): Returns the FileSystem for this URI's scheme and authority and the passed user. |
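
A short, illustrative sketch of how the factory methods above are typically combined. The namenode URI and paths are placeholders, not values taken from this page.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GetFileSystemExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // The filesystem configured as the default (fs.defaultFS).
    FileSystem defaultFs = FileSystem.get(conf);

    // The filesystem for an explicit URI scheme and authority
    // (the namenode address is hypothetical).
    FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);

    // Equivalent lookup driven by a Path.
    Path p = new Path("hdfs://namenode:8020/user/example/data.txt");
    FileSystem owner = p.getFileSystem(conf);

    // get() returns cached instances; newInstance() returns a unique instance
    // that the caller is responsible for closing.
    try (FileSystem fresh = FileSystem.newInstance(URI.create("hdfs://namenode:8020/"), conf)) {
      System.out.println(fresh.getUri());
    }
    System.out.println(defaultFs.getUri() + " " + hdfs.getUri() + " " + owner.getUri());
  }
}
```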

Methods in org.apache.hadoop.fs that return types with arguments of type FileSystem
Modifier and Type | Method and Description |
---|---|
static Class<? extends FileSystem> | FileSystem.getFileSystemClass(String scheme, Configuration conf) |

Methods in org.apache.hadoop.fs with parameters of type FileSystem
Modifier and Type | Method and Description |
---|---|
static boolean | FileUtil.compareFs(FileSystem srcFs, FileSystem destFs) |
static boolean | FileUtil.copy(File src, FileSystem dstFS, Path dst, boolean deleteSource, Configuration conf): Copy local files to a FileSystem. |
static boolean | FileUtil.copy(FileSystem srcFS, FileStatus srcStatus, FileSystem dstFS, Path dst, boolean deleteSource, boolean overwrite, Configuration conf): Copy files between FileSystems. |
static boolean | FileUtil.copy(FileSystem srcFS, Path[] srcs, FileSystem dstFS, Path dst, boolean deleteSource, boolean overwrite, Configuration conf) |
static boolean | FileUtil.copy(FileSystem srcFS, Path src, File dst, boolean deleteSource, Configuration conf): Copy FileSystem files to local files. |
static boolean | FileUtil.copy(FileSystem srcFS, Path src, FileSystem dstFS, Path dst, boolean deleteSource, boolean overwrite, Configuration conf): Copy files between FileSystems. |
static boolean | FileUtil.copy(FileSystem srcFS, Path src, FileSystem dstFS, Path dst, boolean deleteSource, Configuration conf): Copy files between FileSystems. |
static boolean | FileUtil.copyMerge(FileSystem srcFS, Path srcDir, FileSystem dstFS, Path dstFile, boolean deleteSource, Configuration conf, String addString): Copy all files in a directory to one output file (merge). |
static FSDataOutputStream | FileSystem.create(FileSystem fs, Path file, FsPermission permission): Create a file with the provided permission. The permission of the file is set to the provided permission as in setPermission, not permission&~umask. It is implemented using two RPCs. |
static void | FileUtil.fullyDelete(FileSystem fs, Path dir): Deprecated. |
static TrashPolicy | TrashPolicy.getInstance(Configuration conf, FileSystem fs, Path home): Get an instance of the configured TrashPolicy based on the value of the configuration parameter fs.trash.classname. |
abstract void | TrashPolicy.initialize(Configuration conf, FileSystem fs, Path home): Used to set up the trash policy. |
Path | Path.makeQualified(FileSystem fs): Deprecated. |
static boolean | FileSystem.mkdirs(FileSystem fs, Path dir, FsPermission permission): Create a directory with the provided permission. The permission of the directory is set to the provided permission as in setPermission, not permission&~umask. |
static boolean | Trash.moveToAppropriateTrash(FileSystem fs, Path p, Configuration conf): In the case of symlinks or mount points, the path p must be moved to the trash bin of the actual volume it resides on. |
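
To illustrate the FileUtil.copy family above, here is a minimal sketch of a copy between two filesystems. The source and destination URIs are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CopyBetweenFileSystemsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    Path src = new Path("hdfs://nn1:8020/data/input.txt");    // hypothetical source
    Path dst = new Path("hdfs://nn2:8020/backup/input.txt");  // hypothetical destination

    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);

    // Copy between FileSystems: deleteSource=false keeps the original,
    // overwrite=true replaces an existing destination file.
    boolean ok = FileUtil.copy(srcFs, src, dstFs, dst,
        /* deleteSource */ false, /* overwrite */ true, conf);
    System.out.println("copy succeeded: " + ok);
  }
}
```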

Method parameters in org.apache.hadoop.fs with type arguments of type FileSystem
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.fs.FileSystem.Statistics | FileSystem.getStatistics(String scheme, Class<? extends FileSystem> cls): Get the statistics for a particular file system. |

Constructors in org.apache.hadoop.fs with parameters of type FileSystem
Constructor and Description |
---|
ChecksumFileSystem(FileSystem fs) |
FilterFileSystem(FileSystem fs) |
LocalFileSystem(FileSystem rawLocalFileSystem) |
Trash(FileSystem fs, Configuration conf): Construct a trash can accessor for the FileSystem provided. |
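
The Trash(FileSystem, Configuration) constructor above is typically used together with moveToTrash. A hedged sketch follows; the path is hypothetical, and trash is only effective when fs.trash.interval is enabled on the cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path p = new Path("/user/example/old-data");   // hypothetical path
    FileSystem fs = p.getFileSystem(conf);

    // Construct a trash accessor for this FileSystem and move the path into it.
    Trash trash = new Trash(fs, conf);
    boolean moved = trash.moveToTrash(p);

    // For paths behind symlinks or viewfs mount points, the static helper
    // resolves the actual volume before moving the path to that volume's trash:
    // boolean moved2 = Trash.moveToAppropriateTrash(fs, p, conf);
    System.out.println("moved to trash: " + moved);
  }
}
```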

Subclasses of FileSystem in org.apache.hadoop.fs.azure
Modifier and Type | Class and Description |
---|---|
class | NativeAzureFileSystem: A FileSystem for reading and writing files stored on Windows Azure. |

Methods in org.apache.hadoop.fs.azure with parameters of type FileSystem
Modifier and Type | Method and Description |
---|---|
void | WasbFsck.setMockFileSystemForTesting(FileSystem fileSystem): For testing purposes, set the file system to use here instead of relying on getting it from the FileSystem class based on the URI. |

Subclasses of FileSystem in org.apache.hadoop.fs.ftp
Modifier and Type | Class and Description |
---|---|
class | FTPFileSystem: A FileSystem backed by an FTP client provided by Apache Commons Net. |

Subclasses of FileSystem in org.apache.hadoop.fs.s3
Modifier and Type | Class and Description |
---|---|
class | S3FileSystem: A block-based FileSystem backed by Amazon S3. |

Subclasses of FileSystem in org.apache.hadoop.fs.s3native
Modifier and Type | Class and Description |
---|---|
class | NativeS3FileSystem: A FileSystem for reading and writing files stored on Amazon S3. |

Subclasses of FileSystem in org.apache.hadoop.fs.viewfs
Modifier and Type | Class and Description |
---|---|
class | ViewFileSystem: ViewFileSystem (extends the FileSystem interface) implements a client-side mount table. |

Methods in org.apache.hadoop.fs.viewfs that return FileSystem
Modifier and Type | Method and Description |
---|---|
FileSystem[] | ViewFileSystem.getChildFileSystems() |
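
ViewFileSystem resolves paths through a client-side mount table defined in the configuration. Below is a rough sketch, assuming the standard fs.viewfs.mounttable.<name>.link.<mountpoint> configuration keys; the two backing clusters are purely hypothetical.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewFsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Client-side mount table: link /user and /data to two (hypothetical) clusters.
    conf.set("fs.viewfs.mounttable.default.link./user", "hdfs://nn1:8020/user");
    conf.set("fs.viewfs.mounttable.default.link./data", "hdfs://nn2:8020/data");

    // viewfs:// paths are resolved through the mount table defined above.
    FileSystem viewFs = FileSystem.get(URI.create("viewfs://default/"), conf);
    System.out.println(viewFs.exists(new Path("/user")));

    // Each mount point is backed by one of these child filesystems.
    for (FileSystem child : viewFs.getChildFileSystems()) {
      System.out.println("child: " + child.getUri());
    }
  }
}
```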

Methods in org.apache.hadoop.io with parameters of type FileSystem
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass): Deprecated. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, int bufferSize, short replication, long blockSize, boolean createParent, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec, org.apache.hadoop.io.SequenceFile.Metadata metadata): Deprecated. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, int bufferSize, short replication, long blockSize, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec, Progressable progress, org.apache.hadoop.io.SequenceFile.Metadata metadata): Deprecated. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType): Deprecated. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec): Deprecated. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec, Progressable progress): Deprecated. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec, Progressable progress, org.apache.hadoop.io.SequenceFile.Metadata metadata): Deprecated. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, Progressable progress): Deprecated. |
static void | BloomMapFile.delete(FileSystem fs, String name) |
static void | MapFile.delete(FileSystem fs, String name): Deletes the named map file. |
static long | MapFile.fix(FileSystem fs, Path dir, Class<? extends Writable> keyClass, Class<? extends Writable> valueClass, boolean dryrun, Configuration conf): This method attempts to fix a corrupt MapFile by re-creating its index. |
static void | MapFile.rename(FileSystem fs, String oldName, String newName): Renames an existing map directory. |
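
Since the FileSystem-taking SequenceFile.createWriter overloads above are all deprecated, here is a sketch of the option-based replacement, which infers the FileSystem from the Path. The output path is hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/example.seq");   // hypothetical output path

    // Option-based factory: key and value classes are declared up front,
    // and the writer is closed via try-with-resources.
    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(file),
        SequenceFile.Writer.keyClass(IntWritable.class),
        SequenceFile.Writer.valueClass(Text.class))) {
      for (int i = 0; i < 10; i++) {
        writer.append(new IntWritable(i), new Text("record-" + i));
      }
    }
  }
}
```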

Methods in org.apache.hadoop.mapred that return FileSystem
Modifier and Type | Method and Description |
---|---|
FileSystem | JobClient.getFs(): Get a filesystem handle. |

Methods in org.apache.hadoop.mapred with parameters of type FileSystem
Modifier and Type | Method and Description |
---|---|
protected void | FileInputFormat.addInputPathRecursively(List<FileStatus> result, FileSystem fs, Path path, PathFilter inputFilter): Add files in the input path recursively into the results. |
void | SequenceFileAsBinaryOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | OutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job): Check for validity of the output-specification for the job. |
void | FileOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
static org.apache.hadoop.io.MapFile.Reader[] | MapFileOutputFormat.getReaders(FileSystem ignored, Path dir, Configuration conf): Open the output generated by this format. |
RecordWriter<K,V> | TextOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | SequenceFileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<BytesWritable,BytesWritable> | SequenceFileAsBinaryOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | OutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress): Get the RecordWriter for the given job. |
RecordWriter<WritableComparable,Writable> | MapFileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
abstract RecordWriter<K,V> | FileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
static boolean | JobClient.isJobDirValid(Path jobDirPath, FileSystem fs): Checks if the job directory is clean and has all the required components for (re)starting the job. |
protected boolean | TextInputFormat.isSplitable(FileSystem fs, Path file) |
protected boolean | KeyValueTextInputFormat.isSplitable(FileSystem fs, Path file) |
protected boolean | FixedLengthInputFormat.isSplitable(FileSystem fs, Path file) |
protected boolean | FileInputFormat.isSplitable(FileSystem fs, Path filename): Is the given filename splittable? Usually true, but if the file is stream compressed, it will not be. |
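
The isSplitable(FileSystem, Path) hook listed above is the usual place to disable input splitting in the old mapred API. A hypothetical subclass:

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.TextInputFormat;

// Hypothetical input format: treat every input file as a single, unsplittable
// split by overriding the isSplitable(FileSystem, Path) hook.
public class WholeFileTextInputFormat extends TextInputFormat {
  @Override
  protected boolean isSplitable(FileSystem fs, Path file) {
    return false;   // one mapper per file, regardless of block size
  }
}
```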

Methods in org.apache.hadoop.mapred.lib with parameters of type FileSystem
Modifier and Type | Method and Description |
---|---|
void | NullOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | LazyOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | FilterOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
protected RecordWriter<K,V> | MultipleTextOutputFormat.getBaseRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) |
protected RecordWriter<K,V> | MultipleSequenceFileOutputFormat.getBaseRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) |
protected abstract RecordWriter<K,V> | MultipleOutputFormat.getBaseRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) |
RecordWriter<K,V> | NullOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | MultipleOutputFormat.getRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3): Create a composite record writer that can write key/value data to different output files. |
RecordWriter<K,V> | LazyOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | FilterOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
protected boolean | CombineFileInputFormat.isSplitable(FileSystem fs, Path file) |
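
MultipleOutputFormat builds on the getBaseRecordWriter methods above to route records to per-name output files. A hypothetical MultipleTextOutputFormat subclass that derives the file name from the key:

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

// Hypothetical example: route each record to an output file named after its key,
// relying on the per-name record writers created by getBaseRecordWriter.
public class KeyBasedOutputFormat extends MultipleTextOutputFormat<Text, Text> {
  @Override
  protected String generateFileNameForKeyValue(Text key, Text value, String name) {
    // e.g. key "2015-06-01" and leaf name "part-00000" -> "2015-06-01/part-00000"
    return key.toString() + "/" + name;
  }
}
```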

Methods in org.apache.hadoop.mapred.lib.db with parameters of type FileSystem
Modifier and Type | Method and Description |
---|---|
void | DBOutputFormat.checkOutputSpecs(FileSystem filesystem, JobConf job): Check for validity of the output-specification for the job. |
RecordWriter<K,V> | DBOutputFormat.getRecordWriter(FileSystem filesystem, JobConf job, String name, Progressable progress): Get the RecordWriter for the given job. |
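
DBOutputFormat is normally wired up through job configuration rather than called directly. A sketch with placeholder connection settings; the driver, URL, credentials, table and column names are all hypothetical.

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBOutputFormat;

public class DbOutputSetup {
  public static void configure(JobConf job) {
    // Register the JDBC connection details in the job configuration;
    // the record writer returned by DBOutputFormat.getRecordWriter uses them.
    DBConfiguration.configureDB(job,
        "com.mysql.jdbc.Driver",                 // hypothetical JDBC driver
        "jdbc:mysql://dbhost:3306/analytics",    // hypothetical connection URL
        "dbuser", "dbpassword");

    // Declare the table and columns that written records map to.
    DBOutputFormat.setOutput(job, "page_counts", "url", "count");
  }
}
```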

Methods in org.apache.hadoop.mapreduce that return FileSystem
Modifier and Type | Method and Description |
---|---|
FileSystem | Cluster.getFileSystem(): Get the file system where job-specific files are stored. |

Methods in org.apache.hadoop.mapreduce with parameters of type FileSystem
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapreduce.JobSubmitter | Job.getJobSubmitter(FileSystem fs, org.apache.hadoop.mapreduce.protocol.ClientProtocol submitClient): Only for mocking via unit tests. |
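
A small sketch of obtaining the job-files filesystem from a Cluster handle, as listed above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.mapreduce.Cluster;

public class ClusterFsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Cluster cluster = new Cluster(conf);
    try {
      // The filesystem where job-specific files are stored.
      FileSystem jobFs = cluster.getFileSystem();
      System.out.println("job file system: " + jobFs.getUri());
    } finally {
      cluster.close();
    }
  }
}
```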

Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type FileSystem
Modifier and Type | Method and Description |
---|---|
protected void | FileInputFormat.addInputPathRecursively(List<FileStatus> result, FileSystem fs, Path path, PathFilter inputFilter): Add files in the input path recursively into the results. |
protected BlockLocation[] | CombineFileInputFormat.getFileBlockLocations(FileSystem fs, FileStatus stat) |
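
In the new API, input paths are normally registered through FileInputFormat. A sketch is below; the job name and input path are hypothetical, and setInputDirRecursive (assumed available in the 2.x release in use) enables the recursive listing that addInputPathRecursively performs internally.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class InputSetupExample {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "input-setup");   // hypothetical job name
    job.setInputFormatClass(TextInputFormat.class);

    // Input paths are resolved against their owning FileSystem at submission time;
    // enabling recursive listing walks subdirectories of each input path.
    FileInputFormat.addInputPath(job, new Path("/data/logs"));       // hypothetical path
    FileInputFormat.setInputDirRecursive(job, true);
  }
}
```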