The HDFS Java API
Review:
Tasks
1. Master HDFS operations in Java
2. Understand how the NameNode and DataNode work
Goals
1. Operating HDFS from Java
2. A detailed look at the components of HDFS
Section 1: Operating HDFS with Java
1.1 Setting up the development environment
The development environment for this course is configured on Windows; the HDFS version used is Hadoop 2.7.1.

Download the Windows build of winutils from github.com/SweetInk/ha… and configure the environment variables (the original walks through three screenshots: step 1, step 2, step 3). Then take hadoop.dll from the archive (hadoop-common-2.7.1-bin) and copy it into C:\Windows\System32. Finally, create a Maven project in Eclipse and add the dependency:

```xml
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version>
</dependency>
```

1.2 Controlling HDFS from Java

Almost all of Hadoop's file-manipulation classes live in the org.apache.hadoop.fs package. These APIs support operations such as opening, reading, writing, and deleting files. The entry point is FileSystem, an abstract class; a concrete instance can only be obtained through the class's static get methods. get has several overloads, the most commonly used being:

```java
static FileSystem get(Configuration conf);
```

1.3 Code walkthrough

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopFSOperations {

    public static void main(String[] args) throws Exception {
        // createNewHDFSFile("/tmp/create2.c", "hello");
        // System.out.println(new String(readHDFSFile("/tmp/copy.c"), "UTF-8"));
        // mkdir("/tmp/testdir");
        // deleteDir("/tmp/testdir");
        listAll("/tmp/");
    }

    /*
     * Upload a local file to HDFS. Note that both paths are full paths,
     * e.g. /tmp/test.c
     */
    public static void uploadLocalFile2HDFS(String s, String d) throws IOException {
        Configuration config = new Configuration();
        FileSystem hdfs = FileSystem.get(config);
        Path src = new Path(s);
        Path dst = new Path(d);
        hdfs.copyFromLocalFile(src, dst);
        hdfs.close();
    }

    /*
     * Create a new file in HDFS and write content to it. Note that
     * toCreateFilePath is the full path.
     */
    public static void createNewHDFSFile(String toCreateFilePath, String content) throws IOException {
        Configuration config = new Configuration();
        FileSystem hdfs = FileSystem.get(config);
        FSDataOutputStream os = hdfs.create(new Path(toCreateFilePath));
        os.write(content.getBytes("UTF-8"));
        os.close();
        hdfs.close();
    }

    /*
     * Delete an HDFS file. Note that dst is the full path name.
     */
    public static boolean deleteHDFSFile(String dst) throws IOException {
        Configuration config = new Configuration();
        FileSystem hdfs = FileSystem.get(config);
        Path path = new Path(dst);
        // The second argument is the recursive flag; false suffices for a file.
        boolean isDeleted = hdfs.delete(path, false);
        hdfs.close();
        return isDeleted;
    }

    /*
     * Read the content of an HDFS file. Note that dst is the full path name.
     */
    public static byte[] readHDFSFile(String dst) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Check whether the file exists.
        Path path = new Path(dst);
        if (fs.exists(path)) {
            FSDataInputStream is = fs.open(path);
            // Get the file info to size the buffer.
            FileStatus stat = fs.getFileStatus(path);
            byte[] buffer = new byte[(int) stat.getLen()];
            is.readFully(0, buffer);
            // Alternatively, read in chunks:
            // int length = 0;
            // while ((length = is.read(buffer, 0, 128)) != -1) {
            //     System.out.println(new String(buffer, 0, length));
            // }
            is.close();
            fs.close();
            return buffer;
        } else {
            throw new Exception("the file is not found.");
        }
    }

    /*
     * Create a new directory in HDFS.
     * The dir may look like '/tmp/testdir'.
     */
    public static void mkdir(String dir) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        fs.mkdirs(new Path(dir));
        fs.close();
    }

    /*
     * Delete a directory in HDFS.
     * The dir may look like '/tmp/testdir'.
     */
    public static void deleteDir(String dir) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // The second argument enables recursive deletion of the directory's contents.
        fs.delete(new Path(dir), true);
        fs.close();
    }

    /*
     * List every entry under a directory, e.g. '/tmp/'.
     */
    public static void listAll(String dir) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FileStatus[] stats = fs.listStatus(new Path(dir));
        for (int i = 0; i < stats.length; i++) {
            System.out.println(stats[i].getPath().toString());
        }
        fs.close();
    }
}
```
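Every method above opens and closes its own FileSystem handle, and a thrown exception would leak it. Since FileSystem implements Closeable, the same operations can be written with try-with-resources so the handle is always released. The sketch below is an assumption-laden illustration, not part of the lesson's code: the class name and file paths are placeholders, and with a default Configuration (no fs.defaultFS set) FileSystem.get falls back to the local filesystem, so it even runs without a cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopFSTryWithResources {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // FileSystem implements Closeable, so try-with-resources closes it
        // automatically, even if one of the operations throws.
        try (FileSystem fs = FileSystem.get(conf)) {
            // Create a file and write to it; the stream is also auto-closed.
            Path file = new Path("/tmp/create2.c"); // example path from the lesson
            try (FSDataOutputStream os = fs.create(file)) {
                os.write("hello".getBytes("UTF-8"));
            }
            // List the parent directory, marking directories with 'd'.
            for (FileStatus stat : fs.listStatus(new Path("/tmp/"))) {
                System.out.println((stat.isDirectory() ? "d " : "- ") + stat.getPath());
            }
        }
    }
}
```

When pointed at a real cluster, set fs.defaultFS in the Configuration (or place core-site.xml on the classpath) before calling FileSystem.get.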