Implementing Cloud-Disk Features with the C API on Hadoop 2.6.0

Calling Hadoop 2.6.0's C API to implement cloud-disk-style features (upload, download, delete, rename)

Test environment: CentOS 6.6, hadoop-2.6.0

This walkthrough uses Hadoop's C API (libhdfs) to access HDFS and implement cloud-disk-style upload, download, delete, and rename operations. Other features are left for interested readers to add. Without further ado, let's get started.

First, make sure you can access HDFS through the C API on hadoop-2.6.0. For details, see: http://blog.csdn.net/u013930856/article/details/47660937

Now for the cloud-disk functionality itself. In the main function we connect to the Hadoop server and create the user's own directory:

#include "hdfs.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <fcntl.h>

#define LENGTH 4096    /* I/O buffer size used by upload and download */

/* forward declarations of the handlers defined below */
void HdfsChoseMenu_Function(void);
void HdfsSendFile_Function(hdfsFS fs, char CreatDirName[]);
void HdfsDownFile_Function(hdfsFS fs, char CreatDirName[]);
void HdfsDelete_Function(hdfsFS fs);
void HdfsRename_Function(hdfsFS fs);
void HdfsQuit_Function(hdfsFS fs);

int main(int argc, char **argv)
{
    char CreatDirName[30];    /* directory to create on HDFS */
    int Create;

    hdfsFS fs = hdfsConnect("10.25.100.130", 9000);    /* connect to the Hadoop server */
    if (NULL == fs)
    {
        printf("Failed to connect to HDFS!\n");
        exit(1);
    }

    printf("Enter the path of the directory to create:\n");
    scanf("%29s", CreatDirName);

    Create = hdfsCreateDirectory(fs, CreatDirName);
    printf("Create = %d\n", Create);
    if (Create == -1)
    {
        printf("Failed to create the directory!\n");
        exit(1);
    }

    while (1)
    {
        int num;
        HdfsChoseMenu_Function();    /* show the option menu */
        scanf("%d", &num);
        switch (num)
        {
            case 1: HdfsSendFile_Function(fs, CreatDirName);    /* upload a file to HDFS */
                break;
            case 2: HdfsDownFile_Function(fs, CreatDirName);    /* download a file from HDFS */
                break;
            case 3: HdfsDelete_Function(fs);                    /* delete a file */
                break;
            case 4: HdfsRename_Function(fs);                    /* rename a file */
                break;
            case 0: HdfsQuit_Function(fs);                      /* disconnect and exit */
                break;
            default: printf("Invalid choice, please try again!\n");
        }
    }
}

Uploading a file to the server:

void HdfsSendFile_Function(hdfsFS fs, char CreatDirName[])    /* upload a local file to HDFS */
{
    char SendFileName[30];    /* local file name */
    char SendFilePath[50];    /* destination path on HDFS */
    char buffer[LENGTH];      /* I/O buffer */

    printf("Enter the name of the file to upload: ");
    scanf("%29s", SendFileName);
    sprintf(SendFilePath, "%s/%s", CreatDirName, SendFileName);

    hdfsFile OpenFileName = hdfsOpenFile(fs, SendFilePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if (NULL == OpenFileName)
    {
        printf("Failed to open %s on HDFS for writing\n", SendFilePath);
        return;
    }

    FILE *fp = fopen(SendFileName, "rb");
    if (NULL == fp)
    {
        printf("File:%s Not Found\n", SendFileName);
        hdfsCloseFile(fs, OpenFileName);
    }
    else
    {
        bzero(buffer, LENGTH);
        tSize length = 0;
        while ((length = fread(buffer, sizeof(char), LENGTH, fp)) > 0)
        {
            printf("length = %d\n", length);
            tSize num_written_bytes = hdfsWrite(fs, OpenFileName, buffer, length);
            printf("num_written_bytes = %d\n", num_written_bytes);
            if (hdfsFlush(fs, OpenFileName))
            {
                fprintf(stderr, "Failed to 'flush' %s\n", SendFilePath);
                exit(-1);
            }
            bzero(buffer, LENGTH);
        }
        fclose(fp);
        hdfsCloseFile(fs, OpenFileName);
        printf("\n>>>File uploaded successfully!!!\n\n");
    }
}

Downloading a file:

void HdfsDownFile_Function(hdfsFS fs, char CreatDirName[])    /* download a file from HDFS */
{
    char DownFileName[30];    /* file name to download */
    char DownFilePath[50];    /* full path on HDFS */
    char buffer[LENGTH];      /* I/O buffer */

    printf("Enter the name of the file to download: ");
    scanf("%29s", DownFileName);
    sprintf(DownFilePath, "%s/%s", CreatDirName, DownFileName);

    hdfsFile DownOpenFile = hdfsOpenFile(fs, DownFilePath, O_RDONLY, 0, 0, 0);
    if (NULL == DownOpenFile)
    {
        printf("Failed to open the file!\n");
        exit(1);
    }
    else
    {
        FILE *fp = fopen(DownFileName, "wb");
        if (NULL == fp)
        {
            printf("File:\t%s Can Not Open To Write\n", DownFileName);
            exit(1);
        }
        else
        {
            tSize D_length = 0;
            while ((D_length = hdfsRead(fs, DownOpenFile, buffer, LENGTH)) > 0)
            {
                printf("D_length = %d\n", D_length);
                if (fwrite(buffer, sizeof(char), D_length, fp) < (size_t)D_length)
                {
                    printf("File:\t%s Write Failed\n", DownFileName);
                    break;
                }
                bzero(buffer, LENGTH);
            }
            fclose(fp);
            hdfsCloseFile(fs, DownOpenFile);
            printf("\n>>>File downloaded successfully!!!\n\n");
        }
    }
}
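If byte-level control over the stream is not needed, libhdfs also exposes `hdfsCopy`, which copies between two filesystems in a single call; connecting with a NULL host yields the local filesystem, so upload and download each collapse to one line. A sketch (untested here; it assumes the same cluster address and example paths as above):

```c
#include "hdfs.h"
#include <stdio.h>

int main(void)
{
    hdfsFS local = hdfsConnect(NULL, 0);                /* local filesystem */
    hdfsFS fs    = hdfsConnect("10.25.100.130", 9000);  /* HDFS */

    /* Upload: local -> HDFS. Download is the same call with the
     * source and destination swapped. Returns 0 on success, -1 on error. */
    if (hdfsCopy(local, "1.jpg", fs, "/xiaodai/1.jpg") != 0)
        fprintf(stderr, "copy failed\n");

    hdfsDisconnect(fs);
    hdfsDisconnect(local);
    return 0;
}
```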

Deleting a file:

void HdfsDelete_Function(hdfsFS fs)    /* delete a file from HDFS */
{
    int num_Delete;
    char delete_HdfsFilePath[50];

    printf("Enter the path and name of the file to delete: ");
    scanf("%49s", delete_HdfsFilePath);

    num_Delete = hdfsDelete(fs, delete_HdfsFilePath, 0);    /* 0 = non-recursive; returns 0 on success, -1 on error */
    printf("num_Delete = %d\n", num_Delete);
}
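`hdfsDelete` returns -1 both when the delete fails and when the path never existed; checking first with `hdfsExists` (which returns 0 when the path exists) lets the program report a clearer message. A sketch (untested here; same cluster address and an example path assumed):

```c
#include "hdfs.h"
#include <stdio.h>

int main(void)
{
    hdfsFS fs = hdfsConnect("10.25.100.130", 9000);
    const char *path = "/xiaodai/1.jpg";

    if (hdfsExists(fs, path) != 0)
        printf("%s does not exist\n", path);
    else if (hdfsDelete(fs, path, 0) == 0)    /* 0 = do not recurse */
        printf("%s deleted\n", path);
    else
        printf("failed to delete %s\n", path);

    hdfsDisconnect(fs);
    return 0;
}
```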

Renaming a file:

void HdfsRename_Function(hdfsFS fs)    /* rename a file on HDFS */
{
    int num_Rename;
    char HdfsFilePath[30] = {0};
    char oldHdfsFileName[30] = {0};
    char newHdfsFileName[30] = {0};
    char oldHdfsFilePath[50] = {0};
    char newHdfsFilePath[50] = {0};

    printf("Enter the path and name of the file to rename: ");    /* separated by a space, e.g.: /xiaodai 1.jpg */
    scanf("%29s%29s", HdfsFilePath, oldHdfsFileName);
    printf("Enter the new file name: ");
    scanf("%29s", newHdfsFileName);

    sprintf(oldHdfsFilePath, "%s/%s", HdfsFilePath, oldHdfsFileName);
    sprintf(newHdfsFilePath, "%s/%s", HdfsFilePath, newHdfsFileName);

    num_Rename = hdfsRename(fs, oldHdfsFilePath, newHdfsFilePath);    /* returns 0 on success, -1 on error */
    printf("num_Rename = %d\n", num_Rename);
}
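A natural next feature for a cloud disk is listing a directory, which libhdfs supports through `hdfsListDirectory` and `hdfsFreeFileInfo`. A sketch (untested here; same cluster address and an example directory assumed):

```c
#include "hdfs.h"
#include <stdio.h>

int main(void)
{
    hdfsFS fs = hdfsConnect("10.25.100.130", 9000);
    int numEntries = 0;

    /* List /xiaodai, printing a 'd'/'-' kind marker, size, and name per entry. */
    hdfsFileInfo *info = hdfsListDirectory(fs, "/xiaodai", &numEntries);
    for (int i = 0; info != NULL && i < numEntries; i++)
        printf("%c %10lld  %s\n",
               info[i].mKind == kObjectKindDirectory ? 'd' : '-',
               (long long)info[i].mSize, info[i].mName);

    hdfsFreeFileInfo(info, numEntries);    /* release the returned array */
    hdfsDisconnect(fs);
    return 0;
}
```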

This is only a simple implementation of the basic features; developers who want more functionality are encouraged to keep building on it.

Only the core function code is shown here; the complete code and an operating guide are available at:

http://download.csdn.net/detail/u013930856/9012061

Copyright notice: this is the author's original article and may not be reposted without permission.

Date: 2024-08-05 07:06:23
